All AFNI program -help files

This page auto-generated on Sat Mar 14 05:55:00 EST 2009



AFNI program: 1dFlagMotion
Usage: 1dFlagMotion [options] MotionParamsFile 

      Produces a list of time points that have more than a
   user-specified amount of motion relative to the previous
   time point.
 Options:
  -MaxTrans   maximum translation allowed in any direction
                [default = 1.5 mm]
  -MaxRot     maximum rotation allowed in any direction
                [default = 1.25 degrees]
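
 Example (a sketch, not from the original help; 'motion.1D' is a
   hypothetical motion-parameter file, e.g. from 3dvolreg, and the
   flagged time points are assumed to be written to stdout):

    1dFlagMotion -MaxTrans 1.0 -MaxRot 1.0 motion.1D > flagged.1D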

++ Compile date = Mar 13 2009




AFNI program: 1dMarry
Usage: 1dMarry [options] file1 file2 ...

  Joins together 2 (or more) ragged-right .1D files, for use with
    3dDeconvolve -stim_times_AM2.
 **_OR_**
  Breaks up 1 married file into 2 (or more) single-valued files.

OPTIONS:
=======
 -sep abc  == Use the first character (e.g., 'a') as the separator
              between values 1 and 2, the second character (e.g., 'b')
              as the separator between values 2 and 3, etc.
            * These characters CANNOT be a blank, a tab, a digit,
              or a non-printable control character!
            * Default separator string is '*,' which will result
              in output similar to '3*4,5,6'

 -divorce  == Instead of marrying the files, assume that file1
              is already a married file: split time*value*value... tuples
              into separate files, and name them in the pattern
              'file2_A.1D' 'file2_B.1D' et cetera.

If not divorcing, the 'married' file is written to stdout, and
probably should be captured using a redirection such as '>'.
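
For example (a sketch with hypothetical filenames, where times.1D holds
stimulus times and amps.1D the matching amplitudes):
   1dMarry times.1D amps.1D > married.1D
and the reverse operation
   1dMarry -divorce married.1D single
would split married.1D back into 'single_A.1D' and 'single_B.1D'.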

NOTES:
=====
* You cannot use column [...] or row {...} selectors on
    ragged-right .1D files, so don't even think about trying!
* The maximum number of values that can be married is 26.
    (No polygamy or polyandry jokes here, please.)
* For debugging purposes, with '-divorce', if 'file2' is '-',
    then all the divorcees are written directly to stdout.

-- RWCox -- written hastily in March 2007 -- hope I don't repent
         -- modified to deal with multiple marriages -- December 2008

++ Compile date = Mar 13 2009




AFNI program: 1dSEM
Usage: 1dSEM [options] -theta 1dfile -C 1dfile -psi 1dfile -DF nn.n
Computes path coefficients for connection matrix in Structural Equation
    Modeling (SEM)
 The program takes as input :
    1. A 1D file with an initial representation of the connection matrix
       with a 1 for each interaction component to be modeled and a 0 if
       it is not to be modeled. This matrix should be PxP (P rows and P columns).
    2. A 1D file of the C, correlation matrix, also with dimensions PxP
    3. A 1D file of the residual variance vector, psi
    4. The degrees of freedom, DF

    Output is printed to the terminal and may be redirected to a 1D file.
    The path coefficient matrix is printed for each matrix computed.
 Options:
   -theta file.1D = connection matrix 1D file with initial representation
   -C file.1D = correlation matrix 1D file
   -psi file.1D = residual variance vector 1D file
   -DF nn.n = degrees of freedom
   -max_iter n = maximum number of iterations for convergence (Default=10000).
    Values can range from 1 to 10000.
   -nrand n = number of random trials before optimization (Default = 100)
   -limits m.mmm n.nnn = lower and upper limits for connection coefficients
    (Default = -1.0 to 1.0)
   -calccost = no modeling at all; just calculate the cost function for the
    coefficients as given in the theta file. This may be useful for verifying
    published results.
   -verbose nnnnn = print info every nnnnn steps

 Model search options:
 Look for best model. The initial connection matrix file must follow these
   specifications. Each entry must be 0 for entries excluded from the model,
   1 for each required entry in the minimum model, 2 for each possible path
   to try.
   -tree_growth or 
   -model_search = search for the best model by growing the best model for
    n-1 coefficients by one additional coefficient. If the initial theta
    matrix has no required coefficients, the initial model will grow from
    the best model for a single coefficient.
   -max_paths n = maximum number of paths to include (Default = 1000)
   -stop_cost n.nnn = stop searching for paths when cost function is below
    this value (Default = 0.1)
   -forest_growth or 
   -grow_all = search over all possible models by comparing models at
    incrementally increasing numbers of path coefficients. Because this
    algorithm searches all possible combinations, it can be exceptionally
    slow as the number of coefficients gets larger, for example at n>=9.
   -leafpicker = relevant only for forest growth searches. Expands the search
    optimization to look at multiple paths to avoid local minima. This method
    is the default technique for tree growth and standard coefficient searches.
 This program uses a Powell optimization algorithm to find the connection
   coefficients for any particular model.

 References:
   Powell, MJD, "The NEWUOA software for unconstrained optimization without
    derivatives", Technical report DAMTP 2004/NA08, Cambridge University
    Numerical Analysis Group -- http://www.damtp.cam.ac.uk/user/na/reports.html

   Bullmore, ET, Horwitz, B, Honey, GD, Brammer, MJ, Williams, SCR, Sharma, T,
    How Good is Good Enough in Path Analysis of fMRI Data?
    NeuroImage 11, 289-301 (2000)

   Stein, JL, et al., A validated network of effective amygdala connectivity,
    NeuroImage (2007), doi:10.1016/j.neuroimage.2007.03.022

 The initial representation in the theta file is non-zero for each element
   to be modeled. The 1D file can have leading columns for labels that will
   be used in the output. Label rows must be commented with the # symbol.
 If using any of the model search options, the theta file should have a '1' for
   each required coefficient, '0' for each excluded coefficient, '2' for an
   optional coefficient. Excluded coefficients are not modeled. Required
   coefficients are included in every computed model.

 N.B. - Connection directionality in the path connection matrices is from 
   column to row of the output connection coefficient matrices.

   Be very careful when interpreting those path coefficients.
   First of all, they are not correlation coefficients. Suppose we have a
   network with a path connecting from region A to region B. The meaning
   of the coefficient theta (e.g., 0.81) is this: if region A increases by
   one standard deviation from its mean, region B would be expected to
   increase by 0.81 of its own standard deviation from its own mean, while
   holding all other relevant regional connections constant. Similarly, with
   a path coefficient of -0.16, when region A increases by one standard
   deviation from its mean, region B would be expected to decrease by 0.16
   of its own standard deviation from its own mean.

   So theoretically speaking, the range of the path coefficients can be
   anything, but most of the time they range from -1 to 1. To save running
   time, the default values for -limits are set to -1 and 1; if the result
   hits a boundary, widen the limits and re-run the analysis.

 Examples:
   To confirm a specific model:
    1dSEM -theta inittheta.1D -C SEMCorr.1D -psi SEMvar.1D -DF 30
   To search models by growing from the best single coefficient model
     up to 12 coefficients
     1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
     -limits -2 2 -nrand 100 -DF 30 -model_search -max_paths 12
   To search all possible models up to 8 coefficients:
     1dSEM -theta testthetas_ms.1D -C testcorr.1D -psi testpsi.1D \
     -nrand 10 -DF 30 -stop_cost 0.1 -grow_all -max_paths 8 |& tee testgrow.txt

   For more information, see http://afni.nimh.nih.gov/sscc/gangc/PathAna.html




AFNI program: 1dTsort
Usage: 1dTsort [options] file.1D
Sorts each column of the input 1D file and writes result to stdout.

Options
-------
 -inc     = sort into increasing order [default]
 -dec     = sort into decreasing order
 -flip    = transpose the file before OUTPUT
            * the INPUT can be transposed using file.1D\'
            * thus, to sort each ROW, do something like
               1dTsort -flip file.1D\' > sfile.1D

N.B.: Data will be read from standard input if the filename IS stdin,
      and will also be row/column transposed if the filename is stdin\'
      For example:
        1deval -num 100 -expr 'uran(1)' | 1dTsort stdin | 1dplot stdin


++ Compile date = Mar 13 2009




AFNI program: 1dUpsample
Program 1dUpsample:
Upsamples a 1D time series (along the column direction)
to a finer time grid.
Usage:  1dUpsample [options] n fred.1D > ethel.1D

Where 'n' is the upsample factor (integer from 2..32)

NOTES:
------
* Interpolation is done with 7th order polynomials.
* The only option is '-1' or '-one', to use 1st order
   polynomials instead (i.e., linear interpolation).
* Output is written to stdout.
* If you want to interpolate along the row direction,
   transpose before input, then transpose the output.
* Example:
   1dUpsample -1 3 '1D: 4 5 6' | 1dplot -stdin
* If the input has M time points, the output will
   have n*M time points.  The last n-1 of them
   will be past the end of the original time series.
* This program is a quick hack for Gang Chen.




AFNI program: 1dcat
Usage: 1dcat [-form option] a.1D b.1D ...
  where each file a.1D, b.1D, etc. is a 1D file.
  In the simplest form, a 1D file is an ASCII file of numbers
  arranged in rows and columns.

1dcat takes as input one or more 1D files and writes to stdout a 1D file
containing the side-by-side (column-wise) concatenation of all, or a
selected subset, of the columns from the input files.
All input files must have the same number of rows.
For help on the -form option, see ccalc's help for the option of the same name.
Example:
  Input file 1:
   1
   2
   3
   4
  Input file 2:
   5
   6
   7
   8

  1dcat data1.1D data2.1D > catout.1D
  Output file: 
   1 5
   2 6
   3 7
   4 8

For generic 1D file usage help, see '1dplot -help'

++ Compile date = Mar 13 2009




AFNI program: 1ddot
Usage: 1ddot [options] 1Dfile 1Dfile ...
- Prints out correlation matrix of the 1D files and
  their inverse correlation matrix.
- Output appears on stdout.

Options:
 -one  =  Make 1st vector be all 1's.
 -dem  =  Remove mean from all vectors (conflicts with '-one')
 -cov  =  Compute with covariance matrix instead of correlation
 -inn  =  Compute with inner product matrix instead
 -terse = Output only the correlation or covariance matrix,
          without any of the garnish.
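
Example (a sketch with hypothetical input files):
  1ddot -dem -terse fred.1D ethel.1D > corrmat.1D
This removes each column's mean, then writes just the correlation
matrix to corrmat.1D.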

++ Compile date = Mar 13 2009




AFNI program: 1deval
Usage: 1deval [options] -expr 'expression'
Evaluates an expression that may include columns of data
from one or more text files and writes the result to stdout.

* Any single letter from a-z can be used as the independent
   variable in the expression. Only a single column can be
   used for each variable.
* Unless specified using the '[]' notation (cf. 1dplot -help),
   only the first column of an input 1D file is used, and other
   columns are ignored.
* Only one column of output will be produced -- if you want to
   calculate a multi-column output file, you'll have to run 1deval
   separately for each column, and then glue the results together
   using program 1dcat.  [However, see the 1dcat example combined
   with the '-1D:' option, infra.]

Options:
--------
  -del d   = Use 'd' as the step for a single undetermined variable
               in the expression [default = 1.0]
  -start z = Start at value 'z' for a single undetermined variable
               in the expression [default = 0.0]
  -num n   = Evaluate the expression 'n' times.
               If -num is not used, then the length of an
               input time series is used.  If there are no
               time series input, then -num is required.
  -a q.1D  = Read time series file q.1D and assign it
               to the symbol 'a' (as in 3dcalc).
  -index i.1D = Read index column from file i.1D and
                 write it out as 1st column of output.
                 This option is useful when working with
                 surface data.
  -1D:     = Write output in the form of a single '1D:'
               string suitable for input on the command
               line of another program.
               [-1D: is incompatible with the -index option!]
Examples:
---------
 1deval -expr 'sin(2*PI*t)' -del 0.01 -num 101 > sin.1D
 1deval -expr 'a*b' -a fred.1D -b ethel.1D > ab.1D
 1deval -start 10 -num 90 -expr 'fift_p2t(0.001,n,2*n)' | 1dplot -xzero 10 -stdin
 1deval -x '1D: 1 4 9 16' -expr 'sqrt(x)'

Examples using '-1D:' as the output format:
-------------------------------------------
 1dplot `1deval -1D: -num 71 -expr 'cos(t/2)*exp(-t/19)'`
 1dcat `1deval -1D: -num 100 -expr 'cos(t/5)'` \
       `1deval -1D: -num 100 -expr 'sin(t/5)'` > sincos.1D
 3dTfitter -quiet -prefix -                                     \
           -RHS `1deval -1D: -num 30 -expr 'cos(t)*exp(-t/7)'`  \
           -LHS `1deval -1D: -num 30 -expr 'cos(t)'`            \
                `1deval -1D: -num 30 -expr 'sin(t)'`              

Notes:
------
* Program 3dcalc operates on 3D and 3D+time datasets in a similar way.
* Program ccalc can be used to evaluate a single numeric expression.
* If I had any sense, THIS program would have been called 1dcalc!
* For generic 1D file usage help, see '1dplot -help'
* For help with expression format, see '3dcalc -help', or type
   'help' when using ccalc in interactive mode.
* 1deval only produces a single column of output.  3dcalc can be
   tricked into doing multi-column 1D format output by treating
   a 1D file as a 3D dataset and auto-transposing it with \'
   For example:
     3dcalc -a '1D: 3 4 5 | 1 2 3'\' -expr 'cbrt(a)' -prefix -
   The input has 2 'columns' and so does the output.
   Note that the 1D 'file' is transposed on input to 3dcalc!
   This is essential, or 3dcalc will not treat the 1D file as
   a dataset, and the results will be very different.

-- RW Cox --

++ Compile date = Mar 13 2009




AFNI program: 1dfft
Usage: 1dfft [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, containing the absolute
value of the FFT of the input columns.  The output length will be
1+(FFT length)/2.

Options:
  -ignore sss = Skip the first 'sss' lines in the input file.
                [default = no skipping]
  -use uuu    = Use only 'uuu' lines of the input file.
                [default = use them all, Frank]
  -nfft nnn   = Set FFT length to 'nnn'.
                [default = length of data (# of lines used)]
  -tocx       = Save Re and Im parts of transform in 2 columns.
  -fromcx     = Convert 2 column complex input into 1 column
                  real output.
  -hilbert    = When -fromcx is used, the inverse FFT will
                  do the Hilbert transform instead.
  -nodetrend  = Skip the detrending of the input.

Nota Bene:
 * Each input time series has any quadratic trend of the
     form 'a+b*t+c*t*t' removed before the FFT, where 't'
     is the line number.
 * The FFT length will be a power-of-2 times at most one
     factor of 3 and one factor of 5.  The smallest such
     length >= the specified FFT length will be used.
 * If the FFT length is longer than the file length, the
     data is zero-padded to make up the difference.
 * Do NOT call the output of this program the Power Spectrum!
     That is something else entirely.
 * If 'outfile' is '-', the output appears on stdout.
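
Example (a sketch; fred.1D is a hypothetical time series file):
  1dfft fred.1D - | 1dplot -stdin
This writes the FFT magnitudes to stdout and pipes them into 1dplot.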

++ Compile date = Mar 13 2009




AFNI program: 1dgrayplot
Usage: 1dgrayplot [options] tsfile
Graphs the columns of a *.1D type time series file to the screen,
sort of like 1dplot, but in grayscale.

Options:
 -install   = Install a new X11 colormap (for X11 PseudoColor)
 -ignore nn = Skip first 'nn' rows in the input file
                [default = 0]
 -flip      = Plot x and y axes interchanged.
                [default: data columns plotted DOWN the screen]
 -sep       = Separate scales for each column.
 -use mm    = Plot 'mm' points
                [default: all of them]
 -ps        = Don't draw plot in a window; instead, write it
              to stdout in PostScript format.
              N.B.: If you view this result in 'gv', you should
                    turn 'anti-alias' off, and switch to
                    landscape mode.
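
Example (a sketch; motion.1D is a hypothetical input file):
  1dgrayplot -sep -ps motion.1D > motion.ps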

++ Compile date = Mar 13 2009




AFNI program: 1dmatcalc
Usage: 1dmatcalc [-verb] expression

Evaluate a space delimited RPN matrix-valued expression:

 * The operations are on a stack, each element of which is a
     real-valued matrix.
   * N.B.: This is a computer-science stack of separate matrices.
           If you want to join two matrices in separate files
           into one 'stacked' matrix, then you must use program
           1dcat to join them as columns, or the system program
           cat to join them as rows.
 * You can also save matrices by name in an internal buffer
     using the '=NAME' operation and then retrieve them later
     using just the same NAME.
 * You can read and write matrices from files stored in ASCII
     columns (.1D format) using the &read and &write operations.
 * The following 5 operations, input as a single string,
     '&read(V.1D) &read(U.1D) &transp * &write(VUT.1D)'
   - reads matrices V and U from disk (separately),
   - transposes U (on top of the stack) into U',
   - multiplies V and U' (the two matrices on top of the stack),
   - and writes matrix VU' out (the matrix left on the stack by '*').
 * Calculations are carried out in single precision ('float').
 * Operations mostly contain characters such as '&' and '*' that
   are special to Unix shells, so you'll probably need to put
   the arguments to this program in 'single quotes'.

 STACK OPERATIONS
 -----------------
 number     == push scalar value (1x1 matrix) on stack;
                 a number starts with a digit or a minus sign
 =NAME      == save matrix on top of stack as 'NAME'
 NAME       == push NAME-ed matrix onto top of stack;
                 names start with an alphabetic character
 &clear     == erase all named matrices (to save memory)
 &read(FF)  == read ASCII (.1D) file onto top of stack from file 'FF'
 &write(FF) == write top matrix to ASCII file to file 'FF';
                 if 'FF' == '-', writes to stdout
 &transp    == replace top matrix with its transpose
 &ident(N)  == push square identity matrix of order N onto stack
                  N is a fixed integer, OR
                 &R to indicate the row dimension of the
                    current top matrix, OR
                 &C to indicate the column dimension of the
                    current top matrix, OR
                 =X to indicate the (1,1) element of the
                    matrix named X
 &Psinv     == replace top matrix with its pseudo-inverse
                 [computed via SVD, not via inv(A'*A)*A']
 &Sqrt      == replace top matrix with its square root
                 [computed via Denman & Beavers iteration]
                N.B.: not all real matrices have real square
                  roots, and &Sqrt will fail in such cases
                N.B.: the matrix must be square!
 &Pproj     == replace top matrix with the projection onto
                 its column space; Input=A; Output = A*Psinv(A)
               N.B.: result P is symmetric and P*P=P
 &Qproj     == replace top matrix with the projection onto
                 the orthogonal complement of its column space
                 Input=A; Output=I-Pproj(A)
 *          == replace top 2 matrices with their product;
                   stack = [ ... C A B ] (where B = top) goes to
                   stack = [ ... C AB ]
                 if either of the top matrices is a 1x1 scalar,
                 then the result is the scalar multiplication of
                 the other matrix; otherwise, matrices must conform
 +          == replace top 2 matrices with sum A+B
 -          == replace top 2 matrices with difference A-B
 &dup       == push duplicate of top matrix onto stack
 &pop       == discard top matrix
 &swap      == swap top two matrices (A <-> B)
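
Example (a sketch using only the operations above; X.1D is a
hypothetical matrix file):
  1dmatcalc '&read(X.1D) &Pproj &write(-)'
This reads X, replaces it with the projection onto its column space,
and writes the resulting matrix to stdout.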




AFNI program: 1dnorm
Usage: 1dnorm [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with each column being
L_2 normalized (sum of squares = 1).
* If 'infile'  is '-', it will be read from stdin.
* If 'outfile' is '-', it will be written to stdout.

Options:
--------
 -norm1  = Normalize so sum of absolute values is 1 (L_1 norm)
 -normx  = Normalize so max absolute value is 1 (L_infinity norm)

 -demean = Subtract each column's mean before normalizing
 -demed  = Subtract each column's median before normalizing
            [-demean and -demed are mutually exclusive!]
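
Example (a sketch with hypothetical filenames):
  1dnorm -demean fred.1D ethel.1D
Each column of ethel.1D then has zero mean and sum of squares = 1.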

++ Compile date = Mar 13 2009




AFNI program: 1dplot
Usage: 1dplot [options] tsfile ...
Graphs the columns of a *.1D time series file to the X11 screen.

Options:
 -install   = Install a new X11 colormap.
 -sep       = Plot each column in a separate sub-graph.
 -one       = Plot all columns together in one big graph.
                [default = -sep]
 -sepscl    = Plot each column in a separate sub-graph
              and allow each sub-graph to have a different
              y-scale.  -sepscl is meaningless with -one!

           ** The '-norm' options below can be useful for
               plotting data with different value ranges on
               top of each other using '-one':
 -norm2     = Independently scale each time series plotted to
              have L_2 norm = 1 (sum of squares).
 -normx     = Independently scale each time series plotted to
              have max absolute value = 1 (L_infinity norm).
 -norm1     = Independently scale each time series plotted to
               have sum of absolute values = 1 (L_1 norm).

 -x  X.1D   = Use for X axis the data in X.1D.
              Note that X.1D should have one column
              of the same length as the columns in tsfile. 
 N.B.: -x will override -dx and -xzero; -xaxis still has effects
 -xl10 X.1D = Use log10(X.1D) as the X axis.

 -dx xx     = Spacing between points on the x-axis is 'xx'
                [default = 1]
 -xzero zz  = Initial x coordinate is 'zz' [default = 0]
 -nopush    = Don't 'push' axes ranges outwards.
 -ignore nn = Skip first 'nn' rows in the input file
                [default = 0]
 -use mm    = Plot 'mm' points [default = all of them]
 -xlabel aa = Put string 'aa' below the x-axis
                [default = no axis label]
 -ylabel aa = Put string 'aa' to the left of the y-axis
                [default = no axis label]
 -plabel pp = Put string 'pp' atop the plot.
               Some characters, such as '_', have
               special formatting effects. You
               can escape them with '\'. For example:
        echo 2 4.5 -1 | 1dplot -plabel 'test_underscore' -stdin
              versus
        echo 2 4.5 -1 | 1dplot -plabel 'test\_underscore' -stdin
 -title pp = Same as -plabel, but only works with -ps/-png/-jpg options.

 -stdin     = Don't read from tsfile; instead, read from
              stdin and plot it. You cannot combine input
              from stdin and tsfile(s).  If you want to do so,
              use program 1dcat first.

 -ps        = Don't draw plot in a window; instead, write it
              to stdout in PostScript format.
             * If you view the result in 'gv', you should turn
               'anti-alias' off, and switch to landscape mode.
             * You can use the 'gs' program to convert PostScript
               to other formats; for example, a .bmp file:
            1dplot -ps ~/data/verbal/cosall.1D | 
             gs -r100 -sOutputFile=fred.bmp -sDEVICE=bmp256 -q -dBATCH -

 -jpg fname  } = Render plot to an image and save to a file named
 -jpeg fname } = 'fname', in JPEG mode or in PNG mode.
 -png fname  } = The default image width is 1024 pixels; to change
                 this value to 2000 pixels (say), do
                   setenv AFNI_1DPLOT_IMSIZE 2000
                 before running 1dplot.  Widths over 2000 may start
                 to look odd, and will run more slowly.
               * PNG files will be smaller than JPEG, and are
                 compressed without loss.
               * PNG output requires that the netpbm program
                 pnmtopng be installed somewhere in your PATH.

 -xaxis b:t:n:m    = Set the x-axis to run from value 'b' to
                     value 't', with 'n' major divisions and
                     'm' minor tic marks per major division.
                     For example:
                       -xaxis 0:100:5:20
                     Setting 'n' to 0 means no tic marks or labels.

 -yaxis b:t:n:m    = Similar to above, for the y-axis.  These
                     options override the normal autoscaling
                     of their respective axes.

 -ynames aa bb ... = Use the strings 'aa', 'bb', etc., as
                     labels to the right of the graphs,
                     corresponding to each input column.
                     These strings CANNOT start with the
                     '-' character.
               N.B.: Each separate string after '-ynames'
                     is taken to be a new label, until the
                     end of the command line or until some
                      string starts with a '-'.  In particular,
                      this means you CANNOT do something like
                       1dplot -ynames a b c file.1D
                     since the input filename 'file.1D' will
                     be used as a label string, not a filename.
                     Instead, you must put another option between
                     the end of the '-ynames' label list, OR you
                     can put a single '-' at the end of the label
                     list to signal its end:
                       1dplot -ynames a b c - file.1D

 -volreg           = Makes the 'ynames' be the same as the
                     6 labels used in plug_volreg for
                     Roll, Pitch, Yaw, I-S, R-L, and A-P
                     movements, in that order.

 -Dname=val        = Set environment variable 'name' to 'val'
                     for this run of the program only:
 1dplot -DAFNI_1DPLOT_THIK=0.01 -DAFNI_1DPLOT_COLOR_01=blue '1D:3 4 5 3 1 0'

You may also select a subset of columns to display using
a tsfile specification like 'fred.1D[0,3,5]', indicating
that columns #0, #3, and #5 will be the only ones plotted.
For more details on this selection scheme, see the output
of '3dcalc -help'.

Example: graphing a 'dfile' output by 3dvolreg, when TR=5:
   1dplot -volreg -dx 5 -xlabel Time 'dfile[1..6]'

You can also input more than one tsfile, in which case the files
will all be plotted.  However, if the files have different column
lengths, the shortest one will rule.

The colors for the line graphs cycle between black, red, green, and
blue.  You can alter these colors by setting Unix environment
variables of the form AFNI_1DPLOT_COLOR_xx -- cf. README.environment.
You can alter the thickness of the lines by setting the variable
AFNI_1DPLOT_THIK to a value between 0.00 and 0.05 -- the units are
fractions of the page size.

TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example:   3 5 7
               2 4 6
               0 3 3
               7 2 9
This example has 4 rows and 3 columns.  Each column is considered as
a timeseries in AFNI.  The convention is to store this type of data
in a filename ending in '.1D'.

** COLUMN SELECTION WITH [] **
When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
  'fred.1D[5]'            ==> use only column #5
  'fred.1D[5,9,17]'       ==> use columns #5, #9, and #17
  'fred.1D[5..8]'         ==> use columns #5, #6, #7, and #8
  'fred.1D[5..13(2)]'     ==> use columns #5, #7, #9, #11, and #13
Column indices start at 0.  You can use the character '$'
to indicate the last column in a 1D file; for example, you
can select every third column in a 1D file by using the selection list
  'fred.1D[0..$(3)]'      ==> use columns #0, #3, #6, #9, ....

** ROW SELECTION WITH {} **
Similarly, you select a subset of the rows using the '{...}' notation:
  'fred.1D{0..$(2)}'      ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
  'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....

** DIRECT INPUT OF DATA ON THE COMMAND LINE WITH 1D: **
You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
  '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float.  For
example
   -a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35 numbers,
alternating in blocks between value 0 and value 1.
 * Spaces or commas can be used to separate values.
 * A '|' character can be used to start a new input "line":
   Try 1dplot '1D: 3 4 3 5 | 3 5 4 3'

** TRANSPOSITION WITH \' **
Finally, you can force most AFNI programs to transpose a 1D file on
input by appending a single ' character at the end of the filename.
N.B.: Since the ' character is also special to the shell, you'll
      probably have to put a \ character before it. Examples:
       1dplot '1D: 3 2 3 4 | 2 3 4 3'   and
       1dplot '1D: 3 2 3 4 | 2 3 4 3'\'
When you have reached this level of understanding, you are ready to
take the AFNI Jedi Master test.  I won't insult you by telling you
where to find this examination.

++ Compile date = Mar 13 2009




AFNI program: 1dsum
Usage: 1dsum [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is an ASCII file of numbers arranged
in rows and columns. The sum of each column is written to stdout.

Options:
  -ignore nn = skip the first nn rows of each file
  -use    mm = use only mm rows from each file
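
Example (a sketch; fred.1D is a hypothetical input file):
  1dsum -ignore 2 fred.1D
This prints the sum of each column of fred.1D, skipping the first 2 rows.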

++ Compile date = Mar 13 2009




AFNI program: 1dsvd
Usage: 1dsvd [options] 1Dfile 1Dfile ...
- Computes SVD of the matrix formed by the 1D file(s).
- Output appears on stdout; to save it, use '>' redirection.

OPTIONS:
 -one    = Make 1st vector be all 1's.
 -vmean  = Remove mean from each vector (can't be used with -one).
 -vnorm  = Make L2-norm of each vector = 1 before SVD.
           * The above 2 options mirror those in 3dpc.
 -cond   = Only print condition number (ratio of extremes)
 -sing   = Only print singular values
 -sort   = Sort singular values (descending) [the default]
 -nosort = Don't bother to sort the singular values
 -asort  = Sort singular values (ascending)
 -1Dleft = Only output left eigenvectors, in a .1D format
           This might be useful for reducing the number of
           columns in a design matrix.  The singular values
           are printed at the top of each vector column,
           as a '#...' comment line.
 -nev n  = If -1Dleft is used, '-nev' specifies only to output
           the first 'n' eigenvectors, rather than all of them.
EXAMPLE:
 1dsvd -vmean -vnorm -1Dleft fred.1D'[1..6]' | 1dplot -stdin
NOTES:
* Call the input n X m matrix [A] (n rows, m columns).  The SVD
  is the factorization [A] = [U] [S] [V]' ('=transpose), where
  - [U] is an n x m matrix (whose columns are the 'Left vectors')
  - [S] is a diagonal m x m matrix (the 'singular values')
  - [V] is an m x m matrix (whose columns are the 'Right vectors')
* The default output of the program is
  - An echo of the input [A]
  - The [U] matrix, each column headed by its singular value
  - The [V] matrix, each column headed by its singular value
    (please note that [V] is output, not [V]')
  - The pseudo-inverse of [A]
* This program was written simply for some testing purposes,
  but is distributed with AFNI because it might be useful-ish.
* Recall that you can transpose a .1D file on input by putting
  an escaped ' character after the filename.  For example,
    1dsvd fred.1D\'
  You can use this feature to get around the fact that there
  is no '-1Dright' option.  If you understand.
* For more information on the SVD, you can start at
  http://en.wikipedia.org/wiki/Singular_value_decomposition
* Author: Zhark the Algebraical (Linear).

++ Compile date = Mar 13 2009




AFNI program: 1dtranspose
Usage: 1dtranspose infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, but transposed.
You can use a column subvector selector list on infile, as in
  1dtranspose 'fred.1D[0,3,7]' ethel.1D

* This program may produce files with lines longer than a
   text editor can handle.
* If 'outfile' is '-' (or missing entirely), output goes to stdout.

++ Compile date = Mar 13 2009




AFNI program: 24swap
Usage: 24swap [options] file ...
Swaps byte pairs and/or quadruples on the files listed.
Options:
 -q            Operate quietly
 -pattern pat  'pat' determines the pattern of 2 and 4
                 byte swaps.  Each element is of the form
                 2xN or 4xN, where N is the number of
                 bytes to swap as pairs (for 2x) or
                 as quadruples (for 4x).  For 2x, N must
                 be divisible by 2; for 4x, N must be
                 divisible by 4.  The whole pattern is
                 made up of elements separated by colons,
                 as in '-pattern 4x39984:2x0'.  If bytes
                 are left over after the pattern is used
                 up, the pattern starts over.  However,
                 if a byte count N is zero, as in the
                 example below, then it means to continue
                 until the end of file.

 N.B.: You can also use 1xN as a pattern, indicating to
         skip N bytes without any swapping.
 N.B.: A default pattern can be stored in the Unix
         environment variable AFNI_24SWAP_PATTERN.
         If no -pattern option is given, the default
         will be used.  If there is no default, then
         nothing will be done.
 N.B.: If there are bytes 'left over' at the end of the file,
         they are written out unswapped.  This will happen
         if the file is an odd number of bytes long.
 N.B.: If you just want to swap pairs, see program 2swap.
         For quadruples only, see program 4swap.
 N.B.: This program will overwrite the input file!
         You might want to test it first.

 Example: 24swap -pat 4x8:2x0 fred
          If fred contains 'abcdabcdabcdabcdabcd' on input,
          then fred has    'dcbadcbabadcbadcbadc' on output.



AFNI program: 2dImReg
++ 2dImReg: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
This program performs 2d image registration.  Image alignment is      
performed on a slice-by-slice basis for the input 3d+time dataset,    
relative to a user specified base image.                              
                                                                      
Usage:                                                                
2dImReg                                                               
-input fname           Filename of input 3d+time dataset to process   
-basefile fname        Filename of 3d+time dataset for base image     
                         (default = current input dataset)            
-base num              Time index for base image  (0 <= num)          
                         (default:  num = 3)                          
-nofine                Deactivate fine fit phase of image registration
                         (default:  fine fit is active)               
-fine blur dxy dphi    Set fine fit parameters                        
   where:                                                             
     blur = FWHM of blurring prior to registration (in pixels)        
               (default:  blur = 1.0)                                 
     dxy  = Convergence tolerance for translations (in pixels)        
               (default:  dxy  = 0.07)                                
     dphi = Convergence tolerance for rotations (in degrees)          
               (default:  dphi = 0.21)                                
                                                                      
-prefix pname     Prefix name for output 3d+time dataset              
                                                                      
-dprefix dname    Write files 'dname'.dx, 'dname'.dy, 'dname'.psi     
                    containing the registration parameters for each   
                    slice in chronological order.                     
                    File formats:                                     
                      'dname'.dx:    time(sec)   dx(pixels)           
                      'dname'.dy:    time(sec)   dy(pixels)           
                      'dname'.psi:   time(sec)   psi(degrees)         
-dmm              Change dx and dy output format from pixels to mm    
                                                                      
-rprefix rname    Write files 'rname'.oldrms and 'rname'.newrms       
                    containing the volume RMS error for the original  
                    and the registered datasets, respectively.        
                    File formats:                                     
                      'rname'.oldrms:   volume(number)   rms_error    
                      'rname'.newrms:   volume(number)   rms_error    
                                                                      
-debug            Lots of additional output to screen                 
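
Example (a sketch; 'run1+orig' is a hypothetical 3d+time dataset):
  2dImReg -input run1+orig -base 3 -prefix run1_reg -dprefix run1_mot
This registers each slice to the image at time index 3, writes the
aligned dataset with prefix 'run1_reg', and saves the per-slice motion
parameters in run1_mot.dx, run1_mot.dy, and run1_mot.psi.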



AFNI program: 2swap
Usage: 2swap [-q] file ...
-- Swaps byte pairs on the files listed.
   The -q option means to work quietly.



AFNI program: 3dAFNIto3D
Usage: 3dAFNIto3D [options] dataset
Reads in an AFNI dataset, and writes it out as a 3D file.

OPTIONS:
 -prefix ppp  = Write result into file ppp.3D;
                  default prefix is same as AFNI dataset's.
 -bin         = Write data in binary format, not text.
 -txt         = Write data in text format, not binary.

NOTES:
* At present, all bricks are written out in float format.
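
Example (a sketch; 'anat+orig' is a hypothetical dataset):
  3dAFNIto3D -prefix anat -bin anat+orig
This writes the file anat.3D with the data stored in binary format.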

++ Compile date = Mar 13 2009




AFNI program: 3dAFNItoANALYZE
Usage: 3dAFNItoANALYZE [-4D] [-orient code] aname dset
Writes AFNI dataset 'dset' to 1 or more ANALYZE 7.5 format
.hdr/.img file pairs (one pair for each sub-brick in the
AFNI dataset).  The ANALYZE files will be named
  aname_0000.hdr aname_0000.img   for sub-brick #0
  aname_0001.hdr aname_0001.img   for sub-brick #1
and so forth.  Each file pair will contain a single 3D array.

* If the AFNI dataset does not include sub-brick scale
  factors, then the ANALYZE files will be written in the
  datum type of the AFNI dataset.
* If the AFNI dataset does have sub-brick scale factors,
  then each sub-brick will be scaled to floating format
  and the ANALYZE files will be written as floats.
* The .hdr and .img files are written in the native byte
  order of the computer on which this program is executed.

Options
-------
-4D [30 Sep 2002]:
 If you use this option, then all the data will be written to
 one big ANALYZE file pair named aname.hdr/aname.img, rather
 than a series of 3D files.  Even if you only have 1 sub-brick,
 you may prefer this option, since the filenames won't have
 the '_0000' appended to 'aname'.

-orient code [19 Mar 2003]:
 This option lets you flip the dataset to a different orientation
 when it is written to the ANALYZE files.  The orientation code is
 formed as follows:
   The code must be 3 letters, one each from the
   pairs {R,L} {A,P} {I,S}.  The first letter gives
   the orientation of the x-axis, the second the
   orientation of the y-axis, the third the z-axis:
      R = Right-to-Left          L = Left-to-Right
      A = Anterior-to-Posterior  P = Posterior-to-Anterior
      I = Inferior-to-Superior   S = Superior-to-Inferior
   For example, 'LPI' means
      -x = Left       +x = Right
      -y = Posterior  +y = Anterior
      -z = Inferior   +z = Superior
 * For display in SPM, 'LPI' or 'RPI' seem to work OK.
    Be careful with this: you don't want to confuse L and R
    in the SPM display!
 * If you DON'T use this option, the dataset will be written
    out in the orientation in which it is stored in AFNI
    (e.g., the output of '3dinfo dset' will tell you this.)
 * The dataset orientation is NOT stored in the .hdr file.
 * AFNI and ANALYZE data are stored in files with the x-axis
    varying most rapidly and the z-axis most slowly.
 * Note that if you read an ANALYZE dataset into AFNI for
    display, AFNI assumes the LPI orientation, unless you
    set environment variable AFNI_ANALYZE_ORIENT.
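
Example (a sketch; 'func+orig' is a hypothetical multi-brick dataset):
  3dAFNItoANALYZE -4D -orient LPI func_an func+orig
This writes a single pair func_an.hdr/func_an.img containing all
sub-bricks, flipped to LPI order (e.g., for display in SPM).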

++ Compile date = Mar 13 2009




AFNI program: 3dAFNItoMINC
Usage: 3dAFNItoMINC [options] dataset
Reads in an AFNI dataset, and writes it out as a MINC file.

OPTIONS:
 -prefix ppp  = Write result into file ppp.mnc;
                  default prefix is same as AFNI dataset's.
 -floatize    = Write MINC file in float format.
 -swap        = Swap bytes when passing data to rawtominc

NOTES:
* Multi-brick datasets are written as 4D (x,y,z,t) MINC
   files.
* If the dataset has complex-valued sub-bricks, then this
   program won't write the MINC file.
* If any of the sub-bricks have floating point scale
   factors attached, then the output will be in float
   format (regardless of the presence of -floatize).
* This program uses the MNI program 'rawtominc' to create
   the MINC file; rawtominc must be in your path.  If you
   don't have rawtominc, you must install the MINC tools
   software package from MNI.  (But if you don't have the
   MINC tools already, why do you want to convert to MINC
   format anyway?)
* At this time, you can find the MINC tools at
     ftp://ftp.bic.mni.mcgill.ca/pub/minc/
   You need the latest version of minc-*.tar.gz and also
   of netcdf-*.tar.gz.
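
Example (a sketch; 'anat+orig' is a hypothetical dataset, and rawtominc
must be in your path):
  3dAFNItoMINC -prefix anat anat+orig
This writes the file anat.mnc.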

-- RWCox - April 2002

++ Compile date = Mar 13 2009




AFNI program: 3dAFNItoNIFTI
++ 3dAFNItoNIFTI: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
Usage: 3dAFNItoNIFTI [options] dataset
Reads an AFNI dataset, writes it out as a NIfTI-1.1 file.

NOTES:
* The nifti_tool program can be used to manipulate
   the contents of a NIfTI-1.1 file.
* The input dataset can actually be in any input format
   that AFNI can read directly (e.g., MINC-1).
* There is no 3dNIFTItoAFNI program, since AFNI programs
   can directly read .nii files.  If you wish to make such
   a conversion anyway, one way to do so is like so:
     3dcalc -a ppp.nii -prefix ppp -expr 'a'

OPTIONS:
  -prefix ppp = Write the NIfTI-1.1 file as 'ppp.nii'.
                  Default: the dataset's prefix is used.
                * You can use 'ppp.hdr' to output a 2-file
                  NIfTI-1.1 file pair 'ppp.hdr' & 'ppp.img'.
                * If you want a compressed file, try
                  using a prefix like 'ppp.nii.gz'.
                * Setting the Unix environment variable
                  AFNI_AUTOGZIP to YES will result in
                  all output .nii files being gzip-ed.
  -verb       = Be verbose = print progress messages.
                  Repeating this increases the verbosity
                  (maximum setting is 3 '-verb' options).
  -float      = Force the output dataset to be 32-bit
                  floats.  This option should be used when
                  the input AFNI dataset has different
                  float scale factors for different sub-bricks,
                  an option that NIfTI-1.1 does not support.

The following options affect the contents of the AFNI extension
field that is written by default into the NIfTI-1.1 header:

  -pure       = Do NOT write an AFNI extension field into
                  the output file.  Only use this option if
                  needed.  You can also use the 'nifti_tool'
                  program to strip extensions from a file.
  -denote     = When writing the AFNI extension field, remove
                  text notes that might contain subject
                  identifying information.
  -oldid      = Give the new dataset the input dataset's
                  AFNI ID code.
  -newid      = Give the new dataset a new AFNI ID code, to
                  distinguish it from the input dataset.
     **** N.B.:  -newid is now the default action.
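
Example (a sketch; 'anat+orig' is a hypothetical dataset):
  3dAFNItoNIFTI -prefix anat.nii.gz -denote anat+orig
This writes a gzip-compressed NIfTI-1.1 file, with subject-identifying
notes removed from the AFNI extension.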

++ Compile date = Mar 13 2009




AFNI program: 3dAFNItoNIML
Usage: 3dAFNItoNIML [options] dset
 Dumps AFNI dataset header information to stdout in NIML format.
 Mostly for debugging and testing purposes!

 OPTIONS:
  -data          == Also put the data into the output (will be huge).
  -tcp:host:port == Instead of stdout, send the dataset to a socket.
                    (implies '-data' as well)
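
 Example (a sketch; 'dset+orig' is a hypothetical dataset):
   3dAFNItoNIML -data dset+orig > dset.niml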

-- RWCox - Mar 2005

++ Compile date = Mar 13 2009




AFNI program: 3dAFNItoRaw
Usage: 3dAFNItoRaw [options] dataset
Convert an AFNI brik file with multiple sub-briks to a raw file with
  each sub-brik voxel concatenated voxel-wise.
For example, a dataset with 3 sub-briks X,Y,Z with elements x1,x2,x3,...,xn,
  y1,y2,y3,...,yn and z1,z2,z3,...,zn will be converted to a raw dataset with
  elements x1,y1,z1, x2,y2,z2, x3,y3,z3, ..., xn,yn,zn 
The dataset is kept in the original data format (float/short/int).
Options:
  -output / -prefix = name of the output file (not an AFNI dataset prefix);
    the default output name will be rawxyz.dat

  -datum float = force floating point output. Floating point is forced
    if any sub-brick scale factors are not equal to 1.
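
Example (a sketch; 'xyz+orig' is a hypothetical multi-sub-brick dataset):
  3dAFNItoRaw -output xyz.dat -datum float xyz+orig
This writes the voxel-interleaved values to xyz.dat as floats.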


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dANALYZEtoAFNI
** DON'T USE THIS PROGRAM!  REALLY!
USE 3dcopy OR to3d INSTEAD.

IF YOU CHOOSE TO USE IT ANYWAY, PERHAPS
BECAUSE IT WORKS BETTER ON YOUR 12th
CENTURY PLANTAGENET ANALYZE FILES,
ADD THE OPTION -OK TO YOUR COMMAND
LINE.

Usage: 3dANALYZEtoAFNI [options] file1.hdr file2.hdr ...
This program constructs a 'volumes' stored AFNI dataset
from the ANALYZE-75 files file1.img file2.img ....
In this type of dataset, there is only a .HEAD file; the
.BRIK file is replaced by the collection of .img files.
- Other AFNI programs can read (but not write) this type
  of dataset.
- The advantage of using this type of dataset vs. one created
   with to3d is that you don't have to duplicate the image data
   into a .BRIK file, thus saving disk space.
- The disadvantage of using 'volumes' for a multi-brick dataset
   is that all the .img files must be kept with the .HEAD file
   if you move the dataset around.
- The .img files must be in the same directory as the .HEAD file.
- Note that you put the .hdr files on the command line, but it is
   the .img files that will be named in the .HEAD file.
- After this program is run, you must keep the .img files with
   the output .HEAD file.  AFNI doesn't need the .hdr files, but
   other programs (e.g., FSL, SPM) will want them as well.

Options:
 -prefix ppp   = Save the dataset with the prefix name 'ppp'.
                  [default='a2a']
 -view vvv     = Save the dataset in the 'vvv' view, where
                  'vvv' is one of 'orig', 'acpc', or 'tlrc'.
                  [default='orig']

 -TR ttt       = For multi-volume datasets, create it as a
                  3D+time dataset with TR set to 'ttt'.
 -fbuc         = For multi-volume datasets, create it as a
                  functional bucket dataset.
 -abuc         = For multi-volume datasets, create it as an
                  anatomical bucket dataset.
   ** If more than one ANALYZE file is input, and none of the
       above options is given, the default is as if '-TR 1s'
       was used.
   ** For single volume datasets (1 ANALYZE file input), the
       default is '-abuc'.

 -geomparent g = Use the .HEAD file from dataset 'g' to set
                  the geometry of this dataset.
   ** If you don't use -geomparent, then the following options
       can be used to specify the geometry of this dataset:
 -orient code  = Tells the orientation of the 3D volumes.  The code
                  must be 3 letters, one each from the pairs {R,L}
                  {A,P} {I,S}.  The first letter gives the orientation
                  of the x-axis, the second the orientation of the
                  y-axis, the third the z-axis:
                   R = right-to-left         L = left-to-right
                   A = anterior-to-posterior P = posterior-to-anterior
                   I = inferior-to-superior  S = superior-to-inferior
 -zorigin dz   = Puts the center of the 1st slice at the
                  given distance ('dz' in mm).  This distance
                  is in the direction given by the corresponding
                  letter in the -orient code.  For example,
                    -orient RAI -zorigin 30
                  would set the center of the first slice at
                  30 mm Inferior.
   ** If the above options are NOT used to specify the geometry
       of the dataset, then the default is '-orient RAI', and the
       z origin is set to center the slices about z=0.

 It is likely that you will want to patch up the .HEAD file using
 program 3drefit.

 -- RWCox - June 2002.


-- KRH - April 2005.




AFNI program: 3dANOVA
++ 3dANOVA: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs single factor Analysis of Variance (ANOVA)
on 3D datasets

---------------------------------------------------------------

Usage:
-----

3dANOVA
   -levels r                   : r = number of factor levels

   -dset 1 filename            : data set for factor level 1
          . . .
   -dset 1 filename            : data set for factor level 1
          . . .
   -dset r filename            : data set for factor level r
          . . .
   -dset r filename            : data set for factor level r

  [-voxel num]                 : screen output for voxel # num

  [-diskspace]                 : print out disk space required for
                                 program execution

  [-mask mset]                 : use sub-brick #0 of dataset 'mset'
                                 to define which voxels to process

  [-debug level]               : request extra output

The following commands generate individual AFNI 2-sub-brick datasets:
  (In each case, output is written to the file with the specified
   prefix file name.)

  [-ftr prefix]                : F-statistic for treatment effect

  [-mean i prefix]             : estimate of factor level i mean

  [-diff i j prefix]           : difference between factor levels

  [-contr c1...cr prefix]      : contrast in factor levels

Modified ANOVA computation options:    (December, 2005)

     ** For details, see http://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html

[-old_method]       request to perform ANOVA using the previous
                    functionality (requires -OK, also)

[-OK]               confirm you understand that contrasts that
                    do not sum to zero have inflated t-stats, and
                    contrasts that do sum to zero assume sphericity
                    (to be used with -old_method)

[-assume_sph]       assume sphericity (zero-sum contrasts, only)

                    This allows use of the old_method for
                    computing contrasts which sum to zero (this
                    includes diffs, for instance).  Any contrast
                    that does not sum to zero is invalid, and
                    cannot be used with this option (such as
                    ameans).

The following command generates one AFNI 'bucket' type dataset:

  [-bucket prefix]             : create one AFNI 'bucket' dataset whose
                                 sub-bricks are obtained by
                                 concatenating the above output files;
                                 the output 'bucket' is written to file
                                 with prefix file name

N.B.: For this program, the user must specify 1 and only 1 sub-brick
      with each -dset command. That is, if an input dataset contains
      more than 1 sub-brick, a sub-brick selector must be used,
      e.g., -dset 2 'fred+orig[3]'

Example of 3dANOVA:
------------------

 Example is based on a study with one factor (independent variable)
 called 'Pictures', with 3 levels:
        (1) Faces, (2) Houses, and (3) Donuts

 The ANOVA is being conducted on the data of subjects Fred and Ethel:

 3dANOVA -levels 3                     \
         -dset 1 fred_Faces+tlrc       \
         -dset 1 ethel_Faces+tlrc      \
                                       \
         -dset 2 fred_Houses+tlrc      \
         -dset 2 ethel_Houses+tlrc     \
                                       \
         -dset 3 fred_Donuts+tlrc      \
         -dset 3 ethel_Donuts+tlrc     \
                                       \
         -ftr Pictures                 \
         -mean 1 Faces                 \
         -mean 2 Houses                \
         -mean 3 Donuts                \
         -diff 1 2 FvsH                \
         -diff 2 3 HvsD                \
         -diff 1 3 FvsD                \
         -contr  1  1 -1 FHvsD         \
         -contr -1  1  1 FvsHD         \
         -contr  1 -1  1 FDvsH         \
         -bucket fred_n_ethel_ANOVA

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
---------------------------------------------------
Also see HowTo#5 - Group Analysis on the AFNI website:
http://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml

-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers.  This truncation might cause significant errors.
If you receive warnings that look like this:
  *+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program.  For convenience, you can do this
on the command line, as in
  3dANOVA -DAFNI_FLOATIZE=YES ... other options ... 
Also see the following links:
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html

++ Compile date = Mar 13 2009




AFNI program: 3dANOVA2
++ 3dANOVA2: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs a two-factor Analysis of Variance (ANOVA)
on 3D datasets

-----------------------------------------------------------

Usage:

   3dANOVA2
      -type k              : type of ANOVA model to be used:
                              k=1  fixed effects model  (A and B fixed)    
                              k=2  random effects model (A and B random)   
                              k=3  mixed effects model  (A fixed, B random)

      -alevels a           : a = number of levels of factor A

      -blevels b           : b = number of levels of factor B

      -dset 1 1 filename   : data set for level 1 of factor A
                                      and level 1 of factor B
            . . .                           . . .
      -dset i j filename   : data set for level i of factor A
                                      and level j of factor B
            . . .                           . . .
      -dset a b filename   : data set for level a of factor A
                                      and level b of factor B

     [-voxel num]          : screen output for voxel # num

     [-diskspace]          : print out disk space required for
                             program execution

     [-mask mset]          : use sub-brick #0 of dataset 'mset'
                             to define which voxels to process


   The following commands generate individual AFNI 2-sub-brick datasets:
  (In each case, output is written to the file with the specified
   prefix file name.)

     [-ftr prefix]         : F-statistic for treatment effect

     [-fa prefix]          : F-statistic for factor A effect

     [-fb prefix]          : F-statistic for factor B effect

     [-fab prefix]         : F-statistic for interaction

     [-amean i prefix]     : estimate mean of factor A level i

     [-bmean j prefix]     : estimate mean of factor B level j

     [-xmean i j prefix]   : estimate mean of cell at level i of factor A,
                                                      level j of factor B

     [-adiff i j prefix]   : difference between levels i and j of factor A

     [-bdiff i j prefix]   : difference between levels i and j of factor B

     [-xdiff i j k l prefix]     : difference between cell mean at A=i,B=j
                                                  and cell mean at A=k,B=l

     [-acontr c1 ... ca prefix]  : contrast in factor A levels

     [-bcontr c1 ... cb prefix]  : contrast in factor B levels

     [-xcontr c11 ... c1b c21 ... c2b  ...  ca1 ... cab  prefix]
                                 : contrast in cell means


The following command generates one AFNI 'bucket' type dataset:

     [-bucket prefix]      : create one AFNI 'bucket' dataset whose
                             sub-bricks are obtained by concatenating
                             the above output files; the output 'bucket'
                             is written to file with prefix file name

Modified ANOVA computation options:    (December, 2005)

     ** These options apply to model type 3, only.
        For details, see http://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html

     [-old_method]        : request to perform ANOVA using the previous
                            functionality (requires -OK, also)

     [-OK]                : confirm you understand that contrasts that
                            do not sum to zero have inflated t-stats, and
                            contrasts that do sum to zero assume sphericity
                            (to be used with -old_method)

     [-assume_sph]        : assume sphericity (zero-sum contrasts, only)

                            This allows use of the old_method for
                            computing contrasts which sum to zero (this
                            includes diffs, for instance).  Any contrast
                            that does not sum to zero is invalid, and
                            cannot be used with this option (such as
                            ameans).
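
     As an illustrative sketch only (the -dset options are elided;
     see the full example below for a complete command), the old
     method applied to a zero-sum contrast might be requested as:

      3dANOVA2 -type 3 -alevels 3 -blevels 4   \
               ... -dset options ...           \
               -assume_sph                     \
               -adiff 1 2 CvsG                 \
               -bucket sphericity_results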

----------------------------------------------------------

 Example of 3dANOVA2:

      Example is based on a study with a 3 x 4 mixed factorial design:

              Factor 1 - DONUTS has 3 levels:
                         (1) chocolate, (2) glazed, (3) sugar

              Factor 2 - SUBJECTS, of which there are 4 in this analysis:
                         (1) fred, (2) ethel, (3) lucy, (4) ricky

 3dANOVA2 -type 3 -alevels 3 -blevels 4   \
          -dset 1 1 fred_choc+tlrc        \
          -dset 2 1 fred_glaz+tlrc        \
          -dset 3 1 fred_sugr+tlrc        \
          -dset 1 2 ethel_choc+tlrc       \
          -dset 2 2 ethel_glaz+tlrc       \
          -dset 3 2 ethel_sugr+tlrc       \
          -dset 1 3 lucy_choc+tlrc        \
          -dset 2 3 lucy_glaz+tlrc        \
          -dset 3 3 lucy_sugr+tlrc        \
          -dset 1 4 ricky_choc+tlrc       \
          -dset 2 4 ricky_glaz+tlrc       \
          -dset 3 4 ricky_sugr+tlrc       \
          -amean 1 Chocolate              \
          -amean 2 Glazed                 \
          -amean 3 Sugar                  \
          -adiff 1 2 CvsG                 \
          -adiff 2 3 GvsS                 \
          -adiff 1 3 CvsS                 \
          -acontr 1 1 -2 CGvsS            \
          -acontr -2 1 1 CvsGS            \
          -acontr 1 -2 1 CSvsG            \
          -fa Donuts                      \
          -bucket ANOVA_results

 The -bucket option will place all of the 3dANOVA2 results (i.e., main
 effect of DONUTS, means for each of the 3 levels of DONUTS, and
 contrasts between the 3 levels of DONUTS) into one big dataset with
 multiple sub-bricks called ANOVA_results+tlrc.

-----------------------------------------------------------


N.B.: For this program, the user must specify 1 and only 1 sub-brick
      with each -dset command. That is, if an input dataset contains
      more than 1 sub-brick, a sub-brick selector must be used, e.g.:
      -dset 2 4 'fred+orig[3]'

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

Also see HowTo #5: Group Analysis on the AFNI website:
 http://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml

-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers.  This truncation might cause significant errors.
If you receive warnings that look like this:
  *+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program.  For convenience, you can do this
on the command line, as in
  3dANOVA -DAFNI_FLOATIZE=YES ... other options ... 
Also see the following links:
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html

++ Compile date = Mar 13 2009




AFNI program: 3dANOVA3
This program performs three-factor ANOVA on 3D data sets.           

Usage: 
3dANOVA3 
-type  k          type of ANOVA model to be used:
                         k = 1   A,B,C fixed;          AxBxC
                         k = 2   A,B,C random;         AxBxC
                         k = 3   A fixed; B,C random;  AxBxC
                         k = 4   A,B fixed; C random;  AxBxC
                         k = 5   A,B fixed; C random;  AxB,BxC,C(A)

-alevels a                     a = number of levels of factor A
-blevels b                     b = number of levels of factor B
-clevels c                     c = number of levels of factor C
-dset 1 1 1 filename           data set for level 1 of factor A
                                        and level 1 of factor B
                                        and level 1 of factor C
 . . .                           . . .

-dset i j k filename           data set for level i of factor A
                                        and level j of factor B
                                        and level k of factor C
 . . .                           . . .

-dset a b c filename           data set for level a of factor A
                                        and level b of factor B
                                        and level c of factor C

[-voxel num]                   screen output for voxel # num
[-diskspace]                   print out disk space required for
                                  program execution

[-mask mset]                   use sub-brick #0 of dataset 'mset'
                               to define which voxels to process


The following commands generate individual AFNI 2 sub-brick datasets:
  (In each case, output is written to the file with the specified
   prefix file name.)

[-fa prefix]                F-statistic for factor A effect
[-fb prefix]                F-statistic for factor B effect
[-fc prefix]                F-statistic for factor C effect
[-fab prefix]               F-statistic for A*B interaction
[-fac prefix]               F-statistic for A*C interaction
[-fbc prefix]               F-statistic for B*C interaction
[-fabc prefix]              F-statistic for A*B*C interaction

[-amean i prefix]           estimate of factor A level i mean
[-bmean i prefix]           estimate of factor B level i mean
[-cmean i prefix]           estimate of factor C level i mean
[-xmean i j k prefix]       estimate mean of cell at factor A level i,
                               factor B level j, factor C level k

[-adiff i j prefix]         difference between factor A levels i and j
                               (with factors B and C collapsed)
[-bdiff i j prefix]         difference between factor B levels i and j
                               (with factors A and C collapsed)
[-cdiff i j prefix]         difference between factor C levels i and j
                               (with factors A and B collapsed)
[-xdiff i j k l m n prefix] difference between cell mean at A=i,B=j,
                               C=k, and cell mean at A=l,B=m,C=n

[-acontr c1...ca prefix]    contrast in factor A levels
                               (with factors B and C collapsed)
[-bcontr c1...cb prefix]    contrast in factor B levels
                               (with factors A and C collapsed)
[-ccontr c1...cc prefix]    contrast in factor C levels
                               (with factors A and B collapsed)

[-aBcontr c1 ... ca : j prefix]   2nd order contrast in A, at fixed
                                     B level j (collapsed across C)
[-Abcontr i : c1 ... cb prefix]   2nd order contrast in B, at fixed
                                     A level i (collapsed across C)

[-aBdiff i_1 i_2 : j prefix] difference between levels i_1 and i_2 of
                               factor A, with factor B fixed at level j

[-Abdiff i : j_1 j_2 prefix] difference between levels j_1 and j_2 of
                               factor B, with factor A fixed at level i

[-abmean i j prefix]         mean effect at factor A level i and
                               factor B level j

The following command generates one AFNI 'bucket' type dataset:

[-bucket prefix]         create one AFNI 'bucket' dataset whose
                           sub-bricks are obtained by concatenating
                           the above output files; the output 'bucket'
                           is written to file with prefix file name

Modified ANOVA computation options:    (December, 2005)

     ** These options apply to model types 4 and 5, only.
        For details, see http://afni.nimh.nih.gov/sscc/gangc/ANOVA_Mod.html

[-old_method]       request to perform ANOVA using the previous
                    functionality (requires -OK, also)

[-OK]               confirm you understand that contrasts that
                    do not sum to zero have inflated t-stats, and
                    contrasts that do sum to zero assume sphericity
                    (to be used with -old_method)

[-assume_sph]       assume sphericity (zero-sum contrasts, only)

                    This allows use of the old_method for
                    computing contrasts which sum to zero (this
                    includes diffs, for instance).  Any contrast
                    that does not sum to zero is invalid, and
                    cannot be used with this option (such as
                    ameans).

-----------------------------------------------------------------
example: "classic" houses/faces/donuts for 4 subjects (2 genders)
         (level sets are gender (M/W), image (H/F/D), and subject)

    Note: factor C is really subject within gender (since it is
          nested).  There are 4 subjects in this example, and 2
          subjects per gender.  So clevels is 2.

    3dANOVA3 -type 5                            \
        -alevels 2                              \
        -blevels 3                              \
        -clevels 2                              \
        -dset 1 1 1 man1_houses+tlrc            \
        -dset 1 2 1 man1_faces+tlrc             \
        -dset 1 3 1 man1_donuts+tlrc            \
        -dset 1 1 2 man2_houses+tlrc            \
        -dset 1 2 2 man2_faces+tlrc             \
        -dset 1 3 2 man2_donuts+tlrc            \
        -dset 2 1 1 woman1_houses+tlrc          \
        -dset 2 2 1 woman1_faces+tlrc           \
        -dset 2 3 1 woman1_donuts+tlrc          \
        -dset 2 1 2 woman2_houses+tlrc          \
        -dset 2 2 2 woman2_faces+tlrc           \
        -dset 2 3 2 woman2_donuts+tlrc          \
        -adiff   1 2           MvsW             \
        -bdiff   2 3           FvsD             \
        -bcontr -0.5 1 -0.5    FvsHD            \
        -aBcontr 1 -1 : 1      MHvsWH           \
        -aBdiff  1  2 : 1      same_as_MHvsWH   \
        -Abcontr 2 : 0 1 -1    WFvsWD           \
        -Abdiff  2 : 2 3       same_as_WFvsWD   \
        -Abcontr 2 : 1 7 -4.2  goofy_example    \
        -bucket donut_anova


N.B.: For this program, the user must specify 1 and only 1 sub-brick
      with each -dset command. That is, if an input dataset contains
      more than 1 sub-brick, a sub-brick selector must be used, e.g.:
      -dset 2 4 5 'fred+orig[3]'

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
-------------------------------------------------------------------------
STORAGE FORMAT:
---------------
The default output format is to store the results as scaled short
(16-bit) integers.  This truncation might cause significant errors.
If you receive warnings that look like this:
  *+ WARNING: TvsF[0] scale to shorts misfit = 8.09% -- *** Beware
then you can force the results to be saved in float format by
defining the environment variable AFNI_FLOATIZE to be YES
before running the program.  For convenience, you can do this
on the command line, as in
  3dANOVA -DAFNI_FLOATIZE=YES ... other options ... 
Also see the following links:
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/common_options.html
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/README.environment.html

++ Compile date = Mar 13 2009




AFNI program: 3dAcost

  *** Program 3dAcost is no longer available.
  *** Use '3dAllineate -allcostX' instead;
  *** See the output of '3dAllineate -HELP' for more information.




AFNI program: 3dAllineate
Usage: 3dAllineate [options] sourcedataset

Program to align one dataset (the 'source') to a base dataset.
Options are available to control:
 ++ How the matching between the source and the base is computed
    (i.e., the 'cost functional' measuring image mismatch).
 ++ How the resliced source is interpolated to the base space.
 ++ The complexity of the spatial transformation ('warp') used.
 ++ And many technical options to control the process in detail,
    if you know what you are doing (or just like to play around).

====
NOTE: If you want to align EPI volumes to a T1-weighted structural
====  volume, the script align_epi_anat.py is recommended.  It will
      use 3dAllineate in the recommended way for this type of problem.
 -->> This script can also be used for other alignment purposes, such
      as T1-weighted alignment between field strengths using the
      '-lpa' cost functional.  Investigate align_epi_anat.py
      to see if it will do what you need -- you might make your life
      a little easier and nicer.

OPTIONS:
=======
 -base bbb   = Set the base dataset to be the #0 sub-brick of 'bbb'.
               If no -base option is given, then the base volume is
               taken to be the #0 sub-brick of the source dataset.
               (Base must be stored as floats, shorts, or bytes.)

 -source ttt = Read the source dataset from 'ttt'.  If no -source
   *OR*        (or -input) option is given, then the source dataset
 -input ttt    is the last argument on the command line.
               (Source must be stored as floats, shorts, or bytes.)

  * NOTA BENE: The base and source dataset do NOT have to be defined *
  *            on the same 3D grids; the alignment process uses the  *
  *            coordinate systems defined in the dataset headers to  *
  *            make the match between spatial locations.             *
  *       -->> However, this coordinate-based matching requires that *
  *            image volumes be defined on roughly the same patch    *
  *            of (x,y,z) space, in order to find a decent starting  *
  *            point for the transformation.  You might need to use  *
  *            the script @Align_Centers to do this, if the 3D       *
  *            spaces occupied by the images do not overlap much.    *
  *       -->> Or the '-cmass' option to this program might be       *
  *            sufficient to solve this problem, maybe.              *

 -prefix ppp = Output the resulting dataset to file 'ppp'.  If this
   *OR*        option is NOT given, no dataset will be output!  The
 -out ppp      transformation matrix to align the source to the base will
               be estimated, but not applied.  You can save the matrix
               for later use using the '-1Dmatrix_save' option.
        *N.B.: By default, the new dataset is computed on the grid of the
                base dataset; see the '-master' and/or the '-mast_dxyz'
                options to change this grid.
        *N.B.: If 'ppp' is 'NULL', then no output dataset will be produced.
                This option is for compatibility with 3dvolreg.

 -floatize   = Write result dataset as floats.  Internal calculations
 -float        are all done on float copies of the input datasets.
               [Default=convert output dataset to data format of  ]
               [        source dataset; if the source dataset was ]
               [        shorts with a scale factor, then the new  ]
               [        dataset will get a scale factor as well;  ]
               [        if the source dataset was shorts with no  ]
               [        scale factor, the result will be unscaled.]

 -1Dparam_save ff   = Save the warp parameters in ASCII (.1D) format into
                      file 'ff' (1 row per sub-brick in source).
               *N.B.: A historical synonym for this option is '-1Dfile'.

 -1Dparam_apply aa  = Read warp parameters from file 'aa', apply them to 
                      the source dataset, and produce a new dataset.
                      (Must also use the '-prefix' option for this to work!  )
                      (In this mode of operation, there is no optimization of)
                      (the cost functional by changing the warp parameters;  )
                      (previously computed parameters are applied directly.  )
               *N.B.: A historical synonym for this is '-1Dapply'.
               *N.B.: If you use -1Dparam_apply, you may also want to use
                       -master to control the grid on which the new
                       dataset is written -- the base dataset from the
                       original 3dAllineate run would be a good possibility.
                       Otherwise, the new dataset will be written out on the
                       3D grid coverage of the source dataset, and this
                       might result in clipping off part of the image.
               *N.B.: Each row in the 'aa' file contains the parameters for
                       transforming one sub-brick in the source dataset.
                       If there are more sub-bricks in the source dataset
                       than there are rows in the 'aa' file, then the last
                       row is used repeatedly.
               *N.B.: A trick to use 3dAllineate to resample a dataset to
                       a finer grid spacing:
                         3dAllineate -input dataset+orig         \
                                     -master template+orig       \
                                     -prefix newdataset          \
                                     -final quintic              \
                                     -1Dparam_apply '1D: 12@0'\'  
                       Here, the identity transformation is specified
                       by giving all 12 affine parameters as 0 (note
                       the extra \' at the end of the '1D: 12@0' input!).

 -1Dmatrix_save ff  = Save the transformation matrix for each sub-brick into
                      file 'ff' (1 row per sub-brick in the source dataset).
                      If 'ff' does NOT end in '.1D', then the program will
                      append '.aff12.1D' to 'ff' to make the output filename.
               *N.B.: This matrix is the coordinate transformation from base
                       to source DICOM coordinates. In other terms:
                          Xin = Xsource = M Xout = M Xbase
                                   or
                          Xout = Xbase = inv(M) Xin = inv(M) Xsource
                       where Xin or Xsource is the 4x1 coordinates of a
                       location in the input volume. Xout is the 
                       coordinate of that same location in the output volume.
                       Xbase is the coordinate of the corresponding location
                        in the base dataset. M is the matrix from 'ff'
                        augmented by a 4th row of [0 0 0 1]; each X is an
                        augmented column vector [x,y,z,1]'.
                        To get the inverse matrix inv(M)
                       (source to base), use the cat_matvec program, as in
                         cat_matvec fred.aff12.1D -I

 -1Dmatrix_apply aa = Use the matrices in file 'aa' to define the spatial
                      transformations to be applied.  Also see program
                      cat_matvec for ways to manipulate these matrix files.
               *N.B.: You probably want to use either -base or -master
                      with either *_apply option, so that the coordinate
                      system that the matrix refers to is correctly loaded.
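
        For instance, a minimal save-then-apply sketch (hypothetical
        file names, using only the options described above):

          # Run 1: estimate the alignment and save the matrix;
          # with no -prefix, no aligned dataset is written.
          3dAllineate -base anat+orig -source epi+orig \
                      -1Dmatrix_save epi_to_anat

          # Run 2: apply the saved matrix (note the '.aff12.1D'
          # suffix appended by -1Dmatrix_save) to make the output.
          3dAllineate -1Dmatrix_apply epi_to_anat.aff12.1D \
                      -source epi+orig -master anat+orig   \
                      -prefix epi_aligned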

  * The -1Dmatrix_* options can be used to save and re-use the transformation *
  * matrices.  In combination with the program cat_matvec, which can multiply *
  * saved transformation matrices, you can also adjust these matrices to      *
  * other alignments.                                                         *

  * The script 'align_epi_anat.py' uses 3dAllineate and 3dvolreg to align EPI *
  * datasets to T1-weighted anatomical datasets, using saved matrices between *
  * the two programs.  This script is our currently recommended method for    *
  * doing such intra-subject alignments.                                      *

 -cost ccc   = Defines the 'cost' function that defines the matching
               between the source and the base; 'ccc' is one of
                ls   *OR*  leastsq         = Least Squares [Pearson Correlation]
                mi   *OR*  mutualinfo      = Mutual Information [H(b)+H(s)-H(b,s)]
                crM  *OR*  corratio_mul    = Correlation Ratio (Symmetrized*)
                nmi  *OR*  norm_mutualinfo = Normalized MI [H(b,s)/(H(b)+H(s))]
                hel  *OR*  hellinger       = Hellinger metric
                crA  *OR*  corratio_add    = Correlation Ratio (Symmetrized+)
                crU  *OR*  corratio_uns    = Correlation Ratio (Unsym)
               You can also specify the cost functional using an option
               of the form '-mi' rather than '-cost mi', if you like
               to keep things terse and cryptic (as I do).
               [Default == '-hel' (for no good reason).]

 -interp iii = Defines interpolation method to use during matching
               process, where 'iii' is one of
                  NN      *OR* nearestneighbour *OR* nearestneighbor
                 linear  *OR* trilinear
                 cubic   *OR* tricubic
                 quintic *OR* triquintic
               Using '-NN' instead of '-interp NN' is allowed (e.g.).
               Note that using cubic or quintic interpolation during
               the matching process will slow the program down a lot.
               Use '-final' to affect the interpolation method used
               to produce the output dataset, once the final registration
               parameters are determined.  [Default method == 'linear'.]
            ** N.B.: Linear interpolation is used during the coarse
                     alignment pass; the selection here only affects
                     the interpolation method used during the second
                     (fine) alignment pass.

 -final iii  = Defines the interpolation mode used to create the
               output dataset.  [Default == 'cubic']
            ** N.B.: For '-final' ONLY, you can use 'wsinc5' to specify
                       that the final interpolation be done using a
                       weighted sinc interpolation method.  This method
                       is so SLOW that you aren't allowed to use it for
                       the registration itself.
                  ++ wsinc5 interpolation is highly accurate and should
                       reduce the smoothing artifacts from lower
                       order interpolation methods (which are most
                       visible if you interpolate an EPI time series
                       to high resolution and then make an image of
                       the voxel-wise variance).
                  ++ On my Intel-based Mac, it takes about 2.5 s to do
                       wsinc5 interpolation, per 1 million voxels output.
                       For comparison, quintic interpolation takes about
                       0.3 s per 1 million voxels: 8 times faster than wsinc5.
                  ++ The '5' refers to the width of the sinc interpolation
                       weights: plus/minus 5 grid points in each direction
                       (this is a tensor product interpolation, for speed).
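
        For example, a sketch (hypothetical dataset names) that
        registers with the default settings but writes the output
        with the high-accuracy interpolant might be:

          3dAllineate -base anat+orig -source epi+orig \
                      -final wsinc5 -prefix epi_wsinc5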

TECHNICAL OPTIONS (used for fine control of the program):
=================
 -nmatch nnn = Use at most 'nnn' scattered points to match the
               datasets.  The smaller nnn is, the faster the matching
               algorithm will run; however, accuracy may be bad if
               nnn is too small.  If you end the 'nnn' value with the
               '%' character, then that percentage of the base's
               voxels will be used.
               [Default == 47% of voxels in the weight mask]

 -nopad      = Do not use zero-padding on the base image.
               [Default == zero-pad, if needed; -verb shows how much]

 -conv mmm   = Convergence test is set to 'mmm' millimeters.
               This doesn't mean that the results will be accurate
               to 'mmm' millimeters!  It just means that the program
               stops trying to improve the alignment when the optimizer
               (NEWUOA) reports it has narrowed the search radius
               down to this level.  [Default == 0.05 mm]

 -verb       = Print out verbose progress reports.
               [Using '-VERB' will give even more prolix reports.]
 -quiet      = Don't print out verbose stuff.
 -usetemp    = Write intermediate stuff to disk, to economize on RAM.
               Using this will slow the program down, but may make it
               possible to register datasets that need lots of space.
       **N.B.: Temporary files are written to the directory given
               in environment variable TMPDIR, or in /tmp, or in ./
               (preference in that order).  If the program crashes,
               these files are named TIM_somethingrandom, and you
               may have to delete them manually. (TIM=Temporary IMage)
       **N.B.: If the program fails with a 'malloc failure' type of
               message, then try '-usetemp' (malloc=memory allocator).
       **N.B.: If you use '-verb', then memory usage is printed out
               at various points along the way.
 -nousetemp  = Don't use temporary workspace on disk [the default].

 -check kkk  = After cost functional optimization is done, start at the
               final parameters and RE-optimize using the new cost
               function 'kkk'.  If the results are too different, a
               warning message will be printed.  However, the final
               parameters from the original optimization will be
               used to create the output dataset. Using '-check'
               increases the CPU time, but can help you feel sure
               that the alignment process did not go wild and crazy.
               [Default == no check == don't worry, be happy!]
       **N.B.: You can put more than one function after '-check', as in
                 -nmi -check mi hel crU crM
               to register with Normalized Mutual Information, and
               then check the results against 4 other cost functionals.
       **N.B.: On the other hand, some cost functionals give better
               results than others for specific problems, and so
               a warning that 'mi' was significantly different than
               'hel' might not actually mean anything (e.g.).

 ** PARAMETERS THAT AFFECT THE COST OPTIMIZATION STRATEGY **
 -onepass    = Use only the refining pass -- do not try a coarse
               resolution pass first.  Useful if you know that only
               small amounts of image alignment are needed.
               [The default is to use both passes.]
 -twopass    = Use a two pass alignment strategy, first searching for
               a large rotation+shift and then refining the alignment.
               [Two passes are used by default for the first sub-brick]
               [in the source dataset, and then one pass for the others.]
               ['-twopass' will do two passes for ALL source sub-bricks.]
 -twoblur rr = Set the blurring radius for the first pass to 'rr'
               millimeters.  [Default == 11 mm]
       **N.B.: You may want to change this from the default if
               your voxels are unusually small or unusually large
               (e.g., outside the range 1-4 mm on each axis).
 -twofirst   = Use -twopass on the first image to be registered, and
               then on all subsequent images from the source dataset,
               use results from the first image's coarse pass to start
               the fine pass.
               (Useful when there may be large motions between the   )
               (source and the base, but only small motions within   )
               (the source dataset itself; since the coarse pass can )
               (be slow, doing it only once makes sense in this case.)
       **N.B.: [-twofirst is on by default; '-twopass' turns it off.]
 -twobest bb = In the coarse pass, use the best 'bb' set of initial
               points to search for the starting point for the fine
               pass.  If bb==0, then no search is made for the best
               starting point, and the identity transformation is
               used as the starting point.  [Default=4; min=0 max=7]
       **N.B.: Setting bb=0 will make things run faster, but less reliably.
 -fineblur x = Set the blurring radius to use in the fine resolution
               pass to 'x' mm.  A small amount (1-2 mm?) of blurring at
               the fine step may help with convergence, if there is
               some problem.  [Default == 0 mm]
   **NOTES ON
   **STRATEGY: * If you expect only small-ish (< 2 voxels?) image movement,
                 then using '-onepass' or '-twobest 0' makes sense.
               * If you expect large-ish image movements, then do not
                 use '-onepass' or '-twobest 0'; the purpose of the
                 '-twobest' parameter is to search for large initial
                 rotations/shifts with which to start the coarse
                 optimization round.
               * If you have multiple sub-bricks in the source dataset,
                 then the default '-twofirst' makes sense if you don't expect
                 large movements WITHIN the source, but expect large motions
                 between the source and base.

 -cmass        = Use the center-of-mass calculation to bracket the shifts.
                   [This option is OFF by default]
                 If given in the form '-cmass+xy' (for example), means to
                 do the CoM calculation in the x- and y-directions, but
                 not the z-direction.
 -nocmass      = Don't use the center-of-mass calculation. [The default]
                  (You would not want to use the C-o-M calculation if the  )
                  (source sub-bricks have very different spatial locations,)
                  (since the source C-o-M is calculated from all sub-bricks)
 **EXAMPLE: You have a limited coverage set of axial EPI slices you want to
            register into a larger head volume (after 3dSkullStrip, of course).
            In this case, '-cmass+xy' makes sense, allowing CoM adjustment
            along the x = R-L and y = A-P directions, but not along the
            z = I-S direction, since the EPI doesn't cover the whole brain
            along that axis.
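
            A command sketch for that situation (hypothetical dataset
            names) could be:

              3dAllineate -base anat_stripped+orig -source epi_slab+orig \
                          -cmass+xy -prefix epi_slab_aligned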

 -autoweight = Compute a weight function using the 3dAutomask
               algorithm plus some blurring of the base image.
       **N.B.: '-autoweight+100' means to zero out all voxels
                 with values below 100 before computing the weight.
               '-autoweight**1.5' means to compute the autoweight
                 and then raise it to the 1.5-th power (e.g., to
                 increase the weight of high-intensity regions).
               These two processing steps can be combined, as in
                 '-autoweight+100**1.5'
       **N.B.: Some cost functionals do not allow -autoweight, and
               will use -automask instead.  A warning message
               will be printed if you run into this situation.
               If a clip level '+xxx' is appended to '-autoweight',
               then the conversion into '-automask' will NOT happen.
                Thus, a small positive '+xxx' can be used to trick
                -autoweight into working on any cost functional.
 -automask   = Compute a mask function, which is like -autoweight,
               but the weight for a voxel is set to either 0 or 1.
       **N.B.: '-automask+3' means to compute the mask function, and
               then dilate it outwards by 3 voxels (e.g.).
 -autobox    = Expand the -automask function to enclose a rectangular
               box that holds the irregular mask.
       **N.B.: This is the default mode of operation!
               For intra-modality registration, '-autoweight' may be better!
             * If the cost functional is 'ls', then '-autoweight' will be
               the default, instead of '-autobox'.
 -nomask     = Don't compute the autoweight/mask; if -weight is not
               also used, then every voxel will be counted equally.
 -weight www = Set the weighting for each voxel in the base dataset;
               larger weights mean that voxel counts more in the cost
               function.
       **N.B.: The weight dataset must be defined on the same grid as
               the base dataset.
       **N.B.: Even if a method does not allow -autoweight, you CAN
               use a weight dataset that is not 0/1 valued.  The
               risk is yours, of course (!*! as always in AFNI !*!).
 -wtprefix p = Write the weight volume to disk as a dataset with
               prefix name 'p'.  Used with '-autoweight/mask', this option
               lets you see what voxels were important in the algorithm.

    Method  Allows -autoweight
    ------  ------------------
     ls     YES
     mi     NO
     crM    YES
     nmi    NO
     hel    NO
     crA    YES
     crU    YES
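
     As a sketch (hypothetical names), choosing a cost functional from
     the 'YES' column above and saving the weight volume for inspection:

       3dAllineate -base anat+orig -source epi+orig \
                   -cost crM -autoweight            \
                   -wtprefix wt_check               \
                   -prefix epi_aligned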

 -source_mask sss = Mask the source (input) dataset, using 'sss'.
 -source_automask = Automatically mask the source dataset.
                      [By default, all voxels in the source]
                      [dataset are used in the matching.   ]
            **N.B.: You can also use '-source_automask+3' to dilate
                    the default source automask outward by 3 voxels.

 -warp xxx   = Set the warp type to 'xxx', which is one of
                 shift_only         *OR* sho =  3 parameters
                 shift_rotate       *OR* shr =  6 parameters
                 shift_rotate_scale *OR* srs =  9 parameters
                 affine_general     *OR* aff = 12 parameters
               [Default = affine_general, which includes image]
               [      shifts, rotations, scaling, and shearing]

 -warpfreeze = Freeze the non-rigid body parameters (those past #6)
               after doing the first sub-brick.  Subsequent volumes
               will have the same spatial distortions as sub-brick #0,
               plus rigid body motions only.

 -replacebase   = If the source has more than one sub-brick, and this
                  option is turned on, then after the #0 sub-brick is
                  aligned to the base, the aligned #0 sub-brick is used
                  as the base image for subsequent source sub-bricks.

 -replacemeth m = After sub-brick #0 is aligned, switch to method 'm'
                  for later sub-bricks.  For use with '-replacebase'.

 -EPI        = Treat the source dataset as being composed of warped
               EPI slices, and the base as comprising anatomically
               'true' images.  Only phase-encoding direction image
               shearing and scaling will be allowed with this option.
       **N.B.: For most people, the base dataset will be a 3dSkullStrip-ed
               T1-weighted anatomy (MPRAGE or SPGR).  If you don't remove
               the skull first, the EPI images (which have little skull
               visible due to fat-suppression) might expand to fit EPI
               brain over T1-weighted skull.
       **N.B.: Usually, EPI datasets don't have as complete slice coverage
               of the brain as do T1-weighted datasets.  If you don't use
               some option (like '-EPI') to suppress scaling in the slice-
               direction, the EPI dataset is likely to stretch the slice
                thickness to better 'match' the T1-weighted brain coverage.
       **N.B.: '-EPI' turns on '-warpfreeze -replacebase'.
               You can use '-nowarpfreeze' and/or '-noreplacebase' AFTER the
               '-EPI' on the command line if you do not want these options used.
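
        A hedged sketch of a typical '-EPI' run (hypothetical names;
        the base should be skull-stripped first, per the note above):

          3dAllineate -base anat_stripped+orig -input epi+orig \
                      -EPI -prefix epi_aligned                 \
                      -1Dmatrix_save epi_aligned_mat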

 -parfix n v   = Fix parameter #n to be exactly at value 'v'.
 -parang n b t = Allow parameter #n to range only between 'b' and 't'.
                 If not given, default ranges are used.
 -parini n v   = Initialize parameter #n to value 'v', but then
                 allow the algorithm to adjust it.
         **N.B.: Multiple '-par...' options can be used, to constrain
                 multiple parameters.
         **N.B.: -parini has no effect if -twopass is used, since
                 the -twopass algorithm carries out its own search
                 for initial parameters.

 -maxrot dd    = Allow maximum rotation of 'dd' degrees.  Equivalent
                 to '-parang 4 -dd dd -parang 5 -dd dd -parang 6 -dd dd'
                 [Default=30 degrees]
 -maxshf dd    = Allow maximum shift of 'dd' millimeters.  Equivalent
                 to '-parang 1 -dd dd -parang 2 -dd dd -parang 3 -dd dd'
                 [Default=33% of the size of the base image]
         **N.B.: This max shift setting is relative to the center-of-mass
                 shift, if the '-cmass' option is used.
 -maxscl dd    = Allow maximum scaling factor to be 'dd'.  Equivalent
                  to '-parang 7 1/dd dd -parang 8 1/dd dd -parang 9 1/dd dd'
                 [Default=1.2=image can go up or down 20% in size]
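
          For example, a sketch with purely illustrative limits, for
          a case where you know the misalignment is small:

            3dAllineate -base anat+orig -source epi+orig  \
                        -maxrot 10 -maxshf 15 -maxscl 1.1 \
                        -prefix epi_aligned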

 -master mmm = Write the output dataset on the same grid as dataset
               'mmm'.  If this option is NOT given, the base dataset
               is the master.
       **N.B.: 3dAllineate transforms the source dataset to be 'similar'
               to the base image.  Therefore, the coordinate system
               of the master dataset is interpreted as being in the
               reference system of the base image.  It is thus vital
               that these finite 3D volumes overlap, or you will lose data!
       **N.B.: If 'mmm' is the string 'SOURCE', then the source dataset
               is used as the master for the output dataset grid.
               You can also use 'BASE', which is of course the default.

 -mast_dxyz del = Write the output dataset using grid spacings of
  *OR*            'del' mm.  If this option is NOT given, then the
 -newgrid del     grid spacings in the master dataset will be used.
                  This option is useful when registering low resolution
                  data (e.g., EPI time series) to high resolution
                  datasets (e.g., MPRAGE) where you don't want to
                  consume vast amounts of disk space interpolating
                  the low resolution data to some artificially fine
                  (and meaningless) spatial grid.

----------------------------------------------
DEFINITION OF AFFINE TRANSFORMATION PARAMETERS
----------------------------------------------
The 3x3 spatial transformation matrix is calculated as [S][D][U],
where [S] is the shear matrix,
      [D] is the scaling matrix, and
      [U] is the rotation (proper orthogonal) matrix.
These matrices are specified in DICOM-ordered (x=-R+L,y=-A+P,z=-I+S)
coordinates as:

  [U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param #4)]
        (angles are in degrees)

  [D] = diag( param#7 , param#8 , param#9 )

        [    1        0     0 ]        [ 1 param#10 param#11 ]
  [S] = [ param#10    1     0 ]   OR   [ 0    1     param#12 ]
        [ param#11 param#12 1 ]        [ 0    0        1     ]

The shift vector comprises parameters #1, #2, and #3.

The goal of the program is to find the warp parameters such that
   I([x]_warped) 'is similar to' J([x]_in)
as closely as possible in some sense of 'similar', where J(x) is the
base image, and I(x) is the source image.

Using '-parfix', you can specify that some of these parameters
are fixed.  For example, '-shift_rotate_scale' is equivalent to
'-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
Don't even think of using the '-parfix' option unless you grok
this example!

----------- Special Note for the '-EPI' Option's Coordinates -----------
In this case, the parameters above are with reference to coordinates
  x = frequency encoding direction (by default, first axis of dataset)
  y = phase encoding direction     (by default, second axis of dataset)
  z = slice encoding direction     (by default, third axis of dataset)
This option lets you freeze some of the warping parameters in ways that
make physical sense, considering how echo-planar images are acquired.
The x- and z-scaling parameters are disabled, and shears will only affect
the y-axis.  Thus, there will be only 9 free parameters when '-EPI' is
used.  If desired, you can use a '-parang' option to allow the fixed
scaling parameters to vary (put these after the '-EPI' option):
  -parang 7 0.833 1.20     to allow x-scaling
  -parang 9 0.833 1.20     to allow z-scaling
You could also fix some of the other parameters, if that makes sense
in your situation; for example, to disable out of slice rotations:
  -parfix 5 0  -parfix 6 0
and to disable out of slice translation:
  -parfix 3 0
NOTE WELL: If you use '-EPI', then the output warp parameters (e.g., in
           '-1Dparam_save') apply to the (freq,phase,slice) xyz coordinates,
           NOT to the DICOM xyz coordinates, so equivalent transformations
           will be expressed with different sets of parameters entirely
           than if you don't use '-EPI'!  This comment does NOT apply
           to the output of '-1Dmatrix_save', since that matrix is
           defined relative to the RAI (DICOM) spatial coordinates.

*********** CHANGING THE ORDER OF MATRIX APPLICATION ***********

  -SDU or -SUD }= Set the order of the matrix multiplication
  -DSU or -DUS }= for the affine transformations:
  -USD or -UDS }=   S = triangular shear (params #10-12)
                    D = diagonal scaling matrix (params #7-9)
                    U = rotation matrix (params #4-6)
                  Default order is '-SDU', which means that
                  the U matrix is applied first, then the
                  D matrix, then the S matrix.

  -Supper      }= Set the S matrix to be upper or lower
  -Slower      }= triangular [Default=lower triangular]

  -ashift OR   }= Apply the shift parameters (#1-3) after OR
  -bshift      }= before the matrix transformation. [Default=after]

            ==================================================
        ===== RWCox - September 2006 - Live Long and Prosper =====
            ==================================================

         ********************************************************
        *** From Webster's Dictionary: Allineate == 'to align' ***
         ********************************************************

===========================================================================
                TOP SECRET HIDDEN OPTIONS (-HELP or -POMOC)
---------------------------------------------------------------------------
                ** N.B.: Most of these are experimental! **
===========================================================================

 -num_rtb n  = At the beginning of the fine pass, the best set of results
               from the coarse pass are 'refined' a little by further
               optimization, before the single best one is chosen for
                the final fine optimization.
              * This option sets the maximum number of cost functional
                evaluations to be used (for each set of parameters)
                in this step.
              * The default is 99; a larger value will take more CPU
                time but may give more robust results.
              * If you want to skip this step entirely, use '-num_rtb 0'.
                 Then, the best of the coarse pass results is taken
                straight to the final optimization passes.
       **N.B.: If you use '-VERB', you will see that one extra case
               is involved in this initial fine refinement step; that
               case is starting with the identity transformation, which
               helps insure against the chance that the coarse pass
               optimizations ran totally amok.
 -nocast     = By default, parameter vectors that are too close to the
               best one are cast out at the end of the coarse pass
               refinement process. Use this option if you want to keep
               them all for the fine resolution pass.
 -norefinal  = Do NOT re-start the fine iteration step after it
               has converged.  The default is to re-start it, which
               usually results in a small improvement to the result
                (at the cost of CPU time).  This re-start step is an
                attempt to avoid a local minimum trap.  It is usually
               not necessary, but sometimes helps.

 -savehist sss = Save start and final 2D histograms as PGM
                 files, with prefix 'sss' (cost: cr mi nmi hel).
                 * if the filename contains 'FF', floats are written
                 * these are the weighted histograms!
                 * -savehist will also save histogram files when
                   the -allcost evaluations take place
                * this option is mostly useless unless '-histbin' is
                  also used
 -median       = Smooth with median filter instead of Gaussian blur.
                 (Somewhat slower, and not obviously useful.)
 -powell m a   = Set the Powell NEWUOA dimensional parameters to
                 'm' and 'a' (cf. source code in powell_int.c).
                 The number of points used for approximating the
                 cost functional is m*N+a, where N is the number
                 of parameters being optimized.  The default values
                 are m=2 and a=3.  Larger values will probably slow
                 the program down for no good reason.
 -target ttt   = Same as '-source ttt'.  In the earliest versions,
                 what I now call the 'source' dataset was called the
                 'target' dataset:
                    Try to remember the kind of September (2006)
                    When life was slow and oh so mellow
                    Try to remember the kind of September
                    When grass was green and source was target.
 -Xwarp       =} Change the warp/matrix setup so that only the x-, y-, or z-
 -Ywarp       =} axis is stretched & sheared.  Useful for EPI, where 'X',
 -Zwarp       =} 'Y', or 'Z' corresponds to the phase encoding direction.
 -FPS fps      = Generalizes -EPI to arbitrary permutation of directions.
 -histpow pp   = By default, the number of bins in the histogram used
                 for calculating the Hellinger, Mutual Information, and
                 Correlation Ratio statistics is n^(1/3), where n is
                 the number of data points.  You can change that exponent
                 to 'pp' with this option.
 -histbin nn   = Or you can just set the number of bins directly to 'nn'.
 -eqbin   nn   = Use equalized marginal histograms with 'nn' bins.
 -clbin   nn   = Use 'nn' equal-spaced bins except for the bot and top,
                 which will be clipped (thus the 'cl').  If nn is 0, the
                 program will pick the number of bins for you.
                 **N.B.: '-clbin 0' is now the default [25 Jul 2007];
                         if you want the old all-equal-spaced bins, use
                         '-histbin 0'.
                 **N.B.: '-clbin' only works when the datasets are
                         non-negative; any negative voxels in either
                         the input or source volumes will force a switch
                         to all equal-spaced bins.
 -wtmrad  mm   = Set autoweight/mask median filter radius to 'mm' voxels.
 -wtgrad  gg   = Set autoweight/mask Gaussian filter radius to 'gg' voxels.
 -nmsetup nn   = Use 'nn' points for the setup matching [default=98756]
 -ignout       = Ignore voxels outside the warped source dataset.

 -blok bbb     = Blok definition for the 'lp?' (Local Pearson) cost
                 functions: 'bbb' is one of
                   'BALL(r)' or 'CUBE(r)' or 'RHDD(r)' or 'TOHD(r)'
                 corresponding to
                   spheres or cubes or rhombic dodecahedra or
                   truncated octahedra
                 where 'r' is the size parameter in mm.
                 [Default is 'RHDD(6.54321)' (rhombic dodecahedron)]
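
        A hedged sketch using the experimental Local Pearson cost
        (listed below) with an arbitrary illustrative blok choice:

          3dAllineate -base anat+orig -source epi+orig \
                      -cost lpc -blok 'BALL(8)'        \
                      -prefix epi_aligned_lpc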

 -allcost        = Compute ALL available cost functionals and print them
                   at various points.
 -allcostX       = Compute and print ALL available cost functionals for the
                   un-warped inputs, and then quit.
 -allcostX1D p q = Compute ALL available cost functionals for the set of
                   parameters given in the 1D file 'p' (12 values per row),
                   write them to the 1D file 'q', then exit. (For you, Zman)
                  * N.B.: If -fineblur is used, that amount of smoothing
                          will be applied prior to the -allcostX evaluations.
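
        For instance, a quick way to print every cost functional for
        two datasets as they stand, with no registration performed:

          3dAllineate -base anat+orig -source epi+orig -allcostX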

===========================================================================

 Hidden experimental cost functionals:
   sp   *OR*  spearman        = Spearman [rank] Correlation
   je   *OR*  jointentropy    = Joint Entropy [H(b,s)]
   lss  *OR*  signedPcor      = Signed Pearson Correlation
   lpc  *OR*  localPcorSigned = Local Pearson Correlation Signed
   lpa  *OR*  localPcorAbs    = Local Pearson Correlation Abs
   ncd  *OR*  NormCompDist    = Normalized Compression Distance

 Cost functional descriptions (for use with -allcost output):
   ls  :: 1 - abs(Pearson correlation coefficient)
   sp  :: 1 - abs(Spearman correlation coefficient)
   mi  :: - Mutual Information = H(base,source)-H(base)-H(source)
   crM :: 1 - abs[ CR(base,source) * CR(source,base) ]
   nmi :: 1/Normalized MI = H(base,source)/[H(base)+H(source)]
   je  :: H(base,source) = joint entropy of image pair
   hel :: - Hellinger distance(base,source)
   crA :: 1 - abs[ CR(base,source) + CR(source,base) ]
   crU :: CR(source,base) = Var(source|base) / Var(source)
   lss :: Pearson correlation coefficient between image pair
   lpc :: nonlinear average of Pearson cc over local neighborhoods
   lpa :: 1 - abs(lpc)
   ncd :: mutual compressibility (via zlib) -- doesn't work yet

 * N.B.: Some cost functional values (as printed out herein)
   are negated (e.g., 'hel', 'mi'), so that the best image
   alignment will be found when the cost is minimized.  See
   the descriptions above and the references below for more
   details for each functional.

 * For more information about the 'lpc' functional, see
     ZS Saad, DR Glen, G Chen, MS Beauchamp, R Desai, RW Cox.
       A new method for improving functional-to-structural
       MRI alignment using local Pearson correlation.
       NeuroImage 44: 839-848, 2009.
     http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
     http://afni.nimh.nih.gov/sscc/rwcox/papers/LocalPearson2009.pdf
   The '-blok' option can be used to control the regions
   (size and shape) used to compute the local correlations.

 * For more information about the 'cr' functionals, see
     http://en.wikipedia.org/wiki/Correlation_ratio
   Note that CR(x,y) is not the same as CR(y,x), which
   is why there are symmetrized versions of it available.

 * For more information about the 'mi', 'nmi', and 'je'
   cost functionals, see
     http://en.wikipedia.org/wiki/Mutual_information
     http://en.wikipedia.org/wiki/Joint_entropy
     http://www.cs.jhu.edu/~cis/cista/746/papers/mutual_info_survey.pdf

 * For more information about the 'hel' functional, see
     http://en.wikipedia.org/wiki/Hellinger_distance

 * Some cost functionals (e.g., 'mi', 'cr', 'hel') are
   computed by creating a 2D joint histogram of the
   base and source image pair.  Various options above
   (e.g., '-histbin', etc.) can be used to control the
   number of bins used in the histogram on each axis.
   (If you care to control the program in such detail!)

 * Minimization of the chosen cost functional is done via
   the NEWUOA software, described in detail in
     MJD Powell. 'The NEWUOA software for unconstrained
       optimization without derivatives.' In: GD Pillo,
       M Roma (Eds), Large-Scale Nonlinear Optimization.
       Springer, 2006.
     http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2004_08.pdf

===========================================================================

 -nwarp type = Experimental nonlinear warp:
              * At present, the only 'type' is 'bilinear',
                as in 3dWarpDrive, with 39 parameters.
              * I plan to implement more complicated nonlinear
                warps in the future, someday ....
              * -nwarp can only be applied to a source dataset
                that has a single sub-brick!
              * -1Dparam_save and -1Dparam_apply work with
                bilinear warps; see the Notes for more information.
-nwarp NOTES:
-------------
* -nwarp is slow!
* Check the results to make sure the optimizer didn't run amok!
   (You should always do this with any registration software.)
* If you use -1Dparam_save, then you can apply the bilinear
   warp to another dataset using -1Dparam_apply in a later
   3dAllineate run. To do so, use '-nwarp bilinear' in both
   runs, so that the program knows what the extra parameters
   in the file are to be used for (see the sketch after these notes).
  ++ 43 values are saved in 1 row of the param file.
  ++ The first 12 are the affine parameters
  ++ The next 27 are the D1,D2,D3 matrix parameters.
  ++ The final 'extra' 4 values are used to specify
      the center of coordinates (vector Xc below), and a
      pre-computed scaling factor applied to parameters #13..39.
* Bilinear warp formula:
   Xout = inv[ I + {D1 (Xin-Xc) | D2 (Xin-Xc) | D3 (Xin-Xc)} ] [ A Xin ]
  where Xin  = input vector  (base dataset coordinates)
        Xout = output vector (source dataset coordinates)
        Xc   = center of coordinates used for nonlinearity
               (will be the center of the base dataset volume)
        A    = matrix representing affine transformation (12 params)
        I    = 3x3 identity matrix
    D1,D2,D3 = three 3x3 matrices (the 27 'new' parameters)
               * when all 27 parameters == 0, warp is purely affine
     {P|Q|R} = 3x3 matrix formed by adjoining the 3-vectors P,Q,R
    inv[...] = inverse 3x3 matrix of stuff inside '[...]'
* The inverse of a bilinear transformation is another bilinear
   transformation.  Someday, I may write a program that will let
   you compute that inverse transformation, so you can use it for
   some cunning and devious purpose.
* If you expand the inv[...] part of the above formula in a 1st
   order Taylor series, you'll see that a bilinear warp is basically
   a quadratic warp, with the additional feature that its inverse
   is directly computable (unlike a pure quadratic warp).
* Is '-nwarp bilinear' useful?  Try it and tell me!
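
* A hedged sketch of the save/apply workflow described above, using
   hypothetical dataset names (anat+orig, func+orig, func2+orig);
   -base, -source, and -prefix are standard 3dAllineate options:

      # Run 1: estimate the bilinear warp, saving all 43 parameters
      3dAllineate -base anat+orig -source func+orig \
                  -nwarp bilinear -1Dparam_save bilin.1D \
                  -prefix func_warp
      # Run 2: apply the saved warp to another single sub-brick dataset
      3dAllineate -base anat+orig -source func2+orig \
                  -nwarp bilinear -1Dparam_apply bilin.1D \
                  -prefix func2_warp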

===========================================================================

++ Compile date = Mar 13 2009




AFNI program: 3dAnatNudge
Usage: 3dAnatNudge [options]
Moves the anat dataset around to best overlap the epi dataset.

OPTIONS:
 -anat aaa   = aaa is a 'scalped' (3dIntracranial) high-resolution
                anatomical dataset [a mandatory option]
 -epi eee    = eee is an EPI dataset [a mandatory option]
                The first [0] sub-brick from each dataset is used,
                unless otherwise specified on the command line.
 -prefix ppp = ppp is the prefix of the output dataset;
                this dataset will differ from the input only
                in its name and its xyz-axes origin
                [default=don't write new dataset]
 -step sss   = set the step size to be sss times the voxel size
                in the anat dataset [default=1.0]
 -x nx       = search plus and minus nx steps along the EPI
 -y ny          dataset's x-axis; similarly for ny and the
 -z nz          y-axis, and for nz and the z-axis
                [default: nx=1 ny=5 nz=0]
 -verb       = print progress reports (this is a slow program)

NOTES
*Systematically moves the anat dataset around and finds the shift
  that maximizes overlap between the anat dataset and the EPI
  dataset.  No rotations are done.
*Note that if you use -prefix, a new dataset will be created that
  is a copy of the anat, except that its origin will be shifted
  and it will have a different ID code than the anat.  If you want
  to use this new dataset as the anatomy parent for the EPI
  datasets, you'll have to use
    3drefit -apar ppp+orig eee1+orig eee2+orig ...
*If no new dataset is written (no -prefix option), then you
  can use the 3drefit command emitted at the end to modify
  the origin of the anat dataset.  (Assuming you trust the
  results - visual inspection is recommended!)
*The reason the default search grid is mostly along the EPI y-axis
  is that axis is usually the phase-encoding direction, which is
  most subject to displacement due to off-resonance effects.
*Note that the time this program takes will be proportional to
  (2*nx+1)*(2*ny+1)*(2*nz+1), so using a very large search grid
  will result in a very large usage of CPU time.
*Recommended usage:
 + Make a 1-brick function volume from a typical EPI dataset:
     3dbucket -fbuc -prefix epi_fb epi+orig
 + Use 3dIntracranial to scalp a T1-weighted volume:
     3dIntracranial -anat spgr+orig -prefix spgr_st
 + Use 3dAnatNudge to produce a shifted anat dataset
     3dAnatNudge -anat spgr_st+orig -epi epi_fb+orig -prefix spgr_nudge
 + Start AFNI and look at epi_fb overlaid in color on the
    anat datasets spgr_st+orig and spgr_nudge+orig, to see if the
    nudged dataset seems like a better fit.
 + Delete the nudged dataset spgr_nudge.
 + If the nudged dataset DOES look better, then apply the
    3drefit command output by 3dAnatNudge to spgr+orig.
*Note that the x-, y-, and z-axes for the epi and anat datasets
  may point in different directions (e.g., axial SPGR and
  coronal EPI).  The 3drefit command applies to the anat
  dataset, NOT to the EPI dataset.
*If the program runs successfully, the only thing set to stdout
  will be the 3drefit command string; all other messages go to
  stderr.  This can be useful if you want to capture the command
  to a shell variable and then execute it, as in the following
  csh fragment:
     set cvar = `3dAnatNudge ...`
     if( $cvar[1] == "3drefit" ) $cvar
  The test on the first sub-string in cvar allows for the
  possibility that the program fails, or that the optimal
  nudge is zero.

++ Compile date = Mar 13 2009




AFNI program: 3dAnhist
Usage: 3dAnhist [options] dataset
Input dataset is a T1-weighted high-resolution brain volume (shorts only).
Output is a list of peaks in the histogram, to stdout, in the form
  ( datasetname #peaks peak1 peak2 ... )
In the C-shell, for example, you could do
  set anhist = `3dAnhist -q -w1 dset+orig`
Then the number of peaks found is in the shell variable $anhist[2].

Options:
  -q  = be quiet (don't print progress reports)
  -h  = dump histogram data to Anhist.1D and plot to Anhist.ps
  -F  = DON'T fit histogram with stupid curves.
  -w  = apply a Winsorizing filter prior to histogram scan
         (or -w7 to Winsorize 7 times, etc.)
  -2  = Analyze top 2 peaks only, for overlap etc.

  -label xxx = Use 'xxx' for a label on the Anhist.ps plot file
                instead of the input dataset filename.
  -fname fff = Use 'fff' for the filename instead of 'Anhist'.

If the '-2' option is used, AND if 2 peaks are detected, AND if
the -h option is also given, then stdout will be of the form
  ( datasetname 2 peak1 peak2 thresh CER CJV count1 count2 count1/count2)
where 2      = number of peaks
      thresh = threshold between peak1 and peak2 for decision-making
      CER    = classification error rate of thresh
      CJV    = coefficient of joint variation
      count1 = area under fitted PDF for peak1
      count2 = area under fitted PDF for peak2
      count1/count2 = ratio of the above quantities
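
For example, following the word-indexing convention above (where the
peak count lands in $anhist[2]), a tcsh sketch to extract the GM/WM
threshold might be (dataset name hypothetical):

   set anhist = `3dAnhist -q -2 -h dset+orig`
   # if 2 peaks were fit, the decision threshold should be word #5
   if( $anhist[2] == 2 ) echo "GM/WM threshold = $anhist[5]"
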
NOTA BENE
---------
* If the input is a T1-weighted MRI dataset (the usual case), then
   peak 1 should be the gray matter (GM) peak and peak 2 the white
   matter (WM) peak.
* For the definitions of CER and CJV, see the paper
   Method for Bias Field Correction of Brain T1-Weighted Magnetic
   Resonance Images Minimizing Segmentation Error
   JD Gispert, S Reig, J Pascau, JJ Vaquero, P Garcia-Barreno,
   and M Desco, Human Brain Mapping 22:133-144 (2004).
* Roughly speaking, CER is the ratio of the overlapping area of the
   2 peak fitted PDFs to the total area of the fitted PDFs.  CJV is
   (sigma_GM+sigma_WM)/(mean_WM-mean_GM), and is a different, ad hoc,
   measurement of how much the two PDFs overlap.
* The fitted PDFs are NOT Gaussians.  They are of the form
   f(x) = b((x-p)/w,a), where p=location of peak, w=width, 'a' is
   a skewness parameter between -1 and 1; the basic distribution
   is defined by b(x)=(1-x^2)^2*(1+a*x*abs(x)) for -1 < x < 1.

-- RWCox - November 2004

++ Compile date = Mar 13 2009




AFNI program: 3dAttribute
Usage: 3dAttribute [options] aname dset
Prints (to stdout) the value of the attribute 'aname' from
the header of dataset 'dset'.  If the attribute doesn't exist,
prints nothing and sets the exit status to 1.
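
For example, a tcsh fragment that tests the exit status directly
(using the same dataset name as the examples below):

   3dAttribute BRICK_LABS elvis+orig > /dev/null
   if( $status ) echo "no BRICK_LABS attribute in elvis+orig"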

Options:
  -name = Include attribute name in printout
  -all  = Print all attributes [don't put aname on command line]
          Also implies '-name'.  Attributes print in whatever order
          they are in the .HEAD file, one per line.  You may want
          to do '3dAttribute -all elvis+orig | sort' to get them
          in alphabetical order.
  -center = Center of volume in RAI coordinates.
            Note that center is not itself an attribute in the 
           .HEAD file. It is calculated from other attributes.
  Special options for string attributes:
    -ssep SSEP    Use string SSEP as a separator between strings for
                  multiple sub-bricks. The default is '~', which is what
                  is used internally in AFNI's .HEAD file. For tcsh,
                  I recommend ' ' which makes parsing easy, assuming each
                  individual string contains no spaces to begin with.
                  Try -ssep 'NUM'
    -sprep SPREP  Use string SPREP to replace blank space in string 
                  attributes.
    -quote        Use single quote around each string.
    Examples:
       3dAttribute -quote -ssep ' '  BRICK_LABS SomeStatDset+tlrc.BRIK
       3dAttribute -quote -ssep 'NUM' -sprep '+' BRICK_LABS SomeStatDset+tlrc.BRIK

++ Compile date = Mar 13 2009




AFNI program: 3dAutoTcorrelate
Usage: 3dAutoTcorrelate [options] dset
Computes the correlation coefficient between each pair of
voxels in the input dataset, and stores the output into
a new anatomical bucket dataset.

Options:
  -pearson  = Correlation is the normal Pearson (product moment)
                correlation coefficient [default].
  -spearman = Correlation is the Spearman (rank) correlation
                coefficient.
  -quadrant = Correlation is the quadrant correlation coefficient.

  -polort m = Remove polynomial trend of order 'm', for m=-1..3.
                [default is m=1; removal is by least squares].
                Using m=-1 means no detrending; this is only useful
                for data/information that has been pre-processed.

  -autoclip = Clip off low-intensity regions in the dataset,
  -automask =  so that the correlation is only computed between
               high-intensity (presumably brain) voxels.  The
               intensity level is determined the same way that
               3dClipLevel works.

  -prefix p = Save output into dataset with prefix 'p'
               [default prefix is 'ATcorr'].

  -time     = Save output as a 3D+time dataset instead
               of an anat bucket.
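
Example (a minimal sketch; 'rest+orig' is a hypothetical 3D+time dataset):

   3dAutoTcorrelate -automask -polort 1 -prefix ATcorr rest+orig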

Notes:
 * The output dataset is anatomical bucket type of shorts.
 * The output file might be gigantic and you might run out
    of memory running this program.  Use at your own risk!
 * The program prints out an estimate of its memory usage
    when it starts.  It also prints out a progress 'meter'
    of 1 dot per 10 output sub-bricks.
 * This is a quick hack for Peter Bandettini. Now pay up.

-- RWCox - Jan 31 2002

++ Compile date = Mar 13 2009




AFNI program: 3dAutobox
Usage: 3dAutobox [options] DATASET
Computes size of a box that fits around the volume.
Also can be used to crop the volume to that box.
Optional parameters are:
-prefix PREFIX: Crop the input dataset to the size of the box.
                If this option is not used, no new volume is written out.
-input DATASET: An alternate way to specify the input dataset.
                The default method is to pass DATASET as
                the last parameter on the command line.
-noclust      : Don't do any clustering to find the box. Any non-zero
                voxel will be preserved in the cropped volume.
                The default uses some clustering to find the cropping box.
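
Example (a minimal sketch; 'anat+orig' is a hypothetical dataset):
   3dAutobox -prefix anat_crop -input anat+orig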


++ Compile date = Mar 13 2009




AFNI program: 3dAutomask
Usage: 3dAutomask [options] dataset
Input dataset is EPI 3D+time, or a skull-stripped anatomical.
Output dataset is a brain-only mask dataset.
Method:
 + Uses 3dClipLevel algorithm to find clipping level.
 + Keeps only the largest connected component of the
   supra-threshold voxels, after an erosion/dilation step.
 + Writes result as a 'fim' type of functional dataset,
   which will be 1 inside the mask and 0 outside the mask.
Options:
  -prefix ppp = Write mask into dataset with prefix 'ppp'.
                 [Default == 'automask']
  -clfrac cc  = Set the 'clip level fraction' to 'cc', which
                 must be a number between 0.1 and 0.9.
                 A small 'cc' means to make the initial threshold
                 for clipping (a la 3dClipLevel) smaller, which
                 will tend to make the mask larger.  [default=0.5]
  -nograd     = The program uses a 'gradual' clip level by default.
                 To use a fixed clip level, use '-nograd'.
                 [Change to gradual clip level made 24 Oct 2006.]
  -peels pp   = Peel the mask 'pp' times, then unpeel.  Designed
                 to clip off protuberances less than 2*pp voxels
                 thick. [Default == 1]
  -nbhrs nn   = Define the number of neighbors needed for a voxel
                 NOT to be peeled.  The 18 nearest neighbors in
                 the 3D lattice are used, so 'nn' should be between
                 9 and 18.  [Default == 17]
  -q          = Don't write progress messages (i.e., be quiet).
  -eclip      = After creating the mask, remove exterior
                 voxels below the clip threshold.
  -dilate nd  = Dilate the mask outwards 'nd' times.
  -erode ne   = Erode the mask inwards 'ne' times.
  -SI hh      = After creating the mask, find the most superior
                 voxel, then zero out everything more than 'hh'
                 millimeters inferior to that.  hh=130 seems to
                 be decent (i.e., for Homo Sapiens brains).
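
Example (a minimal sketch; 'epi+orig' is a hypothetical EPI dataset):
   3dAutomask -prefix epi_mask -dilate 1 epi+orig
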
--------------------------------------------------------------------
How to make an edge-of-brain mask:
* 3dSkullStrip to create a brain-only dataset; say, Astrip+orig
* 3dAutomask -prefix Amask Astrip+orig
* Create a mask of edge-only voxels via
   3dcalc -a Amask+orig -b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
          -expr 'ispositive(a)*amongst(0,b,c,d,e,f,g)' -prefix Aedge
  which will be 1 at all voxels in the brain mask that have a
  nearest neighbor that is NOT in the brain mask.
* cf. '3dcalc -help' DIFFERENTIAL SUBSCRIPTS for information
  on the 'a+i' et cetera inputs used above.
* In regions where the brain mask is 'stair-stepping', then the
  voxels buried inside the corner of the steps probably won't
  show up in this edge mask:
     ...00000000...
     ...aaa00000...
     ...bbbaa000...
     ...bbbbbaa0...
  Only the 'a' voxels are in this edge mask, and the 'b' voxels
  down in the corners won't show up, because they only touch a
  0 voxel on a corner, not face-on.  Depending on your use for
  the edge mask, this effect may or may not be a problem.
--------------------------------------------------------------------

++ Compile date = Mar 13 2009




AFNI program: 3dBRAIN_VOYAGERtoAFNI

Usage: 3dBRAIN_VOYAGERtoAFNI <-input BV_VOLUME.vmr> 
                             [-bs] [-qx] [-tlrc|-acpc|-orig] [<-prefix PREFIX>]
 
 Converts a BrainVoyager vmr dataset to AFNI's BRIK format.
 The conversion is based on information from BrainVoyager's
 website: www.brainvoyager.com. 
 Sample data and information provided by 
  Adam Greenberg and Nikolaus Kriegeskorte.

  If you get error messages about the number of
 voxels and file size, try the options below.
 I hope to automate these options once I have
 a better description of the BrainVoyager QX format.

  Optional Parameters:
  -bs: Force byte swapping.
  -qx: .vmr file is from BrainVoyager QX
  -tlrc: dset in tlrc space
  -acpc: dset in acpc-aligned space
  -orig: dset in orig space
  If unspecified, the program attempts to guess the view from
  the name of the input.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: 3dBlurToFWHM
Usage: 3dBlurToFWHM [options]
Blurs a 'master' dataset until it reaches a specified FWHM
smoothness (approximately).  The same blurring schedule is
applied to the input dataset to produce the output.  The goal
is to make the output dataset have the given smoothness, no
matter what smoothness it had on input (however, the program
cannot 'unsmooth' a dataset!).  See below for the method used.

OPTIONS
-------
 -input      ddd = This required 'option' specifies the dataset
                   that will be smoothed and output.
 -blurmaster bbb = This option specifies the dataset whose
                    smoothness controls the process.
                  **N.B.: If not given, the input dataset is used.
                  **N.B.: This should be one continuous run.
                          Do not input catenated runs!
 -prefix     ppp = Prefix for output dataset will be 'ppp'.
                  **N.B.: Output dataset is always in float format.
 -mask       mmm = Mask dataset, if desired.  Blurring will
                   occur only within the mask.  Voxels NOT in
                   the mask will be set to zero in the output.
 -automask       = Create an automask from the input dataset.
                  **N.B.: Not useful if the input dataset has
                          been detrended before input!
 -FWHM       f   = Blur until the 3D FWHM is 'f'.
 -FWHMxy     f   = Blur until the 2D (x,y)-plane FWHM is 'f'.
                   No blurring is done along the z-axis.
                  **N.B.: Note that you can't REDUCE the smoothness
                          of a dataset.
                  **N.B.: Here, 'x', 'y', and 'z' refer to the
                          grid/slice order as stored in the dataset,
                          not DICOM ordered coordinates!
                  **N.B.: With -FWHMxy, smoothing is done only in the
                          dataset xy-plane.  With -FWHM, smoothing
                          is done in 3D.
                  **N.B.: The actual goal is reached when
                             -FWHM  :  cbrt(FWHMx*FWHMy*FWHMz) >= f
                            -FWHMxy:  sqrt(FWHMx*FWHMy)       >= f
                          That is, when the area or volume of a
                          'resolution element' goes past a threshold.
 -quiet          = Shut up the verbose progress reports.
                  **N.B.: This should be the first option, to stifle
                          any verbosity from the option processing code.

FILE RECOMMENDATIONS for -blurmaster:
For FMRI statistical purposes, you DO NOT want the FWHM to reflect
  the spatial structure of the underlying anatomy.  Rather, you want
  the FWHM to reflect the spatial structure of the noise.  This means
  that the -blurmaster dataset should not have anatomical structure.  One
  good form of input is the output of '3dDeconvolve -errts', which is
  the residuals left over after the GLM fitted signal model is subtracted
  out from each voxel's time series.
You CAN give a multi-brick EPI dataset as the -blurmaster dataset; the
  dataset will be detrended in time (like the -detrend option in 3dFWHMx)
  which will tend to remove the spatial structure.  This makes it
  practicable to make the input and blurmaster datasets be the same,
  without having to create a detrended or residual dataset beforehand.
  Considering the accuracy of blurring estimates, this is probably good
  enough for government work [that is an insider's joke]. 
  N.B.: Do not use catenated runs as blurmasters. There 
  should be no discontinuities in the time axis of blurmaster.
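
A hedged example of this recommendation (dataset names hypothetical;
'errts+orig' is assumed to be the output of '3dDeconvolve -errts'):
   3dBlurToFWHM -input epi+orig -blurmaster errts+orig \
                -FWHM 8.0 -prefix epi_blur8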

ALSO SEE:
 * 3dFWHMx, which estimates smoothness globally
 * 3dLocalstat -stat FWHM, which estimates smoothness locally
 * This paper, which discusses the need for a fixed level of smoothness
   when combining FMRI datasets from different scanner platforms:
     Friedman L, Glover GH, Krenz D, Magnotta V; The FIRST BIRN. 
     Reducing inter-scanner variability of activation in a multicenter
     fMRI study: role of smoothness equalization.
     Neuroimage. 2006 Oct 1;32(4):1656-68.

METHOD:
The blurring is done by a conservative finite difference approximation
to the diffusion equation:
  du/dt = d/dx[ D_x(x,y,z) du/dx ] + d/dy[ D_y(x,y,z) du/dy ]
                                   + d/dz[ D_z(x,y,z) du/dz ]
        = div[ D(x,y,z) grad[u(x,y,z)] ]
where diffusion tensor D() is diagonal, Euler time-stepping is used, and
with Neumann (reflecting) boundary conditions at the edges of the mask
(which ensures that voxel data inside and outside the mask don't mix).
* At each pseudo-time step, the FWHM is estimated globally (like '3dFWHMx')
  and locally (like '3dLocalstat -stat FWHM'). Voxels where the local FWHM
  goes past the goal will not be smoothed any more (D gets set to zero).
* When the global smoothness estimate gets close to the goal, the blurring
  rate (pseudo-time step) will be reduced, to avoid over-smoothing.
* When an individual direction's smoothness (e.g., FWHMz) goes past the goal,
  all smoothing in that direction stops, but the other directions continue
  to be smoothed until the overall resolution element goal is achieved.
* When the global FWHM estimate reaches the goal, the program is done.
  It will also stop if progress stalls for some reason, or if the maximum
  iteration count is reached (infinite loops being unpopular).
* The output dataset will NOT have exactly the smoothness you ask for, but
  it will be close (fondly we do hope).  In our Imperial experiments, the
  results (measured via 3dFWHMx) are within 10% of the goal (usually better).
* 2D blurring via -FWHMxy may increase the smoothness in the z-direction
  reported by 3dFWHMx, even though there is no inter-slice processing.
  At this moment, I'm not sure why.  It may be an estimation artifact due
  to increased correlation in the xy-plane that biases the variance estimates
  used to calculate FWHMz.

ADVANCED OPTIONS:
 -maxite  ccc = Set maximum number of iterations to 'ccc' [Default=variable].
 -rate    rrr = The value of 'rrr' should be a number between
                0.05 and 3.5, inclusive.  It is a factor to change
                the overall blurring rate (slower for rrr < 1) and thus
                require more or less blurring steps.  This option should only
                 be needed to slow down the program if it over-smooths
                significantly (e.g., it overshoots the desired FWHM in
                Iteration #1 or #2).  You can increase the speed by using
                rrr > 1, but be careful and examine the output.
 -nbhd    nnn = As in 3dLocalstat, specifies the neighborhood
                used to compute local smoothness.
                [Default = 'SPHERE(-4)' in 3D, 'SPHERE(-6)' in 2D]
               ** N.B.: For the 2D -FWHMxy, a 'SPHERE()' nbhd
                        is really a circle in the xy-plane.
               ** N.B.: If you do NOT want to estimate local
                        smoothness, use '-nbhd NULL'.
 -bsave   bbb = Save the local smoothness estimates at each iteration
                with dataset prefix 'bbb' [for debugging purposes].
 -bmall       = Use all blurmaster sub-bricks.
                [Default: a subset will be chosen, for speed]
 -unif        = Uniformize the voxel-wise MAD in the blurmaster AND
                input datasets prior to blurring.  Will be restored
                in the output dataset.
 -detrend     = Detrend blurmaster dataset to order NT/30 before starting.
 -nodetrend   = Turn off detrending of blurmaster.
               ** N.B.: '-detrend' is the new default [05 Jun 2007]!
 -detin       = Also detrend input before blurring it, then retrend
                it afterwards. [Off by default]
 -temper      = Try harder to make the smoothness spatially uniform.

-- Author: The Dreaded Emperor Zhark - Nov 2006

++ Compile date = Mar 13 2009




AFNI program: 3dBrickStat
Usage: 3dBrickStat [options] dataset
Compute maximum and/or minimum voxel values of an input dataset

The output is a number to the console.  The input dataset
may use a sub-brick selection list, as in program 3dcalc.

Note: If you don't select a single sub-brick, the parameter you get
----- back is computed from all the sub-bricks in the dataset.
Options :
  -quick = get the information from the header only (default)
  -slow = read the whole dataset to find the min and max values
         all other options except min and max imply slow
  -min = print the minimum value in dataset
  -max = print the maximum value in dataset (default)
  -mean = print the mean value in dataset 
  -var = print the variance in the dataset 
  -count = print the number of voxels included
  -volume = print the volume of voxels included in microliters
  -positive = include only positive voxel values 
  -negative = include only negative voxel values 
  -zero = include only zero voxel values 
  -non-positive = include only voxel values 0 or negative 
  -non-negative = include only voxel values 0 or greater 
  -non-zero = include only voxel values not equal to 0 
  -nan = include only voxel values that are not numbers (NaN, inf, -inf;
       implies slow)
  -nonan = exclude voxel values that are not numbers
  -mask dset = use dset as mask to include/exclude voxels
  -automask = automatically compute mask for dataset
    Cannot be combined with -mask
  -percentile p0 ps p1 = write the percentile values starting
              at p0% and ending at p1% at a step of ps%
              Output is of the form p% value   p% value ...
              Percentile values are output first. Only one sub-brick
              is accepted as input with this option.
              Write the author if you REALLY need this option
              to work with multiple sub-bricks.
  -median = a shortcut for -percentile 50 1 50
  -ver = print author and version info
  -help = print this help screen
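
For example, a tcsh fragment to capture the (slowly computed) maximum
into a shell variable (dataset name hypothetical):
   set vmax = `3dBrickStat -slow -max func+orig`
   echo "maximum = $vmax"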

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dCM
Usage: 3dCM [options] dset
Output = center of mass of dataset, to stdout.
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be averaged from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
  -automask    Generate the mask automatically.
  -set x y z   After computing the CM of the dataset, set the
                 origin fields in the header so that the CM
                 will be at (x,y,z) in DICOM coords.
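
Example (a minimal sketch; 'anat+orig' is a hypothetical dataset):
   3dCM -automask anat+orig       # print the masked CM in DICOM coords
   3dCM -set 0 0 0 anat+orig      # edit the header so the CM is at (0,0,0)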

++ Compile date = Mar 13 2009




AFNI program: 3dCRUISEtoAFNI

Usage: 3dCRUISEtoAFNI -input CRUISE_HEADER.dx
 Converts a CRUISE dataset defined by a header in OpenDX format.
 The conversion is based on sample data and information
 provided by Aaron Carass from JHU's IACL (iacl.ece.jhu.edu).
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: 3dClipLevel
Usage: 3dClipLevel [options] dataset
Estimates the value at which to clip the anatomical dataset so
  that background regions are set to zero.

The program's output is a single number sent to stdout.  This
  value can be 'captured' to a shell variable using the backward
  single quote operator; a trivial csh/tcsh example is

    set ccc = `3dClipLevel -mfrac 0.333 Elvis+orig`
    3dcalc -a Elvis+orig -expr "step(a-$ccc)" -prefix Presley

Algorithm:
  (a) Set some initial clip value using wizardry (AKA 'variance').
  (b) Find the median of all positive values >= clip value.
  (c) Set the clip value to 0.50 of this median.
  (d) Loop back to (b) until the clip value doesn't change.
This method was made up out of nothing, based on histogram gazing.

Options:
--------
  -mfrac ff = Use the number ff instead of 0.50 in the algorithm.

  -grad ppp = In addition to using the 'one size fits all routine',
              also compute a 'gradual' clip level as a function
              of voxel position, and output that to a dataset with
              prefix 'ppp'.
             [This is the same 'gradual' clip level that is now the
              default in 3dAutomask - as of 24 Oct 2006.
              You can use this option to see how 3dAutomask clips
               the dataset as its first step.  The algorithm above
               is used in each octant of the dataset, and then these
              8 values are interpolated to cover the whole volume.]
Notes:
------
* Use at your own risk!  You might want to use the AFNI Histogram
    plugin to see if the results are reasonable.  This program is
    likely to produce bad results on images gathered with local
    RF coils, or with pulse sequences with unusual contrasts.

* For brain images, most brain voxels seem to be in the range from
    the clip level (mfrac=0.5) to about 3-3.5 times the clip level.
    - In T1-weighted images, voxels above that level are usually
      blood vessels (e.g., inflow artifact brightens them).

* If the input dataset has more than 1 sub-brick, the data is
    analyzed on the median volume -- at each voxel, the median
    of all sub-bricks at that voxel is computed, and then this
    median volume is used in the histogram algorithm.

* If the input dataset is short- or byte-valued, the output will
    be an integer; otherwise, the output is a float value.
------
Author: Emperor Zhark -- Sadistic Galactic Domination since 1994!


++ Compile date = Mar 13 2009




AFNI program: 3dConvolve
++ 3dConvolve: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
Program to calculate the voxelwise convolution of given impulse response   
function (IRF) time series contained in a 3d+time dataset with a specified 
input stimulus function time series.  This program will also calculate     
convolutions involving multiple IRF's and multiple stimulus functions.     
Input options include addition of system noise to the estimated output.    
Output consists of an AFNI 3d+time dataset which contains the estimated    
system response.  Alternatively, if all inputs are .1D time series files,  
then the output will be a single .1D time series file.                     
                                                                       
Usage:                                                                 
3dConvolve                                                             
-input fname         fname = filename of 3d+time template dataset      
[-input1D]           flag to indicate all inputs are .1D time series   
[-mask mname]        mname = filename of 3d mask dataset               
[-censor cname]      cname = filename of censor .1D time series        
[-concat rname]      rname = filename for list of concatenated runs    
[-nfirst fnum]       fnum = number of first time point to calculate by 
                       convolution procedure.  (default = max maxlag)  
[-nlast  lnum]       lnum = number of last time point to calculate by  
                       convolution procedure.  (default = last point)  
[-polort pnum]       pnum = degree of polynomial corresponding to the  
                       baseline model  (default: pnum = 1)             
[-base_file bname]   bname = file containing baseline parameters       
                                                                       
-num_stimts num      num = number of input stimulus time series        
                       (default: num = 0)                              
-stim_file k sname   sname = filename of kth time series input stimulus
[-stim_minlag k m]   m = minimum time lag for kth input stimulus       
                       (default: m = 0)                                
[-stim_maxlag k n]   n = maximum time lag for kth input stimulus       
                       (default: n = 0)                                
[-stim_nptr k p]     p = number of stimulus function points per TR     
                       Note: This option requires 0 slice offset times 
                       (default: p = 1)                                
                                                                       
[-iresp k iprefix]   iprefix = prefix of 3d+time input dataset which   
                       contains the kth impulse response function      
                                                                       
[-errts eprefix]     eprefix = prefix of 3d+time input dataset which   
                       contains the residual error time series         
                       (i.e., noise which will be added to the output) 
                                                                       
[-sigma s]           s = std. dev. of additive Gaussian noise          
                       (default: s = 0)                                
[-seed d]            d = seed for random number generator              
                       (default: d = 1234567)                          
                                                                       
[-xout]              flag to write X matrix to screen                  
[-output tprefix]    tprefix = prefix of 3d+time output dataset which  
                       will contain the convolved time series data     
                       (or tprefix = prefix of .1D output time series  
                       if the -input1D option is used)                 
                                                                       

++ Compile date = Mar 13 2009




AFNI program: 3dDFT
Usage: 3dDFT [-prefix ppp] [-abs] [-nfft N] [-detrend] dataset
   where dataset is complex or float valued.

 -abs     == output float dataset = abs(DFT)
 -nfft N  == use 'N' for DFT length (must be >= #time points)
 -detrend == least-squares remove linear drift before DFT
             [for more complex detrending, use 3dDetrend first]
 -taper f == taper 'f' fraction of data at ends (0 <= f <= 1).
             [Hamming 'raised cosine' taper of f/2 of the ]
             [data length at each end; default is no taper]
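
Example (a minimal sketch; 'ts+orig' is a hypothetical float-valued
time series dataset, and N must be >= the number of time points):
   3dDFT -abs -detrend -nfft 256 -prefix ts_dft ts+orig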

++ Compile date = Mar 13 2009




AFNI program: 3dDTeig
Usage: 3dDTeig [options] dataset
Computes eigenvalues and eigenvectors for an input dataset of
 6 sub-bricks Dxx,Dxy,Dyy,Dxz,Dyz,Dzz (lower diagonal order).
 The results are stored in a 14-subbrick bucket dataset.
 The resulting 14-subbricks are
  lambda_1,lambda_2,lambda_3,
  eigvec_1[1-3],eigvec_2[1-3],eigvec_3[1-3],
  FA,MD.

The output is a bucket dataset.  The input dataset
may use a sub-brick selection list, as in program 3dcalc.
 Options:
  -prefix pname = Use 'pname' for the output dataset prefix name.
    [default='eig']

  -datum type = Coerce the output data to be stored as the given type
    which may be byte, short or float. [default=float]

  -sep_dsets = save eigenvalues, vectors, FA, MD in separate datasets

  -uddata = tensor data is stored as upper diagonal instead of lower diagonal

 Mean diffusivity (MD) calculated as simple average of eigenvalues.
 Fractional Anisotropy (FA) calculated according to Pierpaoli C, Basser PJ.
 Microstructural and physiological features of tissues elucidated by
 quantitative-diffusion tensor MRI, J Magn Reson B 1996; 111:209-19
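
Example (a hedged sketch; 'DT+orig' is a hypothetical dataset whose
 first 6 sub-bricks are Dxx,Dxy,Dyy,Dxz,Dyz,Dzz in lower diagonal order):
   3dDTeig -prefix DTeig -sep_dsets 'DT+orig[0..5]'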

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dDWItoDT
Usage: 3dDWItoDT [options] gradient-file dataset
Computes 6 principal direction tensors from multiple gradient vectors
 and corresponding DTI image volumes.
 The program takes two parameters as input:
    a 1D file of the gradient vectors with lines of ASCII floats Gxi,Gyi,Gzi.
    Only the non-zero gradient vectors are included in this file (no G0 line).
    a 3D bucket dataset with Np+1 sub-bricks, where the first sub-brick is the
    volume acquired with no diffusion weighting.
 Options:
   -prefix pname = Use 'pname' for the output dataset prefix name.
    [default='DT']

   -automask =  mask dataset so that the tensors are computed only for
    high-intensity (presumably brain) voxels.  The intensity level is
    determined the same way that 3dClipLevel works.

   -mask dset = use dset as mask to include/exclude voxels

   -nonlinear = compute iterative solution to avoid negative eigenvalues.
    This is the default method.

   -linear = compute simple linear solution.

   -reweight = recompute weight factors at end of iterations and restart

   -max_iter n = maximum number of iterations for convergence (Default=10).
    Values can range from -1 to any positive integer less than 101.
    A value of -1 is equivalent to the linear solution.
    A value of 0 results in only the initial estimate of the diffusion tensor
    solution adjusted to avoid negative eigenvalues.

   -max_iter_rw n = max number of iterations after reweighting (Default=5).
    Values can range from 1 to any positive integer less than 101.

   -eigs = compute eigenvalues, eigenvectors, fractional anisotropy and mean
    diffusivity in sub-bricks 6-19. Computed as in 3dDTeig.

   -debug_briks = add sub-bricks with Ed (error functional), Ed0 (orig. error),
     number of steps to convergence and I0 (modeled B0 volume)

   -cumulative_wts = show overall weight factors for each gradient level
    May be useful as a quality control

   -verbose nnnnn = print convergence steps every nnnnn voxels that survive to
    convergence loops (can be quite lengthy).

   -drive_afni nnnnn = show convergence graphs every nnnnn voxels that survive
    to convergence loops. AFNI must have NIML communications on (afni -niml)

   -sep_dsets = save tensor, eigenvalues, vectors, FA, MD in separate datasets

   -opt mname =  if mname is 'powell', use Powell's 2004 method for optimization
    If mname is 'gradient' use gradient descent method. If mname is 'hybrid',
    use combination of methods.
    MJD Powell, "The NEWUOA software for unconstrained optimization without
    derivatives", Technical report DAMTP 2004/NA08, Cambridge University
    Numerical Analysis Group -- http://www.damtp.cam.ac.uk/user/na/reports.html

 Example:
  3dDWItoDT -prefix rw01 -automask -reweight -max_iter 10 \
            -max_iter_rw 10 tensor25.1D grad02+orig.

 The output is a 6 sub-brick bucket dataset containing Dxx,Dxy,Dyy,Dxz,Dyz,Dzz
 (the lower triangular, row-wise elements of the tensor in symmetric matrix form)
 Additional sub-bricks may be appended with the -eigs and -debug_briks options.
 These results are appropriate as the input to the 3dDTeig program.


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.



AFNI program: 3dDeconvolve
++ 3dDeconvolve: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward, et al.

Program to calculate the deconvolution of a measurement 3D+time dataset 
with a specified input stimulus time series.  This program can also     
perform multiple linear regression using multiple input stimulus time   
series. Output consists of an AFNI 'bucket' type dataset containing     
(for each voxel)                                                        
 * the least squares estimates of the linear regression coefficients    
 * t-statistics for significance of the coefficients                    
 * partial F-statistics for significance of individual input stimuli    
 * the F-statistic for significance of the overall regression model     
The program can optionally output extra datasets containing             
 * the estimated impulse response function                              
 * the fitted model and error (residual) time series                    
------------------------------------------------------------------------
* Program 3dDeconvolve does Ordinary Least Squares (OLSQ) regression.   
* Program 3dREMLfit can be used to do Generalized Least Squares (GLSQ)  
  regression (AKA 'pre-whitened' least squares) combined with REML      
  estimation of an ARMA(1,1) temporal correlation structure:            
    http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dREMLfit.html   
* The input to 3dREMLfit is the .xmat.1D matrix file output by          
  3dDeconvolve, which also writes a 3dREMLfit command line to a file    
  to make it relatively easy to use the latter program.                 
* Nonlinear time series model fitting can be done with program 3dNLfim: 
    http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dNLfim.html     
* Preprocessing of the time series input can be done with various AFNI  
  programs, or with the 'uber-script' afni_proc.py:                     
    http://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html
------------------------------------------------------------------------
Consider the time series model  Z(t) = K(t)*S(t) + baseline + noise,    
where Z(t) = data                                                       
      K(t) = kernel (e.g., hemodynamic response function),              
      S(t) = stimulus                                                   
  baseline = constant, drift, etc.                                      
     and * = convolution                                                
Then 3dDeconvolve solves for K(t) given S(t).  If you want to process   
the reverse problem and solve for S(t) given the kernel K(t),           
use the program 3dTfitter.  The difference between the two cases is     
that K(t) is presumed to be causal and have limited support, whereas    
S(t) is a full-length time series.  Note that program 3dTfitter does    
not have all the capabilities of 3dDeconvolve for calculating output    
statistics; on the other hand, 3dTfitter can solve the deconvolution    
problem (in either direction) with L1 or L2 regression, and with sign   
constraints on the computed values (e.g., requiring output K(t) >= 0):  
  http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dTfitter.html     
------------------------------------------------------------------------

Usage Details:                                                         
3dDeconvolve command-line-arguments ...
                                                                       
**** Input data and control options:                                   
-input fname         fname = filename of 3D+time input dataset         
                       (more than  one filename  can be  given)        
                       (here,   and  these  datasets  will  be)        
                       (catenated  in time;   if you do this, )        
                       ('-concat' is not needed and is ignored)        
[-force_TR TR]       Use this value of TR instead of the one in        
                     the -input dataset.                               
                     (It's better to fix the input using 3drefit.)     
[-input1D dname]     dname = filename of single (fMRI) .1D time series 
[-TR_1D tr1d]        tr1d = TR for .1D time series [default 1.0 sec].  
                     This option has no effect without -input1D        
[-nodata [NT [TR]]   Evaluate experimental design only (no input data) 
[-mask mname]        mname = filename of 3d mask dataset               
[-automask]          build a mask automatically from input data        
                      (will be slow for long time series datasets)     
[-censor cname]      cname = filename of censor .1D time series        
[-CENSORTR clist]    clist = list of strings that specify time indexes 
                       to be removed from the analysis.  Each string is
                       of one of the following forms:                  
                           37 => remove global time index #37          
                         2:37 => remove time index #37 in run #2       
                       37..47 => remove global time indexes #37-47     
                       37-47  => same as above                         
                     2:37..47 => remove time indexes #37-47 in run #2  
                     *:0-2    => remove time indexes #0-2 in all runs  
                      +Time indexes within each run start at 0.        
                       +Run indexes start at 1 (just to be confusing).
                      +Multiple -CENSORTR options may be used, or      
                        multiple -CENSORTR strings can be given at     
                        once, separated by spaces or commas.           
                      +N.B.: 2:37,47 means index #37 in run #2 and     
                        global time index 47; it does NOT mean         
                        index #37 in run #2 AND index #47 in run #2.   
[-concat rname]      rname = filename for list of concatenated runs    
[-nfirst fnum]       fnum = number of first dataset image to use in the
                       deconvolution procedure. [default = max maxlag] 
[-nlast  lnum]       lnum = number of last dataset image to use in the 
                       deconvolution procedure. [default = last point] 
[-polort pnum]       pnum = degree of polynomial corresponding to the  
                       null hypothesis  [default: pnum = 1]            
                       If you use 'A' for pnum, the program will       
                       automatically choose a value based on the       
                       duration of the longest run.                    
[-legendre]          use Legendre polynomials for null hypothesis      
[-nolegendre]        use power polynomials for null hypotheses         
                       [default is -legendre]                          
[-nodmbase]          don't de-mean baseline time series                
                       (i.e., polort>1 and -stim_base inputs)          
[-dmbase]            de-mean baseline time series [default if polort>0]
[-svd]               Use SVD instead of Gaussian elimination [default] 
[-nosvd]             Use Gaussian elimination instead of SVD           
[-rmsmin r]          r = minimum rms error to reject reduced model     
[-nocond]            DON'T calculate matrix condition number           
                      ** This is NOT the same as Matlab!               
[-singvals]          Print out the matrix singular values              
[-GOFORIT [g]]       Use this to proceed even if the matrix has        
                     bad problems (e.g., duplicate columns, large      
                     condition number, etc.).                          
               *N.B.: Warnings that you should particularly heed have  
                      the string '!!' somewhere in their text.         
               *N.B.: Error and Warning messages go to stderr and      
                      also to file 3dDeconvolve.err.               
               *N.B.: The optional number 'g' that appears is the      
                      number of warnings that can be ignored.          
                      That is, if you use -GOFORIT 7 and 9 '!!'        
                      matrix warnings appear, then the program will    
                      not run.  If 'g' is not present, 1 is used.      
[-allzero_OK]        Don't consider all zero matrix columns to be      
                      the type of error that -GOFORIT is needed to     
                      ignore.                                          
[-Dname=val]       = Set environment variable 'name' to 'val' for this 
                     run of the program only.                          
                                                                       
**** Input stimulus options:                                           
-num_stimts num      num = number of input stimulus time series        
                       (0 <= num)   [default: num = 0]                 
-stim_file k sname   sname = filename of kth time series input stimulus
[-stim_label k slabel] slabel = label for kth input stimulus           
[-stim_base k]       kth input stimulus is part of the baseline model  
[-stim_minlag k m]   m = minimum time lag for kth input stimulus       
                       [default: m = 0]                                
[-stim_maxlag k n]   n = maximum time lag for kth input stimulus       
                       [default: n = 0]                                
[-stim_nptr k p]     p = number of stimulus function points per TR     
                       Note: This option requires 0 slice offset times 
                       [default: p = 1]                                
                                                                       
[-stim_times k tname Rmodel]                                           
   Generate the k-th response model from a set of stimulus times       
   given in file 'tname'.  The response model is specified by the      
   'Rmodel' argument, which can be one of the following:               
     'GAM(p,q)'    = 1 parameter gamma variate                         
                         (t/(p*q))^p * exp(p-t/q)                      
                       Defaults: p=8.6 q=0.547 if only 'GAM' is used   
     'SPMG1'       = 1 parameter SPM gamma variate basis function      
                         exp(-t)*(A1*t^P1-A2*t^P2) where               
                       A1 = 0.0083333333  P1 = 5  (main positive lobe) 
                       A2 = 1.274527e-13  P2 = 15 (undershoot part)    
                       This function is NOT normalized to have peak=1! 
     'SPMG2'       = 2 parameter SPM: gamma variate + d/dt derivative  
                       [For backward compatibility: 'SPMG' == 'SPMG2'] 
     'SPMG3'       = 3 parameter SPM basis function set                
     'POLY(b,c,n)' = n parameter Legendre polynomial expansion         
                       from times b..c after stimulus time             
                       [Max value of n is 20]                          
     'SIN(b,c,n)'  = n parameter sine series expansion                 
                       from times b..c after stimulus time             
     'TENT(b,c,n)' = n parameter tent function expansion               
                       from times b..c after stimulus time             
    'CSPLIN(b,c,n)'= n parameter cubic spline function expansion       
                       from times b..c after stimulus time             
                     ** TENT and CSPLIN are 'cardinal' interpolation   
                        functions; CSPLIN is a drop-in upgrade of      
                        TENT to a differentiable set of functions.     
     'BLOCK(d,p)'  = 1 parameter block stimulus of duration 'd'        
                    ** There are 2 variants of BLOCK:                  
                         BLOCK4 [the default] and BLOCK5               
                       which have slightly different delays:           
                         HRF(t) = int( g(t-s) , s=0..min(t,d) )        
                       where g(t) = t^q * exp(-t) /(q^q*exp(-q))       
                       and q = 4 or 5.  The case q=5 is delayed by     
                       about 1 second from the case q=4.               
                    ** Despite the name, you can use 'BLOCK' for event-
                       related analyses just by setting the duration to
                       a small value; e.g., 'BLOCK5(1,1)'              
                    ** The 'p' parameter is the amplitude of the       
                       response function, and should usually be set to 
                       1.  If 'p' is omitted, the amplitude will depend
                       on the duration 'd', which is useful only in    
                       special circumstances.                          
     'WAV(d)'      = 1 parameter block stimulus of duration 'd'.       
                      * This is the '-WAV' function from program waver!
                      * If you wish to set the shape parameters of the 
                        WAV function, you can do that by adding extra  
                        arguments, in the order                        
                         delay time , rise time , fall time ,          
                         undershoot fraction, undershoot restore time  
                      * The default values are 'WAV(d,2,4,6,0.2,2)'    
                      * Omitted parameters get the default values.     
                      * 'WAV(d,,,,0)' (setting undershoot=0) is        
                        very similar to 'BLOCK5(d)', for d > 0.        
                      * Setting duration d to 0 (or just using 'WAV')  
                        gives the pure '-WAV' impulse response function
                        from waver.                                    
                      * If d > 0, the WAV(0) function is convolved with
                        a square wave of duration d to make the HRF,   
                        and the amplitude is scaled back down to 1.    
     'EXPR(b,c) exp1 ... expn' = n parameter; arbitrary expressions    
                       from times b..c after stimulus time             
                                                                       
 * 3dDeconvolve does LINEAR regression, so the model parameters are    
   amplitudes of the basis functions; 1 parameter models are 'simple'  
   regression, where the shape of the impulse response function is     
   fixed and only the magnitude/amplitude varies.  Models with more    
   free parameters have 'variable' shape impulse response functions.   
 * If you want NONLINEAR regression, see program 3dNLfim.              
                                                                       
 * For the format of the 'tname' file, see the last part of            
 http://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/DeconSummer2004.html 
   and also see the other documents stored in the directory below:     
 http://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/                     
   and also read the presentation below:                               
 http://afni.nimh.nih.gov/pub/dist/edu/latest/afni_handouts/afni05_regression.pdf
   ** Note Well:                                                       
    * The contents of the 'tname' file are NOT just 0s and 1s,         
      but are the actual times of the stimulus events.                 
    * You can give the times on the command line by using a string     
      of the form '1D: 3.2 7.9 | 8.2 16.2 23.7' in place of 'tname',   
      where the '|' character indicates the start of a new line        
      (so this example is for a case with 2 catenated runs).           
    * You cannot use the '1D:' form of input for any of the more       
      complicated '-stim_times_*' options below!                       
    * It is a good idea to examine the shape of the response models    
      if you are unsure of what the different functions will look like.
      You can graph columns from the .xmat.1D matrix file with 1dplot; 
      for example, comparing 'WAV(10)' and 'BLOCK5(10,1)':             
       3dDeconvolve -nodata 200 1.0 -num_stimts 2 -polort -1         \
                    -stim_times 1 '1D: 10 60 110 160' 'WAV(10)'      \
                    -stim_times 2 '1D: 10 60 110 160' 'BLOCK5(10,1)' \
                    -x1D stdout: | 1dplot -one -stdin                  
                                                                       
[-stim_times_AM1 k tname Rmodel]                                       
   Similar, but generates an amplitude modulated response model.       
   The 'tname' file should consist of 'time*amplitude' pairs.          
[-stim_times_AM2 k tname Rmodel]                                       
   Similar, but generates 2 response models: one with the mean         
   amplitude and one with the differences from the mean.               
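   For example (hypothetical filenames), if each line of 'tamp.1D'
   holds 'time*amplitude' entries such as '12.3*0.7 25.1*1.4', then
   a sketch of an amplitude modulated analysis is
     3dDeconvolve ... -stim_times_AM1 1 tamp.1D 'BLOCK(2,1)' ...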
                                                                       
** NOTE [04 Dec 2008] **                                               
 -stim_times_AM1 and -stim_times_AM2 now take files with more          
   than 1 amplitude attached to each time; for example,                
     33.7*9,-2,3                                                       
   indicates a stimulus at time 33.7 seconds with 3 amplitudes         
   attached (9 and -2 and 3).  In this example, -stim_times_AM2 would  
   generate 4 response models: 1 for the constant response case        
   and 1 scaled by each of the amplitude sets.                         
 For more information, see                                             
   http://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/AMregression.pdf   
                                                                       
** NOTE [08 Dec 2008] **                                               
 -stim_times_AM1 and -stim_times_AM2 now have 1 extra response model   
 function available:                                                   
   dmBLOCK (or dmBLOCK4 or dmBLOCK5)                                   
 where 'dm' means 'duration modulated'.  If you use this response      
 model, then the LAST married parameter in the timing file will        
 be used to modulate the duration of the block stimulus.  Any          
 earlier parameters will be used to modulate the amplitude,            
 and should be separated from the duration parameter by a ':'          
 character, as in '30*5,3:12' which means (for dmBLOCK):               
   a block starting at 30 s,                                           
   with amplitude parameters 5 and 3,                                  
   and with duration 12 s.                                             
 The unmodulated peak response of dmBLOCK is set to 1.                 
 *N.B.: the maximum allowed dmBLOCK duration is 999 s.                 
 *N.B.: you can also use dmBLOCK with -stim_times_IM, in which case    
        each time in the 'tname' file should have just one extra       
        parameter -- the duration -- married to it, as in '30:15',     
        meaning a block of duration 15 seconds starting at t=30 s.     
 For some graphs of what dmBLOCK regressors look like, see             
   http://afni.nimh.nih.gov/pub/dist/doc/misc/Decon/AMregression.pdf   
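  As a sketch (hypothetical filenames), using the married format
  described above with entries like '30*5:12' in 'tdur.1D':
    3dDeconvolve ... -stim_times_AM1 1 tdur.1D 'dmBLOCK' ...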
                                                                       
[-stim_times_IM k tname Rmodel]                                        
   Similar, but each separate time in 'tname' will get a separate      
   regressor; 'IM' means 'Individually Modulated' -- that is, each     
   event will get its own amplitude(s).  Presumably you will collect   
   these many amplitudes afterwards and do some sort of statistics     
   on them.                                                            
 *N.B.: Each time in the 'tname' file will get a separate regressor.   
        If some time is outside the duration of the imaging run(s),    
        or if the response model for that time happens to hit only     
        censored-out data values, then the corresponding regressor     
        will be all zeros.  Normally, 3dDeconvolve will not run        
        if the matrix has any all zero columns.  To carry out the      
        analysis, use the '-allzero_OK' option.  Amplitude estimates   
        for all zero columns will be zero, and should be excluded      
        from any subsequent analysis.  (Probably you should fix the    
        times in the 'tname' file instead of using '-allzero_OK'.)     
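  As a sketch (hypothetical filenames), to get one amplitude estimate
  per event:
    3dDeconvolve ... -stim_times_IM 1 times.1D 'GAM' -cbucket ibet ...
  after which the per-event coefficients can be pulled out of the
  -cbucket (or -bucket) output for further statistics.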
                                                                       
[-global_times]                                                        
[-local_times]                                                         
   By default, 3dDeconvolve guesses whether the times in the 'tname'   
   files for the various '-stim_times' options are global times        
   (relative to the start of run #1) or local times (relative to       
   the start of each run).  With one of these options, you can force   
   the times to be considered as global or local for '-stim_times'     
   options that are AFTER the '-local_times' or '-global_times'.       
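   As a sketch (hypothetical filenames), to force the first timing
   file to be read as local times and the second as global times:
     3dDeconvolve ... -local_times  -stim_times 1 t1.1D 'GAM' \
                      -global_times -stim_times 2 t2.1D 'GAM' ...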
                                                                       
[-basis_normall a]                                                     
   Normalize all basis functions for '-stim_times' to have             
   amplitude 'a' (must have a > 0).  The peak absolute value           
   of each basis function will be scaled to be 'a'.                    
    NOTE: -basis_normall only affects -stim_times options that
          appear LATER on the command line
                                                                       
**** General linear test (GLT) options:                                
-num_glt num         num = number of general linear tests (GLTs)       
                       (0 <= num)   [default: num = 0]                 
                  **N.B.: You only need this option if you have        
                          more than 10 GLTs specified; the program     
                          has built-in space for 10 GLTs, and          
                          this option is used to expand that space.    
                          If you use this option, you should place     
                          it on the command line BEFORE any of the     
                          other GLT options.                           
[-glt s gltname]     Perform s simultaneous linear tests, as specified 
                       by the matrix contained in file gltname         
[-glt_label k glabel]  glabel = label for kth general linear test      
[-gltsym gltname]    Read the GLT with symbolic names from the file    
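   As a sketch, assuming two stimuli were labeled 'Faces' and 'Houses'
   via -stim_label, their contrast could be tested with
     3dDeconvolve ... -num_glt 1 \
                      -gltsym 'SYM: +Faces -Houses' -glt_label 1 FvsH ...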
                                                                       
[-TR_irc dt]                                                           
   Use 'dt' as the stepsize for computation of integrals in -IRC_times 
   options.  Default is to use the value given in '-TR_times'.
                                                                       
**** Options for output 3D+time datasets:                              
[-iresp k iprefix]   iprefix = prefix of 3D+time output dataset which  
                       will contain the kth estimated impulse response 
[-tshift]            Use cubic spline interpolation to time shift the  
                       estimated impulse response function, in order to
                       correct for differences in slice acquisition    
                       times. Note that this affects only the 3D+time
                       output dataset generated by the -iresp option.  
[-sresp k sprefix]   sprefix = prefix of 3D+time output dataset which  
                       will contain the standard deviations of the     
                       kth impulse response function parameters        
[-fitts  fprefix]    fprefix = prefix of 3D+time output dataset which  
                       will contain the (full model) time series fit   
                       to the input data                               
[-errts  eprefix]    eprefix = prefix of 3D+time output dataset which  
                       will contain the residual error time series     
                       from the full model fit to the input data       
[-TR_times dt]                                                         
   Use 'dt' as the stepsize for output of the -iresp and -sresp files
   for response models generated by '-stim_times' options.             
   Default is same as time spacing in the '-input' 3D+time dataset.    
   The units here are in seconds!                                      
                                                                       
**** Options to control the contents of the output bucket dataset:     
[-fout]            Flag to output the F-statistics                     
[-rout]            Flag to output the R^2 statistics                   
[-tout]            Flag to output the t-statistics                     
[-vout]            Flag to output the sample variance (MSE) map        
[-nobout]          Flag to suppress output of baseline coefficients    
                     (and associated statistics) [** DEFAULT **]       
[-bout]            Flag to turn on output of baseline coefs and stats. 
[-nocout]          Flag to suppress output of regression coefficients  
                     (and associated statistics)                       
[-full_first]      Flag to specify that the full model statistics will 
                     be first in the bucket dataset [** DEFAULT **]    
[-nofull_first]    Flag to specify that full model statistics go last  
[-nofullf_atall]   Flag to turn off the full model F statistic         
                     ** DEFAULT: the full F is always computed, even if
                     sub-model partial F's are not ordered with -fout. 
[-bucket bprefix]  Create one AFNI 'bucket' dataset containing various 
                     parameters of interest, such as the estimated IRF 
                     coefficients, and full model fit statistics.      
                     Output 'bucket' dataset is written to bprefix.    
[-nobucket]        Don't output a bucket dataset.  By default, the     
                     program uses '-bucket Decon' if you don't give    
                     either -bucket or -nobucket on the command line.  
[-noFDR]           Don't compute the statistic-vs-FDR curves for the   
                     bucket dataset.                                   
                     [same as 'setenv AFNI_AUTOMATIC_FDR NO']          
                                                                       
[-xsave]           Flag to save X matrix into file bprefix.xsave       
                     (only works if -bucket option is also given)      
[-noxsave]         Don't save X matrix [this is the default]           
[-cbucket cprefix] Save the regression coefficients (no statistics)    
                     into a dataset named 'cprefix'.  This dataset     
                     will be used in a -xrestore run instead of the    
                     bucket dataset, if possible.                      
                   Also, the -cbucket and -x1D output can be combined  
                     in 3dSynthesize to produce 3D+time datasets that  
                     are derived from subsets of the regression model  
                     [generalizing the -fitts option, which produces]  
                     [a 3D+time dataset derived from the full model].  
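                    As a sketch (hypothetical filenames), the baseline-
                      only part of the fit could be reconstructed with
                       3dSynthesize -cbucket cbuc+orig -matrix X.xmat.1D \
                                    -select baseline -prefix base_fit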
                                                                       
[-xrestore f.xsave] Restore the X matrix, etc. from a previous run     
                     that was saved into file 'f.xsave'.  You can      
                     then carry out new -glt tests.  When -xrestore    
                     is used, most other command line options are      
                     ignored.                                          
                                                                       
[-float]            Write output datasets in float format, instead of  
                    as scaled shorts.                                  
[-short]            Write output as scaled shorts [default, for now]   
                                                                       
**** The following options control the screen output only:             
[-quiet]             Flag to suppress most screen output               
[-xout]              Flag to write X and inv(X'X) matrices to screen   
[-xjpeg filename]    Write a JPEG file graphing the X matrix           
                     * If filename ends in '.png', a PNG file is output
[-x1D filename]      Save X matrix to a .xmat.1D (ASCII) file [default]
[-nox1D]             Don't save X matrix                               
[-x1D_uncensored ff] Save X matrix to a .xmat.1D file, but WITHOUT
                     ANY CENSORING.  Might be useful in 3dSynthesize.  
[-x1D_stop]          Stop running after writing .xmat.1D files.        
[-progress n]        Write statistical results for every nth voxel     
[-fdisp fval]        Write statistical results for those voxels        
                       whose full model F-statistic is > fval          

 -jobs J   Run the program with 'J' jobs (sub-processes).
             On a multi-CPU machine, this can speed the
             program up considerably.  On a single CPU
             machine, using this option is silly.
             J should be a number from 1 up to the
             number of CPUs sharing memory on the system.
             J=1 is normal (single process) operation.
             The maximum allowed value of J is 32.
         * For more information on parallelizing, see
           http://afni.nimh.nih.gov/afni/doc/misc/afni_parallelize
         * Use -mask or -automask to get more speed; cf. 3dAutomask.
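          * As a sketch (hypothetical filenames):
              3dDeconvolve -jobs 4 -input epi+orig -mask mask+orig ...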

** NOTE **
This version of the program has been compiled to use
double precision arithmetic for most internal calculations.

++ Compile date = Mar 13 2009




AFNI program: 3dDeconvolve_f
**
** 3dDeconvolve_f is now disabled by default.
** It is dangerous, due to roundoff problems.
** Please use 3dDeconvolve from now on!
**
** HOWEVER, if you insist on using 3dDeconvolve_f, then:
**        + Use '-OK' as the first command line option.
**        + Check the matrix condition number;
**            if it is greater than 100, BEWARE!
**
** RWCox - July 2004
**



AFNI program: 3dDespike
Usage: 3dDespike [options] dataset
Removes 'spikes' from the 3D+time input dataset and writes
a new dataset with the spike values replaced by something
more pleasing to the eye.

Method:
 * L1 fit a smooth-ish curve to each voxel time series
    [see -corder option for description of the curve].
 * Compute the MAD of the difference between the curve and
    the data time series (the residuals).
 * Estimate the standard deviation 'sigma' of the residuals
    as sqrt(PI/2)*MAD.
 * For each voxel value, define s = (value-curve)/sigma.
 * Values with s > c1 are replaced with a value that yields
    a modified s' = c1+(c2-c1)*tanh((s-c1)/(c2-c1)).
 * c1 is the threshold value of s for a 'spike' [default c1=2.5].
 * c2 is the upper range of the allowed deviation from the curve:
    s=[c1..infinity) is mapped to s'=[c1..c2)   [default c2=4].

Options:
 -ignore I  = Ignore the first I points in the time series:
               these values will just be copied to the
               output dataset [default I=0].
 -corder L  = Set the curve fit order to L:
               the curve that is fit to voxel data v(t) is

 f(t) = a + b*t + c*t*t
        + SUM[k=1..L] ( d_k*sin(2*PI*k*t/T) + e_k*cos(2*PI*k*t/T) )

               where T = duration of time series;
               the a,b,c,d,e parameters are chosen to minimize
               the sum over t of |v(t)-f(t)| (L1 regression);
               this type of fitting is insensitive to large
               spikes in the data.  The default value of L is
               NT/30, where NT = number of time points.

 -cut c1 c2 = Alter default values for the spike cut values
               [default c1=2.5, c2=4.0].
 -prefix pp = Save de-spiked dataset with prefix 'pp'
               [default pp='despike']
 -ssave ttt = Save 'spikiness' measure s for each voxel into a
               3D+time dataset with prefix 'ttt' [default=no save]
 -nomask    = Process all voxels
               [default=use a mask of high-intensity voxels, ]
               [as created via '3dAutomask -dilate 4 dataset'].
 -q[uiet]   = Don't print '++' informational messages.

 -localedit = Change the editing process to the following:
                If a voxel |s| value is >= c2, then replace
                the voxel value with the average of the two
                nearest non-spike (|s| < c2) values; the first
                one previous and the first one after.
                Note that the c1 cut value is not used here.
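  As a sketch (hypothetical filenames), despiking a run while copying
  the first 4 pre-steady-state volumes unaltered:
    3dDespike -ignore 4 -prefix run1_ds run1+orig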

Caveats:
* Despiking may interfere with image registration, since head
   movement may produce 'spikes' at the edge of the brain, and
   this information would be used in the registration process.
   This possibility has not been explored or calibrated.
* Check your data visually before and after despiking and
   registration!
   [Hint: open 2 AFNI controllers, and turn Time Lock on.]

++ Compile date = Mar 13 2009




AFNI program: 3dDetrend
Usage: 3dDetrend [options] dataset
* This program removes components from voxel time series using
  linear least squares.  Each voxel is treated independently.
* Note that least squares detrending is equivalent to orthogonalizing
  the input dataset time series with respect to the basis time series
  provided by the '-vector', '-polort', et cetera options.
* The input dataset may have a sub-brick selector string; otherwise,
  all sub-bricks will be used.

General Options:
 -prefix pname = Use 'pname' for the output dataset prefix name.
                   [default='detrend']
 -session dir  = Use 'dir' for the output dataset session directory.
                   [default='./'=current working directory]
 -verb         = Print out some verbose output as the program runs.
 -replace      = Instead of subtracting the fit from each voxel,
                   replace the voxel data with the time series fit.
 -normalize    = Normalize each output voxel time series; that is,
                   make the sum-of-squares equal to 1.
           N.B.: This option is only valid if the input dataset is
                   stored as floats! (1D files are always floats.)
 -byslice      = Treat each input vector (infra) as describing a set of
                   time series interlaced across slices.  If NZ is the
                   number of slices and NT is the number of time points,
                   then each input vector should have NZ*NT values when
                   this option is used (usually, they only need NT values).
                   The values must be arranged in slice order, then time
                   order, in each vector column, as shown here:
                       f(z=0,t=0)       // first slice, first time
                       f(z=1,t=0)       // second slice, first time
                       ...
                       f(z=NZ-1,t=0)    // last slice, first time
                       f(z=0,t=1)       // first slice, second time
                       f(z=1,t=1)       // second slice, second time
                       ...
                       f(z=NZ-1,t=NT-1) // last slice, last time

Component Options:
These options determine the components that will be removed from
each dataset voxel time series.  They may be repeated to specify
multiple regression.  At least one component must be specified.

 -vector vvv   = Remove components proportional to the columns vectors
                   of the ASCII *.1D file 'vvv'.  You may use a
                   sub-vector selector string to specify which columns
                   to use; otherwise, all columns will be used.
                   For example:
                    -vector 'xyzzy.1D[3,5]'
                   will remove the 4th and 6th columns of file xyzzy.1D
                   from the dataset (sub-vector indexes start at 0).

 -expr eee     = Remove components proportional to the function
                   specified in the expression string 'eee'.
                   Any single letter from a-z may be used as the
                   independent variable in 'eee'.  For example:
                    -expr 'cos(2*PI*t/40)' -expr 'sin(2*PI*t/40)'
                   will remove sine and cosine waves of period 40
                   from the dataset.

 -polort ppp   = Add Legendre polynomials of order up to and
                   including 'ppp' in the list of vectors to remove.

 -del ddd      = Use the numerical value 'ddd' for the stepsize
                   in subsequent -expr options.  If no -del option
                   is ever given, then the TR given in the dataset
                   header is used for 'ddd'; if that isn't available,
                   then 'ddd'=1.0 is assumed.  The j-th time point
                   will have independent variable = j * ddd, starting
                   at j=0.  For example:
                     -expr 'sin(x)' -del 2.0 -expr 'z**3'
                   means that the stepsize in 'sin(x)' is delta-x=TR,
                   but the stepsize in 'z**3' is delta-z = 2.

 N.B.: expressions are NOT calculated on a per-slice basis when the
        -byslice option is used.  If you have to do this, you could
        compute vectors with the required time series using 1deval.
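 For example (hypothetical filenames), removing a quadratic trend plus
 motion regressors from an EPI time series:
   3dDetrend -prefix epi_dt -polort 2 -vector motion.1D epi+orig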

Detrending 1D files
-------------------
As far as '3d' programs are concerned, you can input a 1D file as
a 'dataset'.  Each row is a separate voxel, and each column is a
separate time point.  If you want to detrend a single column, then
you need to transpose it on input.  For example:

  3dDetrend -prefix - -vector G1.1D -polort 3 G5.1D\' | 1dplot -stdin

Note that the '-vector' file is NOT transposed with \', but that
the input dataset file IS transposed.  This is because in the first
case the program expects a 1D file, and so knows that the column
direction is time.  In the second case, the program expects a 3D
dataset, and when given a 1D file, knows that the row direction is
time -- so it must be transposed.  I'm sorry if this is confusing,
but that's the way it is.

++ Compile date = Mar 13 2009




AFNI program: 3dEmpty
Usage: 3dEmpty [options]
Makes an 'empty' dataset .HEAD file.

Options:
=======
 -prefix p   = Prefix name for output file
 -nxyz x y z = Set number of voxels to be 'x', 'y', and 'z'
                 along the 3 axes [defaults=64]
 -nt         = Number of time points [default=1]

* Other dataset parameters can be changed with 3drefit.
* The purpose of this program (combined with 3drefit) is to
  allow you to make up an AFNI header for an existing file.
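* As a sketch (hypothetical prefix), make a skeleton header and then
  set its TR with 3drefit:
    3dEmpty -prefix skel -nxyz 64 64 32
    3drefit -TR 2.0 skel+orig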


++ Compile date = Mar 13 2009




AFNI program: 3dEntropy
Usage: 3dEntropy dataset ...

++ Compile date = Mar 13 2009




AFNI program: 3dErrtsCormat
Usage: 3dErrtsCormat [options] dset

Computes the correlation (not covariance) matrix corresponding
to the residual (or error) time series in 'dset', which will
usually be the '-errts' output from 3dDeconvolve.  The output
is a 1D file of the Toeplitz entries (to stdout).

Options:
  -concat rname  = as in 3dDeconvolve
  -input  dset   = alternate way of telling what dataset to read
  -mask   mset   = mask dataset
  -maxlag mm     = set maximum lag
  -polort pp     = set polort level (default=0)
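
For example (hypothetical filenames):
  3dErrtsCormat -mask mask+orig -maxlag 20 errts+orig > cormat.1D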

-- RWCox -- June 2008 -- for my own pleasant purposes
-- Also see program 3dLocalCormat to do this on each voxel,
   and optionally estimate the ARMA(1,1) model parameters.

++ Compile date = Mar 13 2009




AFNI program: 3dExtrema
++ 3dExtrema: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program finds local extrema (minima or maxima) of the input       
dataset values for each sub-brick of the input dataset.  The extrema   
may be determined either for each volume, or for each individual slice.
Only those voxels whose corresponding intensity value is greater than  
the user specified data threshold will be considered.                  

Usage: 3dExtrema  options  datasets                                  
where the options are:                                                 
-prefix pname    = Use 'pname' for the output dataset prefix name.     
  OR                 [default = NONE; only screen output]              
-output pname                                                          
                                                                       
-session dir     = Use 'dir' for the output dataset session directory. 
                     [default='./'=current working directory]          
                                                                       
-quiet           = Flag to suppress screen output                      
                                                                       
-mask_file mname = Use mask statistic from file mname.                 
                   Note: If file mname contains more than 1 sub-brick, 
                   the mask sub-brick must be specified!               
-mask_thr m        Only voxels whose mask statistic is >= m            
                   in absolute value will be considered.               
                   A default value of 1 is assumed.                    
                                                                       
-data_thr d        Only voxels whose value (intensity) is greater      
                   than d in absolute value will be considered.        
                                                                       
-sep_dist d        Min. separation distance [mm] for distinct extrema  
                                                                       
Choose type of extrema (one and only one choice):                      
-minima            Find local minima.                                  
-maxima [default]  Find local maxima.                                  
                                                                       
Choose form of binary relation (one and only one choice):              
-strict [default]  >  for maxima,  <  for minima                       
-partial           >= for maxima,  <= for minima                       
                                                                       
Choose boundary criteria (one and only one choice):                    
-interior [default] Extrema must be interior points (not on boundary)
-closure           Extrema may be boundary points                      
                                                                       
Choose domain for finding extrema (one and only one choice):           
-slice [default]   Each slice is considered separately                 
-volume            The volume is considered as a whole                 
                                                                       
Choose option for merging of extrema (one and only one choice):        
-remove [default]  Remove all but strongest of neighboring extrema     
-average           Replace neighboring extrema by average              
-weight            Replace neighboring extrema by weighted average     
                                                                       
Command line arguments after the above are taken to be input datasets. 

 Examples: 
  Compute maximum value in amygdala region of Talairach-transformed dataset
    3dExtrema -volume -closure -sep_dist 512 \ 
      -mask_file 'TT_Daemon::amygdala' func_slim+tlrc.'[0]'
  Show minimum voxel values not on edge of mask, where the mask >= 0.95
    3dExtrema -minima -volume -mask_file 'statmask+orig' \ 
      -mask_thr 0.95 func_slim+tlrc.'[0]'


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dFDR
++ 3dFDR: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program implements the False Discovery Rate (FDR) algorithm for       
thresholding of voxelwise statistics.                                      
                                                                           
Program input consists of a functional dataset containing one (or more)    
statistical sub-bricks.  Output consists of a bucket dataset with one      
sub-brick for each input sub-brick.  For non-statistical input sub-bricks, 
the output is a copy of the input.  However, statistical input sub-bricks  
are replaced by their corresponding FDR values, as follows:                
                                                                           
For each voxel, the minimum value of q is determined such that             
                               E(FDR) <= q                                 
leads to rejection of the null hypothesis in that voxel. Only voxels inside
the user specified mask will be considered.  These q-values are then mapped
to z-scores for compatibility with the AFNI statistical threshold display: 
                                                                           
               stat ==> p-value ==> FDR q-value ==> FDR z-score            
                                                                           
Usage:                                                                     
  3dFDR                                                                    
    -input fname       fname = filename of input 3d functional dataset     
      OR                                                                   
    -input1D dname     dname = .1D file containing column of p-values      
                                                                           
    -mask_file mname   Use mask values from file mname.                    
                       Note: If file mname contains more than 1 sub-brick, 
                       the mask sub-brick must be specified!               
                       Default: No mask                                    
                       N.B.: may also be abbreviated to '-mask'            
                                                                           
    -mask_thr m        Only voxels whose corresponding mask value is       
                       greater than or equal to m in absolute value will   
                       be considered.  Default: m=1                        
                                                                           
                       Constant c(N) depends on assumption about p-values: 
    -cind              c(N) = 1   p-values are independent across N voxels 
    -cdep              c(N) = sum(1/i), i=1,...,N   any joint distribution 
                       Default:  c(N) = 1                                  
                                                                           
    -quiet             Flag to suppress screen output                      
                                                                           
    -list              Write sorted list of voxel q-values to screen       
                                                                           
    -prefix pname      Use 'pname' for the output dataset prefix name.     
      OR                                                                   
    -output pname                                                          
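
    For example (hypothetical filenames), a sketch converting the
    statistic in sub-brick #2 to FDR z-scores within a mask:
      3dFDR -input 'func+orig[2]' -mask_file mask+orig -prefix func_fdr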
                                                                           

===========================================================================

January 2008: Changes to 3dFDR
------------------------------
The default mode of operation of 3dFDR has altered somewhat:

 * Voxel p-values of exactly 1 (e.g., from t=0 or F=0 or correlation=0)
     are ignored by default; in the old mode of operation, they were
     included in the count which goes into the FDR algorithm.  The old
     process tends to increase the q-values and so decrease the z-scores.

 * The array of voxel p-values is now sorted via Quicksort, rather than
     by binning, as in the old mode.  This probably has no discernible
     effect on the results.

New Options:
------------
    -old     = Use the old mode of operation
    -new     = Use the new mode of operation [now the default]
                N.B.: '-list' does not work in the new mode!
    -pmask   = Instruct the program to ignore p=1 voxels
                [the default in the new mode, but not in the old mode]
               N.B.: voxels that were masked in 3dDeconvolve (etc.)
                     will have their statistics set to 0, which means p=1,
                     which means that such voxels are implicitly masked
                     with '-new', and so don't need to be explicitly
                     masked with the '-mask' option.
    -nopmask = Instruct the program to count p=1 voxels
                [the default in the old mode, but not in the new mode]
    -force   = Force the conversion of all sub-bricks, even if they
                are not marked with a statistical code; such
                sub-bricks are treated as though they were p-values.
    -float   = Force the output of z-scores in floating point format.
    -qval    = Force the output of q-values rather than z-scores.
                N.B.: A smaller q-value is more significant!
                [-float is recommended when -qval is used]

* To be clear, you can use '-new -nopmask' to have the new mode of computing
   carried out, but with p=1 voxels included (which should give results
   virtually identical to '-old').

* Or you can use '-old -pmask' to use the old mode of computing but where
   p=1 voxels are not counted (which should give results virtually
   identical to '-new').

* However, the combination of '-new', '-nopmask' and '-mask_file' does not
   work -- if you try it, '-pmask' will be turned back on and a warning
   message printed to aid your path towards elucidation and enlightenment.

Other Notes:
------------
* '3drefit -addFDR' can be used to add FDR curves of z(q) as a function
    of threshold for all statistic sub-bricks in a dataset; in turn, these
    curves let you see the (estimated) q-value as you move the threshold
    slider in AFNI.
   - Since 3drefit doesn't have a '-mask' option, you will have to mask
     statistical sub-bricks yourself via 3dcalc (if desired):
       3dcalc -a stat+orig -b mask+orig -expr 'a*step(b)' -prefix statmm
   - '-addFDR' runs as if '-new -pmask' were given to 3dFDR, so that
     stat values == 0 are ignored in the FDR calculations.

* q-values are estimates of the False Discovery Rate at a given threshold;
   that is, about 5% of all voxels with q <= 0.05 (z >= 1.96) are
   (presumably) 'false positive' detections, and the other 95% are
   (presumably) 'true positives'.  Of course, there is no way to tell
   which above-threshold voxels are 'true' detections and which are 'false'.

* Note the use of the words 'estimate' and 'about' in the above statement!
   In particular, the accuracy of the q-value calculation depends on the
   assumption that the p-values calculated from the input statistics are
   correctly distributed (e.g., that the DOF parameters are correct).

* The z-score is the conversion of the q-value to a double-sided tail
   probability of the unit Gaussian N(0,1) distribution; that is, z(q)
   is the value such that if x is a N(0,1) random variable, then
   Prob[|x|>z] = q: for example, z(0.05) = 1.95996.

* cf. http://en.wikipedia.org/wiki/False_discovery_rate
* cf. http://afni.nimh.nih.gov/pub/dist/doc/misc/FDR/FDR_Jan2008.pdf
* cf. C source code in mri_fdrize.c
* changes by RWCox -- 18 Jan 2008 == Cary Grant's Birthday!


++ Compile date = Mar 13 2009




AFNI program: 3dFWHM
++ 3dFWHM: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
*+ WARNING: This program is obsolete!  Use 3dFWHMx instead!
This program estimates the Filter Width Half Maximum (FWHM).  

Usage: 
3dFWHM 
-dset file     file  = name of input AFNI 3d dataset 
[-mask mname]  mname = filename of 3d mask dataset   
[-quiet]       suppress screen output                
[-out file]    file  = name of output file           

[-compat] = Be compatible with the older 3dFWHM, where if a
            voxel is in the mask, then its neighbors are used
            for differencing, even if they are not themselves in
            the mask.  This was an error; now, neighbors must also
            be in the mask to be used in the differencing.
            Use '-compat' to use the older method [for comparison].
         ** This change made 09 Nov 2006.

ALSO SEE:
 - 3dFWHMx, which can deal with multi-brick datasets
 - 3dLocalstat -stat FWHM, which can estimate the FWHM at each voxel
3dFWHM itself will no longer be upgraded.  Any future improvements
will be made to 3dFWHMx.  **** PLEASE SWITCH TO THAT PROGRAM ****

INPUT FILE RECOMMENDATIONS:
For FMRI statistical purposes, you DO NOT want the FWHM to reflect
the spatial structure of the underlying anatomy.  Rather, you want
the FWHM to reflect the spatial structure of the noise.  This means
that the input dataset should not have anatomical structure.  One
good form of input is the output of '3dDeconvolve -errts', which is
the residuals left over after the GLM fitted signal model is subtracted
out from each voxel's time series.  If you don't want to go to that
trouble, use the output of 3dDetrend for the same purpose.  But just
giving a raw EPI dataset to this program will produce useless values.


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dFWHMx
Usage: 3dFWHMx [options] dataset

Unlike the older 3dFWHM, this program computes FWHMs for all sub-bricks
in the input dataset, each one separately.  The output for each one is
written to the file specified by '-out'.  The mean (arithmetic or geometric)
of all the FWHMs along each axis is written to stdout.  (A non-positive
output value indicates that something went wrong; e.g., FWHM in z is
meaningless for a 2D dataset.)

METHODS:
 - Calculate ratio of variance of first differences to data variance.
 - Should be the same as 3dFWHM for a 1-brick dataset.
   (But the output format is simpler to use in a script.)

OPTIONS:
  -mask mmm   = Use only voxels that are nonzero in dataset 'mmm'.
  -automask   = Compute a mask from THIS dataset, a la 3dAutomask.
                [Default = use all voxels]

  -input ddd }=
    *OR*     }= Use dataset 'ddd' as the input.
  -dset  ddd }=

  -demed      = If the input dataset has more than one sub-brick
                (e.g., has a time axis), then subtract the median
                of each voxel's time series before processing FWHM.
                This will tend to remove intrinsic spatial structure
                and leave behind the noise.
                [Default = don't do this]
  -unif       = If the input dataset has more than one sub-brick,
                then normalize each voxel's time series to have
                the same MAD before processing FWHM.  Implies -demed.
                [Default = don't do this]
  -detrend [q]= Instead of demed (0th order detrending), detrend to
                order 'q'.  If q is not given, the program picks q=NT/30.
                -detrend disables -demed, and includes -unif.
        **N.B.: I recommend this option, and it is not the default
                only for historical compatibility reasons.  It may
                become the default someday. Depending on my mood.
                It is already the default in program 3dBlurToFWHM.
        **N.B.: This is the same detrending as done in 3dDespike;
                using 2*q+3 basis functions for q > 0.
  -detprefix d= Save the detrended file into a dataset with prefix 'd'.
                Used mostly to figure out what the hell is going on,
                when funky results transpire.

  -geom      }= If the input dataset has more than one sub-brick,
    *OR*     }= compute the final estimate as the geometric mean
  -arith     }= or the arithmetic mean of the individual sub-brick
                FWHM estimates. [Default = -geom, for no good reason]

  -out ttt    = Write output to file 'ttt' (3 columns of numbers).
                If not given, the sub-brick outputs are not written.
                Use '-out -' to write to stdout, if desired.

  -compat     = Be compatible with the older 3dFWHM, where if a
                voxel is in the mask, then its neighbors are used
                for differencing, even if they are not themselves in
                the mask.  This was an error; now, neighbors must also
                be in the mask to be used in the differencing.
                Use '-compat' to use the older method.
              **NOT RECOMMENDED except for comparison purposes!

SAMPLE USAGE: (tcsh)
  set zork = ( `3dFWHMx -automask -input junque+orig` )
Captures the FWHM-x, FWHM-y, FWHM-z values into shell variable 'zork'.

INPUT FILE RECOMMENDATIONS:
For FMRI statistical purposes, you DO NOT want the FWHM to reflect
the spatial structure of the underlying anatomy.  Rather, you want
the FWHM to reflect the spatial structure of the noise.  This means
that the input dataset should not have anatomical structure.  One
good form of input is the output of '3dDeconvolve -errts', which is
the residuals left over after the GLM fitted signal model is subtracted
out from each voxel's time series.  If you don't want to go to that
trouble, use '-unif' to at least partially subtract out the anatomical
spatial structure, or use the output of 3dDetrend for the same purpose.

ALSO SEE:
 - The older program 3dFWHM is superseded by 3dFWHMx.
 - 3dLocalstat -stat FWHM will estimate the FWHM values at each
   voxel, using the same algorithm as this program but applied only
   to a local neighborhood of each voxel in turn.
 - 3dBlurToFWHM will blur a dataset to have a given global FWHM.

-- Emperor Zhark - Halloween 2006 --- BOO!

++ Compile date = Mar 13 2009




AFNI program: 3dFourier
3dFourier 
(c) 1999 Medical College of Wisconsin
by T. Ross and K. Heimerl
Version 0.8 last modified 8-17-99

Usage: 3dFourier [options] dataset

The parameters and options are:
	dataset		an afni compatible 3d+time dataset to be operated upon
	-prefix name	output name for new 3d+time dataset [default = fourier]
	-lowpass f 	low pass filter with a cutoff of f Hz
	-highpass f	high pass filter with a cutoff of f Hz
	-ignore n	ignore the first n images [default = 1]
	-retrend	The mean and linear trend are removed before filtering;
			this option restores them after filtering.

Note that by combining the lowpass and highpass options, one can construct
bandpass and notch filters.
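
For example (hypothetical filenames), a sketch of a 0.01-0.1 Hz
bandpass with trend restoration:
	3dFourier -prefix epi_bp -retrend -highpass 0.01 -lowpass 0.1 epi+orig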

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.



AFNI program: 3dFriedman
++ 3dFriedman: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs the nonparametric Friedman test for
randomized complete block design experiments.                     

Usage:                                                              
3dFriedman                                                          
-levels s                      s = number of treatments             
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                Friedman statistics are written      
                                 to file prefixname                 


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          
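
 For example (a sketch with 2 treatments and hypothetical filenames):
   3dFriedman -levels 2                                             \
              -dset 1 'condA_s1+orig[3]' -dset 1 'condA_s2+orig[3]' \
              -dset 2 'condB_s1+orig[3]' -dset 2 'condB_s2+orig[3]' \
              -out fried_stats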

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dGetrow
Program to extract 1 row from a dataset and write it as a .1D file
Usage: 3dGetrow [options] dataset

OPTIONS:
-------
Exactly ONE of the following three options is required:
 -xrow j k  = extract row along the x-direction at fixed y-index of j
              and fixed z-index of k.
 -yrow i k  = similar for a row along the y-direction
 -zrow i j  = similar for a row along the z-direction
 -input ddd = read input from dataset 'ddd'
              (instead of putting dataset name at end of command line)
 -output ff = filename for output .1D ASCII file will be 'ff'
              (if 'ff' is '-', then output is to stdout, the default)

N.B.: if the input dataset has more than one sub-brick, each
      sub-brick will appear as a separate column in the output file.
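
For example (hypothetical filenames), extracting the x-direction row
at y-index 20 and z-index 15:
  3dGetrow -xrow 20 15 -input anat+orig -output row.1D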

++ Compile date = Mar 13 2009




AFNI program: 3dIntracranial
++ 3dIntracranial: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. D. Ward
3dIntracranial - performs automatic segmentation of intracranial region.
                                                                        
   This program will strip the scalp and other non-brain tissue from a  
   high-resolution T1 weighted anatomical dataset.                      
                                                                        
** Nota Bene: the newer program 3dSkullStrip should also be considered  
**            for this functionality -- it usually works better.        
                                                                        
----------------------------------------------------------------------- 
                                                                        
Usage:                                                                  
-----                                                                   
                                                                        
3dIntracranial                                                          
   -anat filename   => Filename of anat dataset to be segmented         
                                                                        
   [-min_val   a]   => Minimum voxel intensity limit                    
                         Default: Internal PDF estimate for lower bound 
                                                                        
   [-max_val   b]   => Maximum voxel intensity limit                    
                         Default: Internal PDF estimate for upper bound 
                                                                        
   [-min_conn  m]   => Minimum voxel connectivity to enter              
                         Default: m=4                                   
                                                                        
   [-max_conn  n]   => Maximum voxel connectivity to leave              
                         Default: n=2                                   
                                                                        
   [-nosmooth]      => Suppress spatial smoothing of segmentation mask  
                                                                        
   [-mask]          => Generate functional image mask (complement)      
                         Default: Generate anatomical image            
                                                                        
   [-quiet]         => Suppress output to screen                        
                                                                        
   -prefix pname    => Prefix name for file to contain segmented image  
                                                                        
   ** NOTE **: The newer program 3dSkullStrip will probably give        
               better segmentation results than 3dIntracranial!         
----------------------------------------------------------------------- 
                                                                        
Examples:                                                               
--------                                                                
                                                                        
   3dIntracranial -anat elvis+orig -prefix elvis_strip                 
                                                                        
   3dIntracranial -min_val 30 -max_val 350 -anat elvis+orig -prefix strip
                                                                        
   3dIntracranial -nosmooth -quiet -anat elvis+orig -prefix elvis_strip 
                                                                        
----------------------------------------------------------------------- 

++ Compile date = Mar 13 2009




AFNI program: 3dInvFMRI
Usage: 3dInvFMRI [options]
Program to compute stimulus time series, given a 3D+time dataset
and an activation map (the inverse of the usual FMRI analysis problem).
-------------------------------------------------------------------
OPTIONS:

 -data yyy  =
   *OR*     = Defines input 3D+time dataset [a non-optional option].
 -input yyy =

 -map  aaa  = Defines activation map; 'aaa' should be a bucket dataset,
                each sub-brick of which defines the beta weight map for
                an unknown stimulus time series [also non-optional].

 -mapwt www = Defines a weighting factor to use for each element of
                the map.  The dataset 'www' can have either 1 sub-brick,
                or the same number as in the -map dataset.  In the
                first case, in each voxel, each sub-brick of the map
                gets the same weight in the least squares equations.
                  [default: all weights are 1]

 -mask mmm  = Defines a mask dataset, to restrict input voxels from
                -data and -map.  [default: all voxels are used]

 -base fff  = Each column of the 1D file 'fff' defines a baseline time
                series; these columns should be the same length as
                number of time points in 'yyy'.  Multiple -base options
                can be given.
 -polort pp = Adds polynomials of order 'pp' to the baseline collection.
                The default baseline model is '-polort 0' (constant).
                To specify no baseline model at all, use '-polort -1'.

 -out vvv   = Name of 1D output file will be 'vvv'.
                [default = '-', which is stdout; probably not good]

 -method M  = Determines the method to use.  'M' is a single letter:
               -method C = least squares fit to data matrix Y [default]
               -method K = least squares fit to activation matrix A

 -alpha aa  = Set the 'alpha' factor to 'aa'; alpha is used to penalize
                large values of the output vectors.  Default is 0.
                A large-ish value for alpha would be 0.1.

 -fir5     = Smooth the results with a 5 point lowpass FIR filter.
 -median5  = Smooth the results with a 5 point median filter.
               [default: no smoothing; only 1 of these can be used]
-------------------------------------------------------------------
METHODS:
 Formulate the problem as
    Y = V A' + F C' + errors
 where Y = data matrix      (N x M) [from -data]
       V = stimulus         (N x p) [to -out]
       A = map matrix       (M x p) [from -map]
       F = baseline matrix  (N x q) [from -base and -polort]
       C = baseline weights (M x q) [not computed]
       N = time series length = length of -data file
       M = number of voxels in mask
       p = number of stimulus time series to estimate
         = number of parameters in -map file
       q = number of baseline parameters
   and ' = matrix transpose operator
 Next, define matrix Z (Y detrended relative to columns of F) by
    Z = [I - F(F'F)^-1 F'] Y
-------------------------------------------------------------------
 The method C solution is given by
    V0 = Z A [A'A]^-1

 This solution minimizes the sum of squares over the N*M elements
 of the residual matrix   Y - V A' - F C'   (N.B.: A' means A-transpose).
-------------------------------------------------------------------
 The method K solution is given by
    W = [Z Z']^-1 Z A   and then   V = W [W'W]^-1

 This solution minimizes the sum of squares of the difference between
 the A(V) predicted from V and the input A, where A(V) is given by
    A(V) = Z' V [V'V]^-1 = Z' W
-------------------------------------------------------------------
 Technically, the solution is unidentifiable up to an arbitrary
 multiple of the columns of F (i.e., V = V0 + F G, where G is
 an arbitrary q x p matrix); the solution above is the solution
 that is orthogonal to the columns of F.

-- RWCox - March 2006 - purely for experimental purposes!

===================== EXAMPLE USAGE =====================================
** Step 1: From a training dataset, generate activation map.
  The input dataset has 4 runs, each 108 time points long.  3dDeconvolve
  is used on the first 3 runs (time points 0..323) to generate the
  activation map.  There are two visual stimuli (Complex and Simple).

  3dDeconvolve -x1D xout_short_two.1D -input rall_vr+orig'[0..323]'   \
      -num_stimts 2                                                   \
      -stim_file 1 hrf_complex.1D               -stim_label 1 Complex \
      -stim_file 2 hrf_simple.1D                -stim_label 2 Simple  \
      -concat '1D:0,108,216'                                          \
      -full_first -fout -tout                                         \
      -bucket func_ht2_short_two -cbucket cbuc_ht2_short_two

  N.B.: You may want to de-spike, smooth, and register the 3D+time
        dataset prior to the analysis (as usual).  These steps are not
        shown here -- I'm presuming you know how to use AFNI already.

** Step 2: Create a mask of highly activated voxels.
  The F statistic threshold is set to 30, corresponding to a voxel-wise
  p = 1e-12 = very significant.  The mask is also lightly clustered, and
  restricted to brain voxels.

  3dAutomask -prefix Amask rall_vr+orig
  3dcalc -a 'func_ht2_short+orig[0]' -b Amask+orig -datum byte \
         -nscale -expr 'step(a-30)*b' -prefix STmask300
  3dmerge -dxyz=1 -1clust 1.1 5 -prefix STmask300c STmask300+orig

** Step 3: Run 3dInvFMRI to estimate the stimulus functions in run #4.
  Run #4 is time points 324..431 of the 3D+time dataset (the -data
  input below).  The -map input is the beta weights extracted from
  the -cbucket output of 3dDeconvolve.

  3dInvFMRI -mask STmask300c+orig                       \
            -data rall_vr+orig'[324..431]'              \
            -map cbuc_ht2_short_two+orig'[6..7]'        \
            -polort 1 -alpha 0.01 -median5 -method K    \
            -out ii300K_short_two.1D

  3dInvFMRI -mask STmask300c+orig                       \
            -data rall_vr+orig'[324..431]'              \
            -map cbuc_ht2_short_two+orig'[6..7]'        \
            -polort 1 -alpha 0.01 -median5 -method C    \
            -out ii300C_short_two.1D

** Step 4: Plot the results, and get confused.

  1dplot -ynames VV KK CC -xlabel Run#4 -ylabel ComplexStim \
         hrf_complex.1D'{324..431}'                         \
         ii300K_short_two.1D'[0]'                           \
         ii300C_short_two.1D'[0]'

  1dplot -ynames VV KK CC -xlabel Run#4 -ylabel SimpleStim \
         hrf_simple.1D'{324..431}'                         \
         ii300K_short_two.1D'[1]'                          \
         ii300C_short_two.1D'[1]'

  N.B.: I've found that method K works better if MORE voxels are
        included in the mask (lower threshold) and method C if
        FEWER voxels are included.  The above threshold yielded 945
        voxels, which were used to determine the 2 output time series.
=========================================================================

++ Compile date = Mar 13 2009




AFNI program: 3dKruskalWallis
++ 3dKruskalWallis: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs the nonparametric Kruskal-Wallis test for
comparison of multiple treatments.

Usage:                                                              
3dKruskalWallis                                                     
-levels s                      s = number of treatments             
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                Kruskal-Wallis statistics are written
                                 to file prefixname                 


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          
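
Example (a minimal sketch; the dataset names are hypothetical),
comparing 3 treatments with 2 datasets each:

  3dKruskalWallis -levels 3            \
      -dset 1 'treatA_subj1+orig[0]'   \
      -dset 1 'treatA_subj2+orig[0]'   \
      -dset 2 'treatB_subj1+orig[0]'   \
      -dset 2 'treatB_subj2+orig[0]'   \
      -dset 3 'treatC_subj1+orig[0]'   \
      -dset 3 'treatC_subj2+orig[0]'   \
      -out KWtest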

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dLRflip
Usage: 3dLRflip [-LR|-AP|-IS|-X|-Y|-Z] [-prefix ppp] dataset dataset dataset ...
Flips the rows of a dataset along one of the three
axes.  This is useful when you, or some fast
program, constructed a dataset with one of the
directions incorrectly labeled.
 Options:
 -LR | -AP | -IS: Axis about which to flip the data
                  Default is -LR.
      or
 -X | -Y | -Z: Flip about 1st, 2nd or 3rd directions,
               respectively. 
 Note: Only one of these 6 options can be used at a time.
        
 -prefix ppp: Prefix to use for output. If you have 
              multiple datasets as input, you are better
              off letting the program choose a prefix for
              each output.
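
 Example (a minimal sketch; the dataset name is hypothetical):
   3dLRflip -LR -prefix anat_flipLR anat+orig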


++ Compile date = Mar 13 2009




AFNI program: 3dLocalBistat
Usage: 3dLocalBistat [options] dataset1 dataset2

This program computes statistics between 2 datasets,
at each voxel, based on a local neighborhood of that voxel.
 - The neighborhood is defined by the '-nbhd' option.
 - Statistics to be calculated are defined by the '-stat' option(s).

OPTIONS
-------
 -nbhd 'nnn' = The string 'nnn' defines the region around each
               voxel that will be extracted for the statistics
               calculation.  The format of the 'nnn' string is:
               * 'SPHERE(r)' where 'r' is the radius in mm;
                 the neighborhood is all voxels whose center-to-
                 center distance is less than or equal to 'r'.
                 ** A negative value for 'r' means that the region
                    is calculated using voxel indexes rather than
                    voxel dimensions; that is, the neighborhood
                    region is a "sphere" in voxel indexes of
                    "radius" abs(r).
               * 'RECT(a,b,c)' is a rectangular block which
                 extends plus-or-minus 'a' mm in the x-direction,
                 'b' mm in the y-direction, and 'c' mm in the
                 z-direction.  The correspondence between the
                 dataset xyz axes and the actual spatial orientation
                 can be determined by using program 3dinfo.
                 ** A negative value for 'a' means that the region
                    extends plus-and-minus abs(a) voxels in the
                    x-direction, rather than plus-and-minus a mm.
                     Mutatis mutandis for negative 'b' and/or 'c'.
               * 'RHDD(r)' is a rhombic dodecahedron of 'radius' r.
               * 'TOHD(r)' is a truncated octahedron of 'radius' r.

 -stat sss   = Compute the statistic named 'sss' on the values
               extracted from the region around each voxel:
               * pearson  = Pearson correlation coefficient
               * spearman = Spearman correlation coefficient
               * quadrant = Quadrant correlation coefficient
               * mutinfo  = Mutual Information
               * normuti  = Normalized Mutual Information
               * jointent = Joint entropy
               * hellinger= Hellinger metric
               * crU      = Correlation ratio (Unsymmetric)
               * crM      = Correlation ratio (symmetrized by Multiplication)
               * crA      = Correlation ratio (symmetrized by Addition)
               * num    = number of values in the region:
                          with the use of -mask or -automask,
                          the size of the region around any given
                          voxel will vary; this option lets you
                          map that size.
               * ncd    = Normalized Compression Distance (zlib; very slow)
               * ALL    = all of the above, in that order
               More than one '-stat' option can be used.

 -mask mset  = Read in dataset 'mset' and use the nonzero voxels
               therein as a mask.  Voxels NOT in the mask will
               not be used in the neighborhood of any voxel. Also,
               a voxel NOT in the mask will have its statistic(s)
               computed as zero (0).
 -automask   = Compute the mask as in program 3dAutomask.
               -mask and -automask are mutually exclusive: that is,
               you can only specify one mask.
 -weight ws  = Use dataset 'ws' as a weight.  Only applies to 'pearson'.

 -prefix ppp = Use string 'ppp' as the prefix for the output dataset.
               The output dataset is always stored as floats.

ADVANCED OPTIONS
----------------
 -histpow pp   = By default, the number of bins in the histogram used
                 for calculating the Hellinger, Mutual Information, NCD,
                 and Correlation Ratio statistics is n^(1/3), where n
                 is the number of data points in the -nbhd mask.  You
                 can change that exponent to 'pp' with this option.
 -histbin nn   = Or you can just set the number of bins directly to 'nn'.
 -hclip1 a b   = Clip dataset1 to lie between values 'a' and 'b'.  If 'a'
                 and 'b' end in '%', then these values are percentage
                 points on the cumulative histogram.
 -hclip2 a b   = Similar to '-hclip1' for dataset2.
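
EXAMPLE
-------
A minimal sketch (the dataset names are hypothetical): compute the
local Pearson correlation between two aligned datasets in 6 mm
radius spheres, along with the per-voxel count of values used:

  3dLocalBistat -nbhd 'SPHERE(6)' -stat pearson -stat num \
                -mask mask+orig -prefix LocalPearson      \
                dset1+orig dset2+orig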

-----------------------------
Author: RWCox - October 2006.

++ Compile date = Mar 13 2009




AFNI program: 3dLocalCormat
Usage: 3dLocalCormat [options] inputdataset

Compute the correlation matrix (in time) of the input dataset,
up to lag given by -maxlag.  The matrix is averaged over the
neighborhood specified by the -nbhd option, and then the entries
are output at each voxel in a new dataset.

Normally, the input to this program would be the -errts output
from 3dDeconvolve, or the equivalent residuals from some other
analysis.  If you input a non-residual time series file, you
should at least use an appropriate -polort level for detrending!

Options:
  -input inputdataset
  -prefix ppp
  -mask mset    {these 2 options are}
  -automask     {mutually exclusive.}
  -nbhd nnn     [e.g., 'SPHERE(9)' for 9 mm radius]
  -polort ppp   [default = 0, which is reasonable for -errts output]
  -concat ccc   [as in 3dDeconvolve]
  -maxlag mmm   [default = 10]
  -ARMA         [estimate ARMA(1,1) parameters into last 2 sub-bricks]
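
Example (a minimal sketch; the file names are hypothetical, with
the input being residuals from 3dDeconvolve's -errts):

  3dLocalCormat -input errts+orig -automask  \
                -nbhd 'SPHERE(9)' -maxlag 10 \
                -prefix LocalCormat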

A quick hack for my own benignant purposes -- RWCox -- June 2008

++ Compile date = Mar 13 2009




AFNI program: 3dLocalSVD
Usage: 3dLocalSVD [options] inputdataset
You probably want to use 3dDetrend before running this program!

Options:
 -mask mset
 -automask
 -prefix ppp
 -input inputdataset
 -nbhd nnn
 -vmean
 -vnorm
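
Example (a minimal sketch; the dataset names are hypothetical --
detrending first, per the suggestion above):

  3dDetrend -polort 2 -prefix epi_det epi+orig
  3dLocalSVD -input epi_det+orig -automask \
             -nbhd 'SPHERE(5)' -prefix LocalSVD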

++ Compile date = Mar 13 2009




AFNI program: 3dLocalstat
Usage: 3dLocalstat [options] dataset

This program computes statistics at each voxel, based on a
local neighborhood of that voxel.
 - The neighborhood is defined by the '-nbhd' option.
 - Statistics to be calculated are defined by the '-stat' option(s).

OPTIONS
-------
 -nbhd 'nnn' = The string 'nnn' defines the region around each
               voxel that will be extracted for the statistics
               calculation.  The format of the 'nnn' string is:
               * 'SPHERE(r)' where 'r' is the radius in mm;
                 the neighborhood is all voxels whose center-to-
                 center distance is less than or equal to 'r'.
                 ** A negative value for 'r' means that the region
                    is calculated using voxel indexes rather than
                    voxel dimensions; that is, the neighborhood
                    region is a "sphere" in voxel indexes of
                    "radius" abs(r).
               * 'RECT(a,b,c)' is a rectangular block which
                 extends plus-or-minus 'a' mm in the x-direction,
                 'b' mm in the y-direction, and 'c' mm in the
                 z-direction.  The correspondence between the
                 dataset xyz axes and the actual spatial orientation
                 can be determined by using program 3dinfo.
                 ** A negative value for 'a' means that the region
                    extends plus-and-minus abs(a) voxels in the
                    x-direction, rather than plus-and-minus a mm.
                     Mutatis mutandis for negative 'b' and/or 'c'.
               * 'RHDD(a)' where 'a' is the size parameter in mm;
                 this is Kepler's rhombic dodecahedron [volume=2*a^3].
               * 'TOHD(a)' where 'a' is the size parameter in mm;
                 this is a truncated octahedron. [volume=4*a^3]
                 ** This is the polyhedral shape that tiles space
                    and is the most 'sphere-like'.
               * If no '-nbhd' option is given, the region extracted
                 will just be the voxel and its 6 nearest neighbors.
               * Voxels not in the mask (if any) or outside the
                 dataset volume will not be used.  This means that
                 different output voxels will have different numbers
                 of input voxels that went into calculating their
                 statistics.  The 'num' statistic can be used to
                 get this count on a per-voxel basis, if you need it.

 -stat sss   = Compute the statistic named 'sss' on the values
               extracted from the region around each voxel:
               * mean   = average of the values
               * stdev  = standard deviation
               * var    = variance (stdev*stdev)
               * cvar   = coefficient of variation = stdev/fabs(mean)
               * median = median of the values
               * MAD    = median absolute deviation
               * min    = minimum
               * max    = maximum
               * absmax = maximum of the absolute values
               * num    = number of values in the region:
                          with the use of -mask or -automask,
                          the size of the region around any given
                          voxel will vary; this option lets you
                          map that size.  It may be useful if you
                          plan to compute a t-statistic (say) from
                          the mean and stdev outputs.
               * sum    = sum of the values in the region:
               * FWHM   = compute (like 3dFWHM) image smoothness
                          inside each voxel's neighborhood.  Results
                           are in 3 sub-bricks: FWHMx, FWHMy, and FWHMz.
                          Places where an output is -1 are locations
                          where the FWHM value could not be computed
                          (e.g., outside the mask).
               * FWHMbar= Compute just the average of the 3 FWHM values
                          (normally would NOT do this with FWHM also).
               * perc:P0:P1:Pstep = 
                          Compute percentiles between P0 and P1 with a 
                          step of Pstep.
                           Default P1 is equal to P0, and default Pstep = 1.
               * ALL    = all of the above, in that order 
                         (except FWHMbar and perc).
               More than one '-stat' option can be used.

 -mask mset  = Read in dataset 'mset' and use the nonzero voxels
               therein as a mask.  Voxels NOT in the mask will
               not be used in the neighborhood of any voxel. Also,
               a voxel NOT in the mask will have its statistic(s)
               computed as zero (0).
 -automask   = Compute the mask as in program 3dAutomask.
               -mask and -automask are mutually exclusive: that is,
               you can only specify one mask.

 -prefix ppp = Use string 'ppp' as the prefix for the output dataset.
               The output dataset is normally stored as floats.

 -datum type = Coerce the output data to be stored as the given type, 
               which may be byte, short, or float.
               Default is float

 -quiet      = Stop the highly informative progress reports.
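
EXAMPLE
-------
A minimal sketch (the dataset name is hypothetical): compute the
local mean and standard deviation in 8 mm radius spheres, inside
an automask:

  3dLocalstat -nbhd 'SPHERE(8)' -stat mean -stat stdev \
              -automask -prefix LocalMeanStdev anat+orig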

Author: RWCox - August 2005.  Instigator: ZSSaad.

++ Compile date = Mar 13 2009




AFNI program: 3dMINCtoAFNI
Usage: 3dMINCtoAFNI [-prefix ppp] dataset.mnc
Reads in a MINC formatted file and writes it out as an
AFNI dataset file pair with the given prefix.  If the
prefix option isn't used, the input filename will be
used, after the '.mnc' is chopped off.

NOTES:
* Setting environment variable AFNI_MINC_FLOATIZE to Yes
   will cause MINC datasets to be converted to floats on
   input.  Otherwise, they will be kept in their 'native'
   data type if possible, which may cause problems with
   scaling on occasion.
* The TR recorded in MINC files is often incorrect.  You may
   need to fix this (or other parameters) using 3drefit.
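
Example (a minimal sketch; the file name and TR are hypothetical):

  setenv AFNI_MINC_FLOATIZE Yes
  3dMINCtoAFNI -prefix fred fred.mnc
  3drefit -TR 2.0 fred+orig

The 3drefit step repairs an incorrect TR, per the note above.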

++ Compile date = Mar 13 2009




AFNI program: 3dMannWhitney
++ 3dMannWhitney: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs the nonparametric Mann-Whitney two-sample test.

Usage: 
3dMannWhitney 
-dset 1 filename               data set for X observations          
 . . .                           . . .                              
-dset 1 filename               data set for X observations          
-dset 2 filename               data set for Y observations          
 . . .                           . . .                              
-dset 2 filename               data set for Y observations          
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                estimated population delta and       
                                 Wilcoxon-Mann-Whitney statistics   
                                 written to file prefixname         


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          
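
Example (a minimal sketch; the dataset names are hypothetical):

  3dMannWhitney -dset 1 'groupX_s1+orig[0]' \
                -dset 1 'groupX_s2+orig[0]' \
                -dset 2 'groupY_s1+orig[0]' \
                -dset 2 'groupY_s2+orig[0]' \
                -out MannWhit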

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dMean
Usage: 3dMean [options] dset dset ...
Takes the voxel-by-voxel mean of all input datasets;
the main reason is to be faster than 3dcalc.

Options [see 3dcalc -help for more details on these]:
  -verbose    = Print out some information along the way.
  -prefix ppp = Sets the prefix of the output dataset.
  -datum ddd  = Sets the datum of the output dataset.
  -fscale     = Force scaling of the output to the maximum integer range.
  -gscale     = Same as '-fscale', but also forces each output sub-brick to
                  to get the same scaling factor.
  -nscale     = Don't do any scaling on output to byte or short datasets.

  -sd *OR*    = Calculate the standard deviation [sqrt(variance), with
  -stdev         n-1 in the denominator] instead of the mean
                   (cannot be used with -sqr or -sum).

  -sqr        = Average the squares, instead of the values.
  -sum        = Just take the sum (don't divide by number of datasets).

N.B.: All input datasets must have the same number of voxels along
       each axis (x,y,z,t).
    * At least 2 input datasets are required.
    * Dataset sub-brick selectors [] are allowed.
    * The output dataset origin, time steps, etc., are taken from the
       first input dataset.
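
Example (a minimal sketch; the dataset names are hypothetical):

  3dMean -prefix meanABC a+orig b+orig c+orig
  3dMean -stdev -prefix stdevABC a+orig b+orig c+orig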

++ Compile date = Mar 13 2009




AFNI program: 3dMedianFilter
Usage: 3dMedianFilter [options] dataset
Computes the median in a spherical nbhd around each point in the
input to produce the output.

Options:
  -irad x    = Radius in voxels of spherical regions
  -iter n    = Iterate 'n' times [default=1]
  -verb      = Be verbose during run
  -prefix pp = Use 'pp' for prefix of output dataset
  -automask  = Create a mask (a la 3dAutomask)

Output dataset is always stored in float format.  If the input
dataset has more than 1 sub-brick, only sub-brick #0 is processed.
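
Example (a minimal sketch; the dataset name is hypothetical):

  3dMedianFilter -irad 2 -iter 2 -prefix anat_med anat+orig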

-- Feb 2005 - RWCox

++ Compile date = Mar 13 2009




AFNI program: 3dNLfim
++ 3dNLfim: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program calculates a nonlinear regression for each voxel of the  
input AFNI 3d+time data set.  The nonlinear regression is calculated  
by means of a least squares fit to the signal plus noise models which 
are specified by the user.                                            
                                                                      
Usage:                                                                
3dNLfim                                                               
-input fname       fname = filename of 3d + time data file for input  
[-mask mset]       Use the 0 sub-brick of dataset 'mset' as a mask    
                     to indicate which voxels to analyze (a sub-brick 
                     selector is allowed)  [default = use all voxels] 
[-ignore num]      num   = skip this number of initial images in the  
                     time series for regression analysis; default = 0
               ****N.B.: default ignore value changed from 3 to 0,    
                         on 04 Nov 2008 (BHO day).                    
[-inTR]            set delt = TR of the input 3d+time dataset         
                     [The default is to compute with delt = 1.0 ]     
                     [The model functions are calculated using a      
                      time grid of: 0, delt, 2*delt, 3*delt, ... ]    
[-TR delt]         directly set the TR of the time series model;      
                     can be useful if the input file is a .1D file    
                     (transposed with the \' operator)               
[-time fname]      fname = ASCII file containing each time point      
                     in the time series. Defaults to even spacing     
                     given by TR (this option overrides -inTR).       
-signal slabel     slabel = name of (non-linear) signal model         
-noise  nlabel     nlabel = name of (linear) noise model              
-sconstr k c d     constraints for kth signal parameter:              
                      c <= gs[k] <= d                                 
                 **N.B.: It is important to set the parameter         
                         constraints with care!                       
                 **N.B.: -sconstr and -nconstr options must appear    
                         AFTER -signal and -noise on the command line 
-nconstr k c d     constraints for kth noise parameter:               
                      c+b[k] <= gn[k] <= d+b[k]                       
[-nabs]            use absolute constraints for noise parameters:     
                     c <= gn[k] <= d  [default=relative, as above]    
[-nrand n]         n = number of random test points [default=19999]      
[-nbest b]         b = use b best test points to start [default=9]   
[-rmsmin r]        r = minimum rms error to reject reduced model      
[-fdisp fval]      display (to screen) results for those voxels       
                     whose f-statistic is > fval [default=999.0]       
[-progress ival]   display (to screen) results for those voxels       
                     every ival number of voxels                      
[-voxel_count]     display (to screen) the current voxel index        
                                                                      
--- These options choose the least-square minimization algorithm ---  
                                                                      
[-SIMPLEX]         use Nelder-Mead simplex method [default]           
[-POWELL]          use Powell's NEWUOA method instead of the          
                     Nelder-Mead simplex method to find the           
                     nonlinear least-squares solution                 
                     [slower; usually more accurate, but not always!] 
[-BOTH]            use both Powell's and Nelder-Mead method           
                     [slowest, but should be most accurate]           
                                                                      
--- These options generate individual AFNI 2 sub-brick datasets ---   
--- [All these options must be AFTER options -signal and -noise]---   
                                                                      
[-freg fname]      perform f-test for significance of the regression; 
                     output 'fift' is written to prefix filename fname
[-frsqr fname]     calculate R^2 (coef. of multiple determination);   
                     store along with f-test for regression;          
                     output 'fift' is written to prefix filename fname
[-fsmax fname]     estimate signed maximum of signal; store along     
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-ftmax fname]     estimate time of signed maximum; store along       
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-fpsmax fname]    calculate (signed) maximum percentage change of    
                     signal from baseline; output 'fift' is           
                     written to prefix filename fname                 
[-farea fname]     calculate area between signal and baseline; store  
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-fparea fname]    percentage area of signal relative to baseline;    
                     store with f-test for regression; output 'fift'  
                     is written to prefix filename fname              
[-fscoef k fname]  estimate kth signal parameter gs[k]; store along   
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-fncoef k fname]  estimate kth noise parameter gn[k]; store along    
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-tscoef k fname]  perform t-test for significance of the kth signal  
                     parameter gs[k]; output 'fitt' is written        
                     to prefix filename fname                         
[-tncoef k fname]  perform t-test for significance of the kth noise   
                     parameter gn[k]; output 'fitt' is written        
                     to prefix filename fname                         
                                                                      
--- These options generate one AFNI 'bucket' type dataset ---         
                                                                      
[-bucket n prefixname]   create one AFNI 'bucket' dataset containing  
                           n sub-bricks; n=0 creates default output;  
                           output 'bucket' is written to prefixname   
The mth sub-brick will contain:                                       
[-brick m scoef k label]   kth signal parameter regression coefficient
[-brick m ncoef k label]   kth noise parameter regression coefficient 
[-brick m tmax label]      time at max. abs. value of signal          
[-brick m smax label]      signed max. value of signal                
[-brick m psmax label]     signed max. value of signal as percent     
                             above baseline level                     
[-brick m area label]      area between signal and baseline           
[-brick m parea label]     signed area between signal and baseline    
                             as percent of baseline area              
[-brick m tscoef k label]  t-stat for kth signal parameter coefficient
[-brick m tncoef k label]  t-stat for kth noise parameter coefficient 
[-brick m resid label]     std. dev. of the full model fit residuals  
[-brick m rsqr  label]     R^2 (coefficient of multiple determination)
[-brick m fstat label]     F-stat for significance of the regression  

[-noFDR]                   Don't write the FDR (q vs. threshold)
                           curves into the output dataset.
                           (Same as 'setenv AFNI_AUTOMATIC_FDR NO')
                                                                      
     --- These options write time series fit for ---                  
     --- each voxel to an AFNI 3d+time dataset   ---                  
                                                                      
[-sfit fname]      fname = prefix for output 3d+time signal model fit 
[-snfit fname]     fname = prefix for output 3d+time signal+noise fit 
                                                                      

 -jobs J   Run the program with 'J' jobs (sub-processes).
             On a multi-CPU machine, this can speed the
             program up considerably.  On a single CPU
             machine, using this option is silly.
             J should be a number from 1 up to the
             number of CPU sharing memory on the system.
             J=1 is normal (single process) operation.
             The maximum allowed value of J is 32.
         * For more information on parallelizing, see
             http://afni.nimh.nih.gov/afni/doc/misc/parallize.html
         * Use -mask to get more speed; cf. 3dAutomask.
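
Example (a minimal sketch; the file names are hypothetical, and the
signal/noise models are taken from the lists below):

  3dNLfim -input epi+orig -mask mask+orig -inTR \
          -signal GammaVar -noise Linear        \
          -bucket 0 NLbuck -snfit NLsnfit -jobs 2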

----------------------------------------------------------------------
Signal Models (see the appropriate model_*.c file for exact details) :

  Null                     : No Signal
                             (no parameters)
                             see model_null.c

  SineWave_AP              : Sinusoidal Response
                             (amplitude, phase)
                             see model_sinewave_ap.c

  SquareWave_AP            : Square Wave Response
                             (amplitude, phase)
                             see model_squarewave_ap.c

  TrnglWave_AP             : Triangular Wave Response
                             (amplitude, phase)
                             see model_trnglwave_ap.c

  SineWave_APF             : Sinusoidal Wave Response
                             (amplitude, phase, frequency)
                             see model_sinewave_apf.c

  SquareWave_APF           : Square Wave Response
                             (amplitude, phase, frequency)
                             see model_squarewave_apf.c

  TrnglWave_APF            : Triangular Wave Response
                             (amplitude, phase, frequency)
                             see model_trnglwave_apf.c

  Exp                      : Exponential Function
                             (a,b): a * exp(b * t)
                             see model_exp.c

  DiffExp                  : Differential-Exponential Drug Response
                             (t0, k, alpha1, alpha2)
                             see model_diffexp.c

  GammaVar                 : Gamma-Variate Function Drug Response
                             (t0, k, r, b)
                             see model_gammavar.c

  Beta                     : Beta Distribution Model
                             (t0, tf, k, alpha, beta)
                             see model_beta.c

  ConvGamma2a              : Gamma Convolution with 2 Input Time Series
                             (t0, r, b)
                             see model_convgamma2a.c

  ConvGamma                : Gamma Variate Response Model
                             (t0, amp, r, b)
                             see model_convgamma.c

  demri_3                  : Dynamic (contrast) Enhanced MRI
                             (K_trans, Ve, k_ep)
                             see model_demri_3.c
                  for help : setenv AFNI_MODEL_HELP_DEMRI_3 YES

  ADC                      : Diffusion Signal Model
                             (So, D)
                             see model_diffusion.c

  michaelis_menton         : Michaelis/Menten Concentration Model
                             (v, vmax, k12, k21, mag)
                             see model_michaelis_menton.c

  Expr2                    : generic (3dcalc-like) expression with
                             exactly 2 'free' parameters and using
                             symbol 't' as the time variable;
                             see model_expr2.c for details.

----------------------------------------
Noise Models (see the appropriate model_*.c file for exact details) :

  Zero                     : Zero Noise Model
                             (no parameters)
                             see model_zero.c

  Constant                 : Constant Noise Model
                             (constant)
                             see model_constant.c

  Linear                   : Linear Noise Model
                             (constant, linear)
                             see model_linear.c

  Linear+Ort               : Linear+Ort Noise Model
                             (constant, linear, Ort)
                             see model_linplusort.c

  Quadratic                : Quadratic Noise Model
                             (constant, linear, quadratic)
                             see model_quadratic.c

++ Compile date = Mar 13 2009




AFNI program: 3dNotes
Program: 3dNotes 
Author:  T. Ross 
(c)1999 Medical College of Wisconsin 
                                                                        
3dNotes - a program to add, delete and show notes for AFNI datasets.    
 
----------------------------------------------------------------------- 
                                                                        
Usage: 3dNotes [-a "string"] [-h "string"] [-HH "string"] [-d num] [-ses] [-help] dataset
 
Examples: 
 
3dNotes -a      "Subject sneezed in scanner, Aug 13 2004" elvis+orig     
3dNotes -h      "Subject likes fried PB & banana sandwiches" elvis+orig  
3dNotes -HH     "Subject has left the building" elvis+orig              
3dNotes -d 2 -h "Subject sick of PB'n'banana sandwiches" elvis+orig  
 
----------------------------------------------------------------------- 
                                                                        
Explanation of Options:
---------------------- 
   dataset       : AFNI compatible dataset [required].
                                                                        
   -a   "str"  : Add the string "str" to the list of notes.
                                                                        
                   Note that you can use the standard C escape codes,
                    \n for newline, \t for tab, etc.
                                                                        
   -h   "str"   : Append the string "str" to the dataset's history.  This
                    can only appear once on the command line.  As this is
                    added to the history, it cannot easily be deleted. But,
                    history is propagated to the children of this dataset.
                                                                        
   -HH  "str"   : Replace any existing history note with "str".  This 
                    line cannot be used with '-h'.
                                                                        
   -d   num       : Deletes note number num.
                                                                        
   -ses           : Print to stdout the expanded notes.                 
                                                                        
   -help          : Displays this screen.
                                                                        
                                                                        
The default action, with no options, is to display the notes for the
dataset.  If there are options, all deletions occur first and essentially
simultaneously.  Then, notes are added in the order listed on the command
line.  If you do something like -d 10 -d 10, it will delete both notes 10
and 11.  Don't do that.


++ Compile date = Mar 13 2009




AFNI program: 3dOverlap
Usage: 3dOverlap [options] dset1 dset2 ...
Output = count of number of voxels that are nonzero in ALL
         of the input dataset sub-bricks
The result is simply a number printed to stdout.  (If a single
brick was input, this is just the count of the number of nonzero
voxels in that brick.)
Options:
  -save ppp = Save the count of overlaps at each voxel into a
              dataset with prefix 'ppp' (properly thresholded,
              this could be used as a mask dataset).
Example:
  3dOverlap -save abcnum a+orig b+orig c+orig
  3dmaskave -mask 'abcnum+orig<3..3>' a+orig

++ Compile date = Mar 13 2009




AFNI program: 3dPAR2AFNI.pl
3dPAR2AFNI
Version: 2008/07/18 11:12

Command line Options:
-h     This help message.
-v     Be verbose in operation.
-s     Skip the outliers test when converting 4D files.
       The default is to perform the outliers test.
-n     Output NIfTI files instead of HEAD/BRIK.
       The default is to create HEAD/BRIK files.
-a     Output ANALYZE files instead of HEAD/BRIK.
-o     The name of the directory where the created files should be
       placed.  If this directory does not exist the program exits
       without performing any conversion.
       The default is to place created files in the same directory
       as the PAR files.
-g     Gzip the files created.
       The default is not to gzip the files.
-2     2-Byte-swap the files created.
       The default is not to 2 byte-swap.
-4     4-Byte-swap the files created.
       The default is not to 4 byte-swap.

Sample invocations:
3dPAR2AFNI subject1.PAR
       Converts the file subject1.PAR to subject1+orig.{HEAD,BRIK}
3dPAR2AFNI -s subject1.PAR
       Same as above but skip the outlier test
3dPAR2AFNI -n subject1.PAR
       Converts the file subject1.PAR to subject1.nii
3dPAR2AFNI -n -s subject1.PAR
       Same as above but skip the outlier test
3dPAR2AFNI -n -s -o ~/tmp subject1.PAR
       Same as above but skip the outlier test and place the
       created NIfTI files in ~/tmp
3dPAR2AFNI -n -s -o ~/tmp *.PAR
       Converts all the PAR/REC files in the current directory to
       NIfTI files, skip the outlier test and place the created
       NIfTI files in ~/tmp



AFNI program: 3dREMLfit
Usage: 3dREMLfit [options]

Least squares time series fit, with REML estimation of the
temporal auto-correlation structure.

* This program provides a generalization of 3dDeconvolve:
    it allows for serial correlation in the time series noise.
* It solves the linear equations for each voxel in the generalized
    (prewhitened) least squares sense, using the REML estimation method
    to find a best-fit ARMA(1,1) model for the time series noise
    correlation matrix in each voxel.
* You must run 3dDeconvolve first to generate the input matrix
    (.xmat.1D) file, which contains the hemodynamic regression
    model, censoring and catenation information, the GLTs, etc.
* If you don't want the 3dDeconvolve analysis to run, you can
    prevent that by using 3dDeconvolve's '-x1D_stop' option.
* 3dDeconvolve also prints out a cognate command line for running
    3dREMLfit, which should get you going with relative ease.
* The output datasets from 3dREMLfit are structured to resemble
    the corresponding results from 3dDeconvolve, to make it
    easy to adapt your scripts for further processing.
* Is this type of analysis important?
    That depends on your point of view, your data, and your goals.
    If you really want to know the answer, you should run
    your analyses both ways (with 3dDeconvolve and 3dREMLfit),
    through to the final step (e.g., group analysis), and then
    decide if your neuroscience/brain conclusions depend strongly
    on the type of linear regression that was used.

-------------------------------------------
Input Options (the first two are mandatory)
-------------------------------------------
 -input ddd  = Read time series dataset 'ddd'.

 -matrix mmm = Read the matrix 'mmm', which should have been
                 output from 3dDeconvolve via the '-x1D' option.
            *** N.B.: 3dREMLfit will NOT work with all zero columns,
                      unlike 3dDeconvolve!

 -mask kkk   = Read dataset 'kkk' as a mask for the input.
 -automask   = If you don't know what this does by now, I'm not telling.

-----------------------------------------------
Options to Add Columns to the Regression Matrix
-----------------------------------------------
 -addbase bb = You can add baseline model columns to the matrix with
                 this option.  Each column in the .1D file 'bb' will
                 be appended to the matrix.  This file must have at
                 least as many rows as the matrix does.
              * Multiple -addbase options can be used, if needed.
              * More than 1 file can be specified, as in
                  -addbase fred.1D ethel.1D elvis.1D
              * No .1D filename can start with the '-' character.
              * If the matrix from 3dDeconvolve was censored, then
                  this file (and '-slibase' files) can either be
                  censored to match, OR 3dREMLfit will censor these
                  .1D files for you.
               + If the column length (number of rows) of the .1D file
                   is the same as the column length of the censored
                   matrix, then the .1D file WILL NOT be censored.
               + If the column length of the .1D file is the same
                   as the column length of the uncensored matrix,
                   then the .1D file WILL be censored -- the same
                   rows excised from the matrix in 3dDeconvolve will
                   be resected from the .1D file before the .1D file's
                   columns are appended to the matrix.
               + The censoring information from 3dDeconvolve is stored
                   in the matrix file header, and you don't have to
                   provide it again on the 3dREMLfit command line!

 -slibase bb = Similar to -addbase in concept, BUT each .1D file 'bb'
                  must have a number of columns that is an integer
                  multiple of the number of slices in the input
                  dataset; then, separate regression
                 matrices are generated for each slice, with the
                 [0] column of 'bb' appended to the matrix for
                 the #0 slice of the dataset, the [1] column of 'bb'
                 appended to the matrix for the #1 slice of the dataset,
                 and so on.  For example, if the dataset has 3 slices
                 and file 'bb' has 6 columns, then the order of use is
                     bb[0] --> slice #0 matrix
                     bb[1] --> slice #1 matrix
                     bb[2] --> slice #2 matrix
                     bb[3] --> slice #0 matrix
                     bb[4] --> slice #1 matrix
                     bb[5] --> slice #2 matrix
              * Intended to help model physiological noise in FMRI,
                 or other effects you want to regress out that might
                 change significantly in the inter-slice time intervals.
              * Slices are the 3rd dimension in the dataset storage
                 order -- 3dinfo can tell you what that direction is:
                   Data Axes Orientation:
                     first  (x) = Right-to-Left
                     second (y) = Anterior-to-Posterior
                     third  (z) = Inferior-to-Superior   [-orient RAI]
                 In the above example, the slice direction is from
                 Inferior to Superior, so the columns in the '-slibase'
                 input file should be ordered in that direction as well.
              * Will slow the program down, and make it use a
                  lot more memory (to hold all the matrix stuff).
            *** At this time, 3dSynthesize has no way of incorporating
                  the extra baseline timeseries from -addbase or -slibase.
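               * A minimal sketch of -slibase usage (hypothetical file
                   names; e.g., 'resp_card.1D' holding slice-wise
                   physiological noise regressors):
                     3dREMLfit -matrix X.xmat.1D -input epi+orig \
                               -slibase resp_card.1D -Rbuck RemlBuck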

 -usetemp    = Write intermediate stuff to disk, to economize on RAM.
                 Using this option might be necessary to run with
                 '-slibase' and with '-Grid' values above the default,
                 since the program has to store a large number of
                 matrices for such a problem: two for every slice and
                 for every (a,b) pair in the ARMA parameter grid.
              * '-usetemp' can actually speed the program up, interestingly,
                   even if you have enough RAM to hold all the intermediate
                   matrices needed with '-slibase'.  YMMV.
              * '-usetemp' also writes temporary files to store dataset
                   results, which can help if you are creating multiple large
                    datasets (e.g., -Rfitts and -Rerrts in the same program run).
              * Temporary files are written to the directory given
                  in environment variable TMPDIR, or in /tmp, or in ./
                  (preference is in that order).
                 + If the program crashes, these files are named
                     REML_somethingrandom, and you might have to
                     delete them manually.
                 + If the program ends normally, it will delete
                     these temporary files before it exits.
                 + Several gigabytes of disk space might be used
                     for this temporary storage!
              * If the program crashes with a 'malloc failure' type of
                  message, then try '-usetemp' (malloc=memory allocator).
              * If you use '-verb', then memory usage is printed out
                  at various points along the way.

 -nodmbase   = By default, baseline columns added to the matrix
                 via '-addbase' or '-slibase' will each have their
                 mean removed (as is done in 3dDeconvolve).  If you
                 do NOT want this operation performed, use '-nodmbase'.
              * Using '-nodmbase' would make sense if you used
                  '-polort -1' to set up the matrix in 3dDeconvolve, and/or
                 you actually care about the fit coefficients of the
                 extra baseline columns.

------------------------------------------------------------------------
Output Options (at least one must be given; 'ppp' = dataset prefix name)
------------------------------------------------------------------------
 -Rvar  ppp  = dataset for REML variance parameters
 -Rbeta ppp  = dataset for beta weights from the REML estimation
                 (similar to the -cbucket output from 3dDeconvolve)
 -Rbuck ppp  = dataset for beta + statistics from the REML estimation;
                 also contains the results of any GLT analysis requested
                 in the 3dDeconvolve setup.
                 (similar to the -bucket output from 3dDeconvolve)
 -Rglt  ppp  = dataset for beta + statistics from the REML estimation,
                 but ONLY for the GLTs added on the 3dREMLfit command
                 line itself via '-gltsym'; GLTs from 3dDeconvolve's
                 command line will NOT be included.
               * Intended to give an easy way to get extra contrasts
                   after an earlier 3dREMLfit run.
               * Use with '-ABfile vvv' to read the (a,b) parameters
                   from the earlier run, where 'vvv' is the '-Rvar'
                   dataset output from that run.

 -fout       = put F-statistics into the bucket dataset
 -rout       = put R^2 statistics into the bucket dataset
 -tout       = put t-statistics into the bucket dataset
                 (if you use -Rbuck and do not give any of -fout, -tout,)
                 (or -rout, then the program assumes -fout is activated.)
 -noFDR      = do NOT add FDR curve data to bucket datasets
                 (FDR curves can take a long time if -tout is used)

 -Rfitts ppp = dataset for REML fitted model
                 (like 3dDeconvolve, a censored time point gets)
                 (the actual data values from that time index!!)

 -Rerrts ppp = dataset for REML residuals = data - fitted model
                 (like 3dDeconvolve,  a censored time)
                 (point gets its residual set to zero)
 -Rwherr ppp = dataset for REML residual, whitened using the
                 estimated ARMA(1,1) correlation matrix of the noise
                 [Note that the whitening matrix used is the inverse  ]
                 [of the Choleski factor of the correlation matrix C; ]
                 [however, the whitening matrix isn't uniquely defined]
                 [(any matrix W with C=inv(W'W) will work), so other  ]
                 [whitening schemes could be used and these would give]
                 [different whitened residual time series datasets.   ]

 -gltsym g h = read a symbolic GLT from file 'g' and label it with
                 string 'h'
                * As in 3dDeconvolve, you can also use the 'SYM:' method
                    to put the definition of the GLT directly on the
                    command line.
                * The symbolic labels for the stimuli are as provided
                    in the matrix file, from 3dDeconvolve.
              *** Unlike 3dDeconvolve, you supply the label 'h' for
                    the output coefficients and statistics directly
                    after the matrix specification 'g'.
                * Like 3dDeconvolve, the matrix generated by the
                    symbolic expression will be printed to the screen
                    unless environment variable AFNI_GLTSYM_PRINT is NO.
                * These GLTs are in addition to those stored in the
                    matrix file, from 3dDeconvolve.
                * If you don't create a bucket dataset using one of
                    -Rbuck or -Rglt (or -Obuck / -Oglt), using
                    -gltsym is completely pointless!
               ** Besides the stimulus labels read from the matrix
                    file (put there by 3dDeconvolve), you can refer
                    to regressor columns in the matrix using the
                    symbolic name 'Col', which collectively means
                    all the columns in the matrix.  'Col' is a way
                    to test '-addbase' and/or '-slibase' regressors
                    for significance; for example, if you have a
                    matrix with 10 columns from 3dDeconvolve and
                    add 2 extra columns to it, then you could use
                      -gltsym 'SYM: Col[[10..11]]' Addons -tout -fout
                    to create a GLT to include both of the added
                    columns (numbers 10 and 11).

The options below let you get the Ordinary Least SQuares outputs
(without adjustment for serial correlation), for comparisons.
These datasets should be essentially identical to the results
you would get by running 3dDeconvolve (with the '-float' option!):

 -Ovar   ppp = dataset for OLSQ st.dev. parameter (kind of boring)
 -Obeta  ppp = dataset for beta weights from the OLSQ estimation
 -Obuck  ppp = dataset for beta + statistics from the OLSQ estimation
 -Oglt   ppp = dataset for beta + statistics from '-gltsym' options
 -Ofitts ppp = dataset for OLSQ fitted model
 -Oerrts ppp = dataset for OLSQ residuals (data - fitted model)
                 (there is no -Owherr option; if you don't)
                 (see why, then think about it for a while)

Note that you don't have to use any of the '-R' options; you could
use 3dREMLfit just for the '-O' options if you want.  In that case,
the program will skip the time consuming ARMA(1,1) estimation for
each voxel, by pretending you used the option '-ABfile =0,0'.
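
Example (a minimal sketch; the matrix and dataset names are
hypothetical -- the .xmat.1D file comes from an earlier
3dDeconvolve run):

  3dREMLfit -matrix X.xmat.1D -input epi+orig -mask mask+orig \
            -fout -tout -Rbuck RemlBuck -Rvar RemlVar -Rfitts RemlFitts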

-------------------------------------------------------------------
The following options control the ARMA(1,1) parameter estimation
for each voxel time series; normally, you do not need these options
-------------------------------------------------------------------
 -MAXa am   = Set max allowed AR a parameter to 'am' (default=0.8).
                The range of a values scanned is   0 .. +am (-POScor)
                                           or is -am .. +am (-NEGcor).

 -MAXb bm   = Set max allowed MA b parameter to 'bm' (default=0.8).
                The range of b values scanned is -bm .. +bm.
               * The largest value allowed for am and bm is 0.9.
               * The smallest value allowed for am and bm is 0.1.
               * For a nearly pure AR(1) model, use '-MAXb 0.1'
               * For a nearly pure MA(1) model, use '-MAXa 0.1'

 -Grid pp   = Set the number of grid divisions in the (a,b) grid
                to be 2^pp in each direction over the range 0..MAX.
                The default (and minimum) value for 'pp' is 3.
                Larger values will provide a finer resolution
                in a and b, but at the cost of some CPU time.
               * To be clear, the default settings use a grid
                   with 8 divisions in the a direction and 16 in
                   the b direction (since a is non-negative but
                   b can be either sign).
               * If -NEGcor is used, then '-Grid 3' means 16 divisions
                   in each direction, so that the grid spacing is 0.1
                   if MAX=0.8.  Similarly, '-Grid 4' means 32 divisions
                   in each direction, '-Grid 5' means 64 divisions, etc.
               * I see no reason why you would ever use a -Grid size
                   greater than 5 (==> parameter resolution = 0.025).
               * In my limited experiments, there was little appreciable
                   difference in activation maps between '-Grid 3' and
                   '-Grid 5', especially at the group analysis level.
               * The program is somewhat slower as the -Grid size expands.
                   And uses more memory, to hold various matrices for
                   each (a,b) case.

 -NEGcor    = Allows negative correlations to be used; the default
                is that only positive correlations are searched.
                When this option is used, the range of a scanned
                is -am .. +am; otherwise, it is 0 .. +am.
               * Note that when -NEGcor is used, the number of grid
                   points in the a direction doubles to cover the
                   range -am .. 0; this will slow the program down.
 -POScor    = Do not allow negative correlations.  Since this is
                the default, you don't actually need this option.
                [FMRI data doesn't seem to need the modeling  ]
                [of negative correlations, but you never know.]

 -Mfilt mr  = After finding the best fit parameters for each voxel
                in the mask, do a 3D median filter to smooth these
                parameters over a ball with radius 'mr' mm, and then
                use THOSE parameters to compute the final output.
               * If mr < 0, -mr is the ball radius in voxels,
                   instead of millimeters.
                [No median filtering is done unless -Mfilt is used.]

 -CORcut cc = The exact ARMA(1,1) correlation matrix (for a != 0)
                 has no zero entries.  The calculations in this
                program set correlations below a cutoff to zero.
                The default cutoff is 0.003, but can be altered with
                this option.  The only reason to use this option is
                to test the sensitivity of the results to the cutoff.

 -ABfile ff = Instead of estimating the ARMA(a,b) parameters from the
                data, read them from dataset 'ff', which should have
                2 float-valued sub-bricks.
               * Note that the (a,b) values read from this file will
                   be mapped to the nearest ones on the (a,b) grid
                   before being used to solve the generalized least
                   squares problem.  For this reason, you may want
                   to use '-Grid 5' to make the (a,b) grid finer, if
                   you are not using (a,b) values from a -Rvar file.
               * Using this option will skip the slowest part of
                   the program, which is the scan for each voxel
                   to find its optimal (a,b) parameters.
               * One possible application of -ABfile:
                  + save (a,b) using -Rvar in 3dREMLfit
                  + process them in some way (spatial smoothing?)
                  + use these modified values for fitting in 3dREMLfit
                      (you should use '-Grid 5' for such a case)
               * Another possible application of -ABfile:
                  + use (a,b) from -Rvar to speed up a run with -Rglt
                      when you want to run some more contrast tests.
               * Special case:
                     -ABfile =0.7,-0.3
                   e.g., means to use a=0.7 and b=-0.3 for all voxels.
                   The program detects this special case by looking for
                   '=' as the first character of the string 'ff' and
                   looking for a comma in the middle of the string.
                   The values of a and b must be in the range -0.9..+0.9.
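               * A minimal sketch of this special case (the matrix,
                   input, and output prefix names here are hypothetical):
                      3dREMLfit -matrix Fred.xmat.1D -input fred+orig \
                                -ABfile =0.7,-0.3 -Rbuck FredFixedAB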

 -GOFORIT   = 3dREMLfit checks the regression matrix for tiny singular
                values (as 3dDeconvolve does).  If the matrix is too
                close to being rank-deficient, then the program will
                not proceed.  You can use this option to force the
                program to continue past such a failed collinearity
                check, but you must check your results to see if they
                make sense!

---------------------
Miscellaneous Options
---------------------
 -quiet = turn off most progress messages
 -verb  = turn on more progress messages
            (including memory usage reports at various stages)

==========================================================================
===========  Various Notes (as if this help weren't long enough) =========
==========================================================================

------------------
What is ARMA(1,1)?
------------------
* The correlation coefficient r(k) of noise samples k units apart in time,
    for k >= 1, is given by r(k) = lam * a^(k-1)
    where                   lam  = (b+a)(1+a*b)/(1+2*a*b+b*b)
    (N.B.: lam=a when b=0 -- AR(1) noise has r(k)=a^k for k >= 0)
    (N.B.: lam=b when a=0 -- MA(1) noise has r(k)=b for k=1, r(k)=0 for k>1)
    (A worked numeric example of these formulas appears at the end of
     these notes.)
* lam can be bigger or smaller than a, depending on the sign of b:
    b > 0 means lam > a;  b < 0 means lam < a.
* What I call (a,b) here is sometimes called (p,q) in the ARMA literature.
* For a noise model which is the sum of AR(1) and white noise, 0 < lam < a
    (i.e., a > 0  and  -a < b < 0 ).
* The natural range of a and b is -1..+1.  However, unless -NEGcor is
    given, only non-negative values of a will be used, and only values
    of b that give lam > 0 will be allowed.  Also, the program doesn't
    allow values of a or b to be outside the range -0.9..+0.9.
* The program sets up the correlation matrix using the censoring and run
    start information saved in the header of the .xmat.1D matrix file, so
    that the actual correlation matrix used will not always be Toeplitz.
* The 'Rvar' dataset has 4 sub-bricks with variance parameter estimates:
    #0 = a = factor by which correlations decay from lag k to lag k+1
    #1 = b parameter
    #2 = lam (see the formula above) = correlation at lag 1
    #3 = standard deviation of ARMA(1,1) noise in that voxel
* The 'Rbeta' dataset has the beta (model fit) parameter estimates
    computed from the pre-whitened time series data in each voxel,
    as in 3dDeconvolve's '-cbucket' output, in the order in which
    they occur in the matrix.  -addbase and -slibase beta values
    come last in this file.
* The 'Rbuck' dataset has the beta parameters and their statistics
    mixed together, as in 3dDeconvolve's '-bucket' output.
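* A worked example of the lam formula (numbers chosen to come out round):
    for a = 0.6 and b = 0.2,
      lam = (0.2+0.6)*(1+0.6*0.2) / (1+2*0.6*0.2+0.2*0.2)
          = 0.896 / 1.28 = 0.7
    so r(1) = 0.70, r(2) = 0.70*0.6 = 0.42, r(3) = 0.70*0.36 = 0.252, etc.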

-------------------------------------------------------------------
What is REML = REsidual (or REstricted) Maximum Likelihood, anyway?
-------------------------------------------------------------------
* Ordinary Least SQuares (which assumes the noise correlation matrix is
    the identity) is consistent for estimating regression parameters,
    but is not consistent for estimating the noise variance if the
    noise is significantly correlated in time ('serial correlation').
* Maximum likelihood estimation (ML) of the regression parameters and
    variance/correlation together is asymptotically consistent as the
    number of samples goes to infinity, but the variance estimates
    might still have significant bias at a 'reasonable' number of
    data points.
* REML estimates the variance/correlation parameters in a space
    of residuals -- the part of the data left after the model fit
    is subtracted.  The amusing/cunning part is that the model fit
    used to define the residuals is itself the generalized least
    squares fit where the variance/correlation matrix is the one found
    by the REML fit itself.  This feature makes REML estimation nonlinear,
    and the REML equations are usually solved iteratively, to maximize
    the log-likelihood in the restricted space.  In this program, the
    REML function is instead simply optimized over a finite grid of
    the correlation matrix parameters a and b.  The matrices for each
    (a,b) pair are pre-calculated in the setup phase, and then are
    re-used in the voxel loop.  The purpose of this grid-based method
    is speed -- optimizing iteratively to a highly accurate (a,b)
    estimation for each voxel would be very time consuming, and pretty
    pointless.  If you are concerned about the sensitivity of the
    results to the resolution of the (a,b) grid, you can use the
    '-Grid 5' option to increase this resolution and see if your
    activation maps change significantly.  In test cases, the resulting
    betas and statistics have not changed appreciably between '-Grid 3'
    and '-Grid 5'; however, you might want to test this on your own data.
* REML estimates of the variance/correlation parameters are still
    biased, but are generally significantly less biased than ML estimates.
    Also, the regression parameters (betas) should be estimated somewhat
    more accurately (i.e., with smaller variance than OLSQ).  However,
    this effect is generally small in FMRI data, and probably won't affect
    your group results noticeably.
* After the (a,b) parameters are estimated, then the solution to the
    linear system is available via Generalized Least SQuares; that is,
    via pre-whitening using the Choleski factor of the estimated
    variance/covariance matrix.
* In the case with b=0 (that is, AR(1) correlations), and if there are
    no time gaps (no censoring, no run breaks), then it is possible to
    directly estimate the a parameter without using REML.  This program
    does not implement such a special method.  Don't ask why.

----------------
Other Commentary
----------------
* ARMA(1,1) parameters 'a' (AR) and 'b' (MA) are estimated
    only on a discrete grid, for the sake of CPU time.
* Each voxel gets a separate pair of 'a' and 'b' parameters.
    There is no option to estimate global values for 'a' and 'b'
    and use those for all voxels.  Such an approach might be called
    'elementary school statistics' by some people.
* OLSQ = Ordinary Least SQuares; these outputs can be used to compare
         the REML/GLSQ estimations with the simpler OLSQ results
         (and to test this program vs. 3dDeconvolve).
* GLSQ = Generalized Least SQuares = estimated linear system solution
         taking into account the variance/covariance matrix of the noise.
* The '-matrix' file must be from 3dDeconvolve; besides the regression
    matrix itself, the header contains the stimulus labels, the GLTs,
    the censoring information, etc.
* If you don't actually want the OLSQ results from 3dDeconvolve, you can
    make that program stop after the X matrix file is written out by using
    the '-x1D_stop' option, and then running 3dREMLfit; something like this:
      3dDeconvolve -bucket Fred -input1D '1D: 800@0' -TR_1D 2.5 -x1D_stop ...
      3dREMLfit -matrix Fred.xmat.1D -input ...
    In the above example, no 3D dataset is input to 3dDeconvolve, so as to
    avoid the overhead of having to read it in for no reason.  Instead,
    an all-zero time series of the appropriate length (here, 800 points)
    and appropriate TR (here, 2.5 seconds) is given to properly establish
    the size and timing of the matrix file.
* The bucket output datasets are structured to mirror the output
    from 3dDeconvolve with the default options below:
      -nobout -full_first
    Note that you CANNOT use options like '-bout', '-nocout', and
    '-nofull_first' with 3dREMLfit -- the bucket datasets are ordered
    the way they are and you'll just have to live with it.
* All output datasets are in float format.
    Internal calculations are done in double precision.
* If the regression matrix (including any added columns from '-addbase'
    or '-slibase') is rank-deficient (e.g., has collinear columns),
    then the program will print a message something like
      ** ERROR: X matrix has 1 tiny singular value -- collinearity
    At this time, the program will NOT continue past this error.
* Despite my best efforts, this program is somewhat sluggish.
    Partly because it solves many linear systems for each voxel,
    trying to find the 'best' ARMA(1,1) pre-whitening matrix.
    However, a careful choice of algorithms for solving the linear
    systems (QR method, sparse matrix operations, etc.) and some
    other code optimizations should make running 3dREMLfit tolerable.
    Depending on the matrix and the options, you might expect CPU time
    to be about 1..3 times that of the corresponding 3dDeconvolve run.
    (Slower than that if you use '-slibase' and/or '-Grid 5', however.)

-----------------------------------------------------------
To Dream the Impossible Dream, to Write the Uncodeable Code
-----------------------------------------------------------
* Add a -jobs option to use multiple CPUs (or multiple Steves?).
* Add options for -iresp/-sresp for -stim_times?
* Output variance estimates for the betas, to be carried to the
    inter-subject (group) analysis level?
* Prevent Daniel Glen from referring to this program as 3dARMAgeddon.
* Establish incontrovertibly the nature of quantum mechanical observation!

----------------------------------------------------------
* For more information, see the contents of
    http://afni.nimh.nih.gov/pub/dist/doc/misc/3dREMLfit/
  which includes comparisons of 3dDeconvolve and 3dREMLfit
  activations (individual subject and group maps), and an
  outline of the mathematics implemented in this program.
----------------------------------------------------------

============================
== RWCox - July-Sept 2008 ==
============================

++ Compile date = Mar 13 2009




AFNI program: 3dROIstats
Usage: 3dROIstats -mask[n] mset [options] datasets

   Display statistics over masked regions.  The default statistic
   is the mean.

   There will be one line of output for every sub-brick of every
   input dataset.  Across each line will be every statistic for
   every mask value.  For instance, if there are 3 mask values (1,2,3),
   then the columns Mean_1, Mean_2 and Mean_3 will refer to the
   means across each mask value, respectively.  If 4 statistics are
   requested, then there will be 12 stats displayed on each line
   (4 for each mask region), besides the file and sub-brick number.
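
   As a rough sketch, with the default mean and 3 mask values, the output
   might be laid out like this (spacing and values hypothetical; the
   sub-brick label appears next to its index unless -nobriklab is used):

      File               Sub-brick   Mean_1    Mean_2    Mean_3
      func_slim+orig     0[label]    0.31      1.27      -0.56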

Examples:

   3dROIstats -mask mask+orig. 'func_slim+orig[1,3,5]'

   3dROIstats -minmax -sigma -mask mask+orig. 'func_slim+orig[1,3,5]'

Options:
  -mask[n] mset Means to use the dataset 'mset' as a mask:
                 If n is present, it specifies which sub-brick
                 in mset to use a la 3dcalc.  Note: do not include
                 the brackets if specifying a sub-brick; they are
                 there only to indicate that 'n' is optional.  If n
                 is not present, sub-brick 0 is assumed.
                 Voxels with the same nonzero values in 'mset'
                 will be statisticized from 'dataset'.  This will
                 be repeated for all the different values in mset.
                 I.e. all of the 1s in mset are one ROI, as are all
                 of the 2s, etc.
                 Note that the mask dataset and the input dataset
                 must have the same number of voxels and that mset
                 must be BYTE or SHORT (i.e., float masks won't work
                 without the -mask_f2short option).
                 
  -mask_f2short  Tells the program to convert a float mask to short
                 integers, by simple rounding.  This option is needed
                 when the mask dataset is a 1D file, for instance
                 (since 1D files are read as floats).

                 Be careful with this, it may not be appropriate to do!

  -numROI n     Forces the assumption that the mask dataset's ROIs are
                 denoted by 1 to n inclusive.  Normally, the program
                 figures out the ROIs on its own.  This option is 
                 useful if a) you are certain that the mask dataset
                 has no values outside the range [0 n], b) there may 
                 be some ROIs missing between [1 n] in the mask data-
                 set and c) you want those columns in the output any-
                 way so the output lines up with the output from other
                 invocations of 3dROIstats.  Confused?  Then don't use
                 this option!
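
                 For example, to keep columns aligned across two masks
                 that might each be missing some ROIs in 1..5 (file
                 names here are hypothetical):
                  3dROIstats -numROI 5 -mask maskA+orig func+orig > statsA.txt
                  3dROIstats -numROI 5 -mask maskB+orig func+orig > statsB.txt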

  -debug        Print out debugging information
  -quiet        Do not print out labels for columns or rows
  -nobriklab    Do not print the sub-brick label next to its index
  -1Dformat     Output results in a 1D format that includes 
                commented labels

The following options specify what stats are computed.  By default
the mean is always computed.

  -nzmean       Compute the mean using only non-zero voxels; the
                 regular mean, by contrast, includes zero voxels
  -nzvoxels     Compute the number of non-zero voxels
  -minmax       Compute the min/max of all voxels
  -nzminmax     Compute the min/max of non-zero voxels
  -sigma        Means to compute the standard deviation as well
                 as the mean.
  -median       Compute the median of all voxels.
  -nzmedian     Compute the median of non-zero voxels.
  -summary      Only output a summary line with the grand mean across all
                 sub-bricks in the input dataset. 

The output is printed to stdout (the terminal), and can be
saved to a file using the usual redirection operation '>'.

N.B.: The input datasets and the mask dataset can use sub-brick
      selectors, as detailed in the output of 3dcalc -help.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dRegAna
++ 3dRegAna: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs multiple linear regression analysis.          

Usage: 
3dRegAna 
-rows n                             number of input datasets          
-cols m                             number of X variables             
-xydata X11 X12 ... X1m filename    X variables and Y observations    
  .                                   .                               
  .                                   .                               
  .                                   .                               
-xydata Xn1 Xn2 ... Xnm filename    X variables and Y observations    
                                                                      
-model i1 ... iq : j1 ... jr   definition of linear regression model; 
                                 reduced model:                       
                                   Y = f(Xj1,...,Xjr)                 
                                 full model:                          
                                   Y = f(Xj1,...,Xjr,Xi1,...,Xiq)     
                                                                      
[-diskspace]       print out disk space required for program execution
[-workmem mega]    number of megabytes of RAM to use for statistical  
                   workspace  (default = 750 (was 12))                
[-rmsmin r]        r = minimum rms error to reject constant model     
[-fdisp fval]      display (to screen) results for those voxels       
                   whose F-statistic is > fval                        
                                                                      
[-flof alpha]      alpha = minimum p value for F due to lack of fit   
                                                                      
                                                                      
The following commands generate individual AFNI 2 sub-brick datasets: 
                                                                      
[-fcoef k prefixname]        estimate of kth regression coefficient   
                               along with F-test for the regression   
                               is written to AFNI `fift' dataset      
[-rcoef k prefixname]        estimate of kth regression coefficient   
                               along with coef. of mult. deter. R^2   
                               is written to AFNI `fith' dataset      
[-tcoef k prefixname]        estimate of kth regression coefficient   
                               along with t-test for the coefficient  
                               is written to AFNI `fitt' dataset      
                                                                      
                                                                      
The following commands generate one AFNI 'bucket' type dataset:       
                                                                      
[-bucket n prefixname]     create one AFNI 'bucket' dataset having    
                             n sub-bricks; n=0 creates default output;
                             output 'bucket' is written to prefixname 
The mth sub-brick will contain:                                       
[-brick m coef k label]    kth parameter regression coefficient       
[-brick m fstat label]     F-stat for significance of regression      
[-brick m rstat label]     coefficient of multiple determination R^2  
[-brick m tstat k label]   t-stat for kth regression coefficient      

[-datum DATUM]     write the output in DATUM format. 
                   Choose from short (default) or float.


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -xydata command. That is, if an input dataset contains
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -xydata 2.17 4.59 7.18  'fred+orig[3]'                          
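
      As a minimal sketch, regressing one X variable plus a constant
      across 3 input datasets, assuming (as in the usual 3dRegAna model
      notation) that index 0 denotes the constant term (file names and
      X values here are hypothetical):

      3dRegAna -rows 3 -cols 1                \
               -xydata 21.4 'subjA+orig[0]'   \
               -xydata 18.9 'subjB+orig[0]'   \
               -xydata 25.0 'subjC+orig[0]'   \
               -model 1 : 0                   \
               -fcoef 1 x1_effect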

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dRowFillin
Usage: 3dRowFillin [options] dataset
Extracts 1D rows in the given direction from a 3D dataset,
searches for blank (zero) regions, and fills them in if
the blank region isn't too large and it is flanked by
the same value on either edge.  For example:
     input row = 0 1 2 0 0 2 3 0 3 0 0 4 0
    output row = 0 1 2 2 2 2 3 3 3 0 0 4 0

OPTIONS:
 -maxgap N  = set the maximum length of a blank region that
                will be filled in to 'N' [default=9].
 -dir D     = set the direction of fill to 'D', which can
                be one of the following:
                  A-P, P-A, I-S, S-I, L-R, R-L, x, y, z
                The first 6 are anatomical directions;
                 the last 3 refer to the dataset's
                 internal axes [no default value].
 -prefix P  = set the prefix to 'P' for the output dataset.

N.B.: If the input dataset has more than one sub-brick,
      only the first one will be processed.

The intention of this program is to let you fill in slice gaps
made when drawing ROIs with the 'Draw Dataset' plugin.  If you
draw every 5th coronal slice, say, then you could fill in using
  3dRowFillin -maxgap 4 -dir A-P -prefix fredfill fred+orig


++ Compile date = Mar 13 2009




AFNI program: 3dSkullStrip

Usage: A program to extract the brain from surrounding
  tissue in MRI T1-weighted images. The fully automated
  process consists of three steps:
  1- Preprocessing of volume to remove gross spatial image 
  non-uniformity artifacts and reposition the brain in
  a reasonable manner for convenience.
  2- Expand a spherical surface iteratively until it envelops
  the brain. This is a modified version of the BET algorithm:
     Fast robust automated brain extraction, 
      by Stephen M. Smith, HBM 2002 v 17:3 pp 143-155
    Modifications include the use of:
     . outer brain surface
     . expansion driven by data inside and outside the surface
     . avoidance of eyes and ventricles
     . a set of operations to avoid the clipping of certain brain
       areas and reduce leakage into the skull in heavily shaded
       data
     . two additional processing stages to ensure convergence and
       reduction of clipped areas.
     . use of 3d edge detection, see Deriche and Monga references
       in 3dedge3 -help.
  3- The creation of various masks and surfaces modeling brain
     and portions of the skull

  Common examples of usage:
  -------------------------
  o 3dSkullStrip -input VOL -prefix VOL_PREFIX
     Vanilla mode, should work for most datasets.
  o 3dSkullStrip -input VOL -prefix VOL_PREFIX -push_to_edge
     Adds an aggressive push to brain edges. Use this option
     when chunks of gray matter are not included. This option
     might cause the mask to leak into non-brain areas.
  o 3dSkullStrip -input VOL -surface_coil -prefix VOL_PREFIX -monkey
     Vanilla mode, for use with monkey data.
  o 3dSkullStrip -input VOL -prefix VOL_PREFIX -ld 30
     Use a denser mesh, in the cases where you have lots of 
     csf between gyri. Also helps when some of the brain is clipped
     close to regions of high curvature.

  Tips:
  -----
     I ran the program with the default parameters on 200+ datasets.
     The results were quite good in all but a couple of instances, here
     are some tips on fixing trouble spots:

     Clipping in frontal areas, close to the eye balls:
        + Try -push_to_edge option first.
          Can also try -no_avoid_eyes option.
     Clipping in general:
        + Try -push_to_edge option first.
          Can also use lower -shrink_fac, start with 0.5 then 0.4
     Problems down below:
        + Piece of cerebellum missing, reduce -shrink_fac_bot_lim 
          from default value.
        + Leakage in lower areas, increase -shrink_fac_bot_lim 
          from default value.
     Some lobules are not included:
        + Use a denser mesh. Start with -ld 30. If that still fails,
        try even higher density (like -ld 50) and increase iterations 
        (say to -niter 750). 
        Expect the program to take much longer in that case.
        + Instead of using denser meshes, you could try blurring the data 
        before skull stripping. Something like -blur_fwhm 2 did
        wonders for some of my data with the default options of
        3dSkullStrip. Blurring is a lot faster than increasing mesh density.
        + Also use a smaller -shrink_fac if you have lots of CSF between
        gyri.
     Massive chunks missing:
        + If brain has very large ventricles and lots of CSF between gyri,
        the ventricles will keep attracting the surface inwards. 
        This often happens with older brains. In such 
        cases, use the -visual option to see what is happening.
        For example, the options below did the trick in various
        instances. 
            -blur_fwhm 2 -use_skull  
        or for more stubborn cases increase csf avoidance with this cocktail
            -blur_fwhm 2 -use_skull -avoid_vent -avoid_vent -init_radius 75 

  Eye Candy Mode: 
  ---------------
  You can run BrainWarp and have it send successive iterations
 to SUMA and AFNI. This is very helpful in following the
 progression of the algorithm and determining the source
 of trouble, if any.
  Example:
     afni -niml -yesplugouts &
     suma -niml &
     3dSkullStrip -input Anat+orig -o_ply anat_brain -visual

  Help section for the intrepid:
  ------------------------------
  3dSkullStrip  < -input VOL >
             [< -o_TYPE PREFIX >] [< -prefix VOL_PREFIX >] 
             [< -spatnorm >] [< -no_spatnorm >] [< -write_spatnorm >]
             [< -niter N_ITER >] [< -ld LD >] 
             [< -shrink_fac SF >] [< -var_shrink_fac >] 
             [< -no_var_shrink_fac >] [< -shrink_fac_bot_lim SFBL >]
             [< -pushout >] [< -no_pushout >] [< -exp_frac FRAC >]
             [< -touchup >] [< -no_touchup >]
             [< -fill_hole R >] [< -NN_smooth NN_SM >]
             [< -smooth_final SM >] [< -avoid_vent >] [< -no_avoid_vent >]
             [< -use_skull >] [< -no_use_skull >] 
             [< -avoid_eyes >] [< -no_avoid_eyes >] 
             [< -use_edge >] [< -no_use_edge >] 
             [< -push_to_edge >] [<-no_push_to_edge>]
             [< -perc_int PERC_INT >] 
             [< -max_inter_iter MII >] [-mask_vol | -orig_vol | -norm_vol]
             [< -debug DBG >] [< -node_debug NODE_DBG >]
             [< -demo_pause >]
             [< -monkey >] [< -marmoset >] [<-rat>]

  NOTE: Please report bugs and strange failures
        to saadz@mail.nih.gov

  Mandatory parameters:
     -input VOL: Input AFNI (or AFNI readable) volume.
                 

  Optional Parameters:
     -monkey: the brain of a monkey.
     -marmoset: the brain of a marmoset. 
                this one was tested on one dataset
                and may not work with non default
                options. Check your results!
     -rat: the brain of a rat.
           By default, no_touchup is used with the rat.
     -surface_coil: Data acquired with a surface coil.
     -o_TYPE PREFIX: prefix of output surface.
        where TYPE specifies the format of the surface
        and PREFIX is, well, the prefix.
        TYPE is one of: fs, 1d (or vec), sf, ply.
        More on that below.
     -skulls: Output surface models of the skull.
     -4Tom:   The output surfaces are named based
             on PREFIX following -o_TYPE option below.
     -prefix VOL_PREFIX: prefix of output volume.
        If not specified, the prefix is the same
        as the one used with -o_TYPE.
        The output volume is skull stripped version
        of the input volume. In the earlier version
        of the program, a mask volume was written out.
        You can still get that mask volume instead of the
        skull-stripped volume with the option -mask_vol . 
        NOTE: In the default setting, the output volume does not 
              have values identical to those in the input. 
              In particular, the range might be larger 
              and some low-intensity values are set to 0.
              If you insist on having the same range of values as in
              the input, then either use option -orig_vol, or run:
         3dcalc -nscale -a VOL+VIEW -b VOL_PREFIX+VIEW \
                -expr 'a*step(b)' -prefix VOL_SAME_RANGE
              With the command above, you can preserve the range
              of values of the input but some low-intensity voxels would
              still be masked. If you want to preserve them, then use
              -mask_vol in the 3dSkullStrip command that would produce 
              VOL_MASK_PREFIX+VIEW. Then run 3dcalc masking with voxels
              inside the brain surface envelope:
         3dcalc -nscale -a VOL+VIEW -b VOL_MASK_PREFIX+VIEW \
                -expr 'a*step(b-3.01)' -prefix VOL_SAME_RANGE_KEEP_LOW
     -norm_vol: Output a masked and somewhat intensity normalized and 
                thresholded version of the input. This is the default,
                and you can use -orig_vol to override it.
     -orig_vol: Output a masked version of the input AND do not modify
                the values inside the brain as -norm_vol would.
     -mask_vol: Output a mask volume instead of a skull-stripped
                volume.
                 The mask volume contains:
                 0: Voxel outside surface
                 1: Voxel just outside the surface. This means the voxel
                    center is outside the surface but inside the 
                    bounding box of a triangle in the mesh. 
                 2: Voxel intersects the surface (a triangle), but center
                    lies outside.
                 3: Voxel contains a surface node.
                 4: Voxel intersects the surface (a triangle), center lies
                    inside surface. 
                 5: Voxel just inside the surface. This means the voxel
                    center is inside the surface and inside the 
                    bounding box of a triangle in the mesh. 
                 6: Voxel inside the surface. 
     -spatnorm: (Default) Perform spatial normalization first.
                 This is a necessary step unless the volume has
                 been 'spatnormed' already.
     -no_spatnorm: Do not perform spatial normalization.
                   Use this option only when the volume 
                   has been run through the 'spatnorm' process
     -spatnorm_dxyz DXYZ: Use DXYZ for the spatial resolution of the
                          spatially normalized volume. The default 
                          is the lowest of all three dimensions.
                          For human brains, use DXYZ of 1.0, for
                          primate brain, use the default setting.
     -write_spatnorm: Write the 'spatnormed' volume to disk.
     -niter N_ITER: Number of iterations. Default is 250.
        For denser meshes, you need more iterations;
        N_ITER of 750 works for LD of 50.
     -ld LD: Parameter to control the density of the surface.
             Default is 20 if -no_use_edge is used,
             30 with -use_edge. See CreateIcosahedron -help
             for details on this option.
     -shrink_fac SF: Parameter controlling the brain vs non-brain
             intensity threshold (tb). Default is 0.6.
              tb = (Imax - t2) SF + t2 
              where t2 is the 2nd percentile value and Imax is the local
             maximum, limited to the median intensity value.
             For more information on tb, t2, etc. read the BET paper
             mentioned above. Note that in 3dSkullStrip, SF can vary across 
             iterations and might be automatically clipped in certain areas.
             SF can vary between 0 and 1.
              0: Intensities < median intensity are considered non-brain
             1: Intensities < t2 are considered non-brain
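              As a worked example (intensity values hypothetical): with
              t2 = 40, Imax = 140, and the default SF = 0.6, we get
              tb = (140 - 40)*0.6 + 40 = 100, so intensities below 100
              would be treated as non-brain at that node.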
     -var_shrink_fac: Vary the shrink factor with the number of
             iterations. This reduces the likelihood of a surface
             getting stuck on large pools of CSF before reaching
             the outer surface of the brain. (Default)
     -no_var_shrink_fac: Do not use var_shrink_fac.
     -shrink_fac_bot_lim SFBL: Do not allow the varying SF to go
              below SFBL.  Default 0.65, 0.4 when edge detection is used. 
             This option helps reduce potential for leakage below 
             the cerebellum.
             In certain cases where you have severe non-uniformity resulting
             in low signal towards the bottom of the brain, you will need to
             reduce this parameter.
     -pushout: Consider values above each node in addition to values
               below the node when deciding on expansion. (Default)
     -no_pushout: Do not use -pushout.
     -exp_frac FRAC: Speed of expansion (see BET paper). Default is 0.1.
     -touchup: Perform touchup operations at end to include
               areas not covered by surface expansion. 
               Use -touchup -touchup for aggressive makeup.
               (Default is -touchup)
     -no_touchup: Do not use -touchup
     -fill_hole R: Fill small holes that can result from small surface
                   intersections caused by the touchup operation.
                   R is the maximum number of pixels on the side of a hole
                   that can be filled. Big holes are not filled.
                   If you use -touchup, the default R is 10. Otherwise 
                   the default is 0.
                   This is a less than elegant solution to the small
                   intersections which are usually eliminated
                   automatically. 
     -NN_smooth NN_SM: Perform Nearest Neighbor coordinate interpolation
                       every few iterations. Default is 72
     -smooth_final SM: Perform final surface smoothing after all iterations.
                       Default is 20 smoothing iterations.
                       Smoothing is done using Taubin's method, 
                       see SurfSmooth -help for detail.
     -avoid_vent: avoid ventricles. Default.
                   Use this option twice to make the avoidance more
                   aggressive. That is at times needed with old brains.
     -no_avoid_vent: Do not use -avoid_vent.
     -init_radius RAD: Use RAD for the initial sphere radius.
                       For the automatic setting, there is an
                       upper limit of 100mm for humans.
                       For older brains with lots of CSF, you
                       might benefit from forcing the radius 
                       to something like 75mm
     -avoid_eyes: avoid eyes. Default
     -no_avoid_eyes: Do not use -avoid_eyes.
     -use_edge: Use edge detection to reduce leakage into meninges and eyes.
                Default.
      -no_use_edge: Do not use edges.
     -push_to_edge: Perform aggressive push to edge at the end.
                    This option might cause leakage.
     -no_push_to_edge: (Default).
     -use_skull: Use outer skull to limit expansion of surface into
                 the skull due to very strong shading artifacts.
                 This option is buggy at the moment, use it only 
                 if you have leakage into skull.
     -no_use_skull: Do not use -use_skull (Default).
     -send_no_skull: Do not send the skull surface to SUMA if you are
                     using  -talk_suma
     -perc_int PERC_INT: Percentage of segments allowed to intersect
                         surface. Ideally this should be 0 (Default). 
                          However, a few surfaces might have small stubborn
                          intersections that produce a few holes.
                          PERC_INT should be a small number, typically
                          between 0 and 0.1.  A -1 means do not do
                          any testing for intersection.
     -max_inter_iter N_II: Number of iterations to remove intersection
                           problems. With each iteration, the program
                           automatically increases the amount of smoothing
                           to get rid of intersections. Default is 4
     -blur_fwhm FWHM: Blur dset after spatial normalization.
                       Recommended when you have lots of CSF in the brain
                       and when you have protruding (finger-like) gyri.
                       The recommended value is 2..4. 
     -interactive: Make the program stop at various stages in the 
                   segmentation process for a prompt from the user
                   to continue or skip that stage of processing.
                   This option is best used in conjunction with options
                   -talk_suma and -feed_afni
     -demo_pause: Pause at various step in the process to facilitate
                  interactive demo while 3dSkullStrip is communicating
                  with AFNI and SUMA. See 'Eye Candy' mode below and
                  -talk_suma option. 

 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
             In addition to outSurf, you need to specify
             the name of the parent surface for the patch,
             using the -ipar_TYPE option.
             This option is only for ConvertSurface. 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to
             YES, the default encoding is ASCII; otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.

  SUMA communication options:
      -talk_suma: Send progress with each iteration to SUMA.
      -refresh_rate rps: Maximum number of updates to SUMA per second.
                         The default is the maximum speed.
      -send_kth kth: Send the kth element to SUMA (default is 1).
                     This allows you to cut down on the number of elements
                     being sent to SUMA.
       -sh SumaHost: Name (or IP address) of the computer running SUMA.
                       This parameter is optional; the default is 127.0.0.1 
      -ni_text: Use NI_TEXT_MODE for data transmission.
      -ni_binary: Use NI_BINARY_MODE for data transmission.
                  (default is ni_binary).
      -feed_afni: Send updates to AFNI via SUMA's talk.


     -visual: Equivalent to using -talk_suma -feed_afni -send_kth 5

     -debug DBG: debug levels of 0 (default), 1, 2, 3.
        This is no Rick Reynolds debug, which is oft nicer
        than the results, but it will do.
     -node_debug NODE_DBG: Output lots of parameters for node
                         NODE_DBG for each iteration.
     The next 3 options are for specifying surface coordinates
     to keep the program from having to recompute them.
     The options are only useful for saving time during debugging.
     -brain_contour_xyz_file BRAIN_CONTOUR_XYZ.1D
     -brain_hull_xyz_file BRAIN_HULL_XYZ.1D
     -skull_outer_xyz_file SKULL_OUTER_XYZ.1D

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: 3dStatClust
++ 3dStatClust: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
Perform agglomerative hierarchical clustering for user specified 
parameter sub-bricks, for all voxels whose threshold statistic   
is above a user specified value.

Usage: 3dStatClust options datasets 
where the options are:
-prefix pname    = Use 'pname' for the output dataset prefix name.
  OR                 [default='SC']
-output pname

-session dir     = Use 'dir' for the output dataset session directory.
                     [default='./'=current working directory]
-verb            = Print out verbose output as the program proceeds.

Options for calculating distance between parameter vectors: 
   -dist_euc        = Calculate Euclidean distance between parameters 
   -dist_ind        = Statistical distance for independent parameters 
   -dist_cor        = Statistical distance for correlated parameters 
The default option is:  Euclidean distance. 

-thresh t tname  = Use threshold statistic from file tname. 
                   Only voxels whose threshold statistic is greater 
                   than t in absolute value will be considered. 
                     [If file tname contains more than 1 sub-brick, 
                     the threshold stat. sub-brick must be specified!]
-nclust n        = This specifies the maximum number of clusters for 
                   output (= number of sub-bricks in output dataset).

Command line arguments after the above are taken as parameter datasets.
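
For example, a minimal sketch (dataset names, sub-brick indices, and the
threshold value here are hypothetical):

   3dStatClust -prefix SC5 -nclust 5         \
               -thresh 4.0 'stats+orig[2]'   \
               'stats+orig[0]' 'stats+orig[1]'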


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dSurf2Vol

3dSurf2Vol - map data from a surface domain to an AFNI volume domain

  usage: 3dSurf2Vol [options] -spec SPEC_FILE -surf_A SURF_NAME \
             -grid_parent AFNI_DSET -sv SURF_VOL \
             -map_func MAP_FUNC -prefix OUTPUT_DSET

    This program is meant to take as input a pair of surfaces,
    optionally including surface data, and an AFNI grid parent
    dataset, and to output a new AFNI dataset consisting of the
    surface data mapped to the dataset grid space.  The mapping
    function determines how to map the surface values from many
    nodes to a single voxel.

    Surfaces (from the spec file) are specified using '-surf_A'
    (and '-surf_B', if a second surface is input).  If two
    surfaces are input, then the computed segments over node
    pairs will be in the direction from surface A to surface B.

    The basic form of the algorithm is:

       o for each node pair (or single node)
           o form a segment based on the xyz node coordinates,
             adjusted by any '-f_pX_XX' options
           o divide the segment up into N steps, according to 
             the '-f_steps' option
           o for each segment point
               o if the point is outside the space of the output
                 dataset, skip it
               o locate the voxel in the output dataset which
                 corresponds to this segment point
               o if the '-cmask' option was given, and the voxel
                 is outside the implied mask, skip it
               o if the '-f_index' option is by voxel, and this
                 voxel has already been considered, skip it
               o insert the surface node value, according to the
                 user-specified '-map_func' option

  Surface Coordinates:

      Surface coordinates are assumed to be in the Dicom
      orientation.  This information may come from the option
      pair of '-spec' and '-sv', with which the user provides
      the name of the SPEC FILE and the SURFACE VOLUME, along
      with '-surf_A' and optionally '-surf_B', used to specify
      actual surfaces by name.  Alternatively, the surface
      coordinates may come from the '-surf_xyz_1D' option.
      See these option descriptions below.

      Note that the user must provide either the three options
      '-spec', '-sv' and '-surf_A', or the single option,
      '-surf_xyz_1D'.

  Surface Data:

      Surface domain data can be input via the '-sdata_1D'
      option.  In such a case, the data is with respect to the
      input surface.  The first column of the sdata_1D file
      should be a node index, and following columns are that
      node's data.  See the '-sdata_1D' option for more info.

      If the surfaces have V values per node (pair), then the
      resulting AFNI dataset will have V sub-bricks (unless the
      user applies the '-data_expr' option).

  Mapping Functions:

      Mapping functions exist because a single volume voxel may
      be occupied by multiple surface nodes or segment points.
      Depending on how dense the surface mesh is, the number of
      steps provided by the '-f_steps' option, and the indexing
      type from '-f_index', even a voxel which is only 1 cubic
      mm in volume may have quite a few contributing points.

      The mapping function defines how multiple surface values
      are combined to get a single result in each voxel.  For
      example, the 'max' function will take the maximum of all
      surface values contributing to each given voxel.

      Current mapping functions are listed under the '-map_func'
      option, below.

------------------------------------------------------------

  examples:

    1. Map a single surface to an anatomical volume domain,
       creating a simple mask of the surface.  The output
       dataset will be fred_surf+orig, and the orientation and
       grid spacing will follow that of the grid parent.  The
       output voxels will be 1 where the surface exists, and 0
       elsewhere.

    3dSurf2Vol                       \
       -spec         fred.spec                \
       -surf_A       pial                     \
       -sv           fred_anat+orig           \
       -grid_parent  fred_anat+orig           \
       -map_func     mask                     \
       -prefix       fred_surf

    2. Map the cortical grey ribbon (between the white matter
       surface and the pial surface) to an AFNI volume, where
       the resulting volume is restricted to the mask implied by
       the -cmask option.

       Surface data will come from the file sdata_10.1D, which
       has 10 values per node, and lists only a portion of the
       entire set of surface nodes.  Each node pair will form
       a segment of 15 equally spaced points, the values from
       which will be applied to the output dataset according to
       the 'ave' filter.  Since the index is over points, each
       of the 15 points will have its value applied to the
       appropriate voxel, even multiple times.  This weights the
       resulting average by the fraction of each segment that
       occupies a given voxel.

       The output dataset will have 10 sub-bricks, according to
       the 10 values per node index in sdata_10.1D.

    3dSurf2Vol                       \
       -spec         fred.spec                               \
       -surf_A       smoothwm                                \
       -surf_B       pial                                    \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_func+orig[0]'                      \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -sdata_1D     sdata_10.1D                             \
       -map_func     ave                                     \
       -f_steps      15                                      \
       -f_index      points                                  \
       -prefix       fred_surf_ave

    3. The inputs in this example are identical to those in
       example 2, including the surface dataset, sdata_10.1D.
       Again, the output dataset will have 10 sub-bricks.

       The surface values will be applied via the 'max_abs'
       filter, with the intention of assigning to each voxel the
       node value with the most significance.  Here, the index
       method does not matter, so it is left as the default,
       'voxel'.

       In this example, each node pair segment will be extended
       by 20% into the white matter, and by 10% outside of the
       grey matter, generating a "thicker" result.

    3dSurf2Vol                       \
       -spec         fred.spec                               \
       -surf_A       smoothwm                                \
       -surf_B       pial                                    \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_func+orig[0]'                      \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -sdata_1D     sdata_10.1D                             \
       -map_func     max_abs                                 \
       -f_steps      15                                      \
       -f_p1_fr      -0.2                                    \
       -f_pn_fr       0.1                                    \
       -prefix       fred_surf_max_abs

    4. This is similar to example 2.  Here, the surface nodes
       (coordinates) come from 'surf_coords_2.1D'.  But these
       coordinates do not happen to be in Dicom orientation,
       they are in the same orientation as the grid parent, so
       the '-sxyz_orient_as_gpar' option is applied.

       Even though the data comes from 'sdata_10.1D', the output
       AFNI dataset will only have 1 sub-brick.  That is because
       of the '-data_expr' option.  Here, each applied surface
       value will be the average of the sines of the first 3
       data values (columns of sdata_10.1D).

    3dSurf2Vol                       \
       -surf_xyz_1D  surf_coords_2.1D                        \
       -sxyz_orient_as_gpar                                  \
       -grid_parent 'fred_func+orig[0]'                      \
       -sdata_1D     sdata_10.1D                             \
       -data_expr   '(sin(a)+sin(b)+sin(c))/3'               \
       -map_func     ave                                     \
       -f_steps      15                                      \
       -f_index      points                                  \
       -prefix       fred_surf_ave_sine

    5. In this example, voxels will get the maximum value from
       column 3 of sdata_10.1D (as usual, column 0 is used for
       node indices).  The output dataset will have 1 sub-brick.

       Here, the output dataset is forced to be of type 'short',
       regardless of what the grid parent is.  Also, there will
       be no scaling factor applied.

       To track the numbers for surface node #1234, the '-dnode'
       option has been used, along with '-debug'.  Additionally,
       '-dvoxel' is used to track the results for voxel #6789.

    3dSurf2Vol                       \
       -spec         fred.spec                               \
       -surf_A       smoothwm                                \
       -surf_B       pial                                    \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_func+orig[0]'                      \
       -sdata_1D     sdata_10.1D'[0,3]'                      \
       -map_func     max                                     \
       -f_steps      15                                      \
       -datum        short                                   \
       -noscale                                              \
       -debug        2                                       \
       -dnode        1234                                    \
       -dvoxel       6789                                    \
       -prefix       fred_surf_max

------------------------------------------------------------

  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE        : SUMA spec file

        e.g. -spec fred.spec

        The surface specification file contains the list of
        mappable surfaces that are used.

        See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.

        Note: this option, along with '-sv', may be replaced
              by the '-surf_xyz_1D' option.

    -surf_A SURF_NAME      : specify surface A (from spec file)
    -surf_B SURF_NAME      : specify surface B (from spec file)

        e.g. -surf_A smoothwm
        e.g. -surf_A lh.smoothwm
        e.g. -surf_B lh.pial

        This parameter is used to tell the program which surfaces
        to use.  The '-surf_A' parameter is required, but the
        '-surf_B' parameter is an option.

        The surface names must uniquely match those in the spec
        file, though a sub-string match is good enough.  The
        surface names are compared with the names of the surface
        node coordinate files.

        For instance, given a spec file that has only the left
        hemisphere in it, 'pial' should produce a unique match
        with lh.pial.asc.  But if both hemispheres are included,
        then 'pial' would not be unique (matching rh.pial.asc,
        also).  In that case, 'lh.pial' would be better.

    -sv SURFACE_VOLUME     : AFNI dataset

        e.g. -sv fred_anat+orig

        This is the AFNI dataset that the surface is mapped to.
        This dataset is used for the initial surface node to xyz
        coordinate mapping, in the Dicom orientation.

        Note: this option, along with '-spec', may be replaced
              by the '-surf_xyz_1D' option.

    -surf_xyz_1D SXYZ_NODE_FILE : 1D coordinate file

        e.g. -surf_xyz_1D my_surf_coords.1D

        This ascii file contains a list of xyz coordinates to be
        considered as a surface, or 2 sets of xyz coordinates to
        be considered as a surface pair.  As usual, these points
        are assumed to be in Dicom orientation.  Another option
        for coordinate orientation is to use that of the grid
        parent dataset.  See '-sxyz_orient_as_gpar' for details.

        This option is an alternative to the pair of options, 
        '-spec' and '-sv'.

        The number of rows of the file should equal the number
        of nodes on each surface.  The number of columns should
        be either 3 for a single surface, or 6 for two surfaces.
        
        sample line of an input file (one surface):
        
        11.970287  2.850751  90.896111
        
        sample line of an input file (two surfaces):
        
        11.97  2.85  90.90    12.97  2.63  91.45
        

    -grid_parent AFNI_DSET : AFNI dataset

        e.g. -grid_parent fred_function+orig

        This dataset is used as a grid and orientation master
        for the output AFNI dataset.

    -map_func MAP_FUNC     : surface to dataset function

        e.g. -map_func max
        e.g. -map_func mask -f_steps 20

        This function applies to the case where multiple data
        points get mapped to a single voxel, which is expected
        since surfaces tend to have a much higher resolution
        than AFNI volumes.  In the general case data points come
        from each point on each partitioned line segment, with
        one segment per node pair.  Note that these segments may
        have length zero, such as when only a single surface is
        input.

        See "Mapping Functions" above, for more information.

        The current mapping function for one surface is:

          mask   : For each xyz location, set the corresponding
                   voxel to 1.

        The current mapping functions for two surfaces are as
        follows.  These descriptions are per output voxel, and
        over the values of all points mapped to a given voxel.

          mask2  : if any points are mapped to the voxel, set
                   the voxel value to 1

          ave    : average all values

          count  : count the number of mapped data points

          min    : find the minimum value from all mapped points

          max    : find the maximum value from all mapped points

          max_abs: find the number with maximum absolute value
                   (the resulting value will retain its sign)

    -prefix OUTPUT_PREFIX  : prefix for the output dataset

        e.g. -prefix anat_surf_mask

        This is used to specify the prefix of the resulting AFNI
        dataset.
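
        As a minimal end-to-end sketch (file names invented, and
        assuming a spec file input as described above), the options
        might combine as:

            3dSurf2Vol -spec fred.spec -surf_A smoothwm        \
                       -sv fred_anat+orig                      \
                       -grid_parent fred_anat+orig             \
                       -map_func mask -prefix fred_surf_mask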

  ------------------------------
  SUB-SURFACE DATA FILE OPTIONS:

    -sdata_1D SURF_DATA.1D : 1D sub-surface file, with data

        e.g. -sdata_1D roi3.1D

        This is used to specify a 1D file, which contains
        surface indices and data.  The indices refer to the
        surface(s) read from the spec file.
        
        The format of this data file is a surface index and a
        list of data values on each row.  To be a valid 1D file,
        each row must have the same number of columns.
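
        For example, a hypothetical 3-column file (node index in
        column 0, plus two data values per node) might contain
        rows such as:

        0  3.2  4.7
        4  3.1  4.9
        9  3.0  5.2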

  ------------------------------
  OPTIONS SPECIFIC TO SEGMENT SELECTION:

    (see "The basic form of the algorithm" for more details)

    -f_steps NUM_STEPS     : partition segments

        e.g. -f_steps 10
        default: -f_steps 2   (or 1, the number of input surfaces)

        This option specifies the number of points to divide
        each line segment into, before mapping the points to the
        AFNI volume domain.  The default is the number of input
        surfaces (usually 2).  The default operation is to have
        the segment endpoints be the actual surface nodes,
        unless they are altered with the -f_pX_XX options.

    -f_index TYPE          : index by points or voxels

        e.g. -f_index points
        e.g. -f_index voxels
        default: -f_index voxels

        Along a single segment, the default operation is to
        apply only those points mapping to a new voxel.  The
        effect of the default is that a given voxel will have
        at most one value applied per node pair.

        If the user applies this option with 'points' or 'nodes'
        as the argument, then every point along the segment will
        be applied.  This may be preferred if, for example, the
        user wishes to have the average weighted by the number
        of points occupying a voxel, not just the number of node
        pair segments.
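
        For example (other options omitted), to partition each
        segment into 10 steps and count every point, not just
        those entering a new voxel:

            3dSurf2Vol ... -map_func ave -f_steps 10 -f_index points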

    Note: the following -f_pX_XX options are used to alter the
          locations of the segment endpoints, per node pair.
          The segments are directed, from the node on the first
          surface to the node on the second surface.  To modify
          the first endpoint, use a -f_p1_XX option, and use
          -f_pn_XX to modify the second.

    -f_p1_fr FRACTION      : offset p1 by a length fraction

        e.g. -f_p1_fr -0.2
        e.g. -f_p1_fr -0.2  -f_pn_fr 0.2

        This option moves the first endpoint, p1, by a distance
        of the FRACTION times the original segment length.  If
        the FRACTION is positive, it moves in the direction of
        the second endpoint, pn.

        In the example, p1 is moved by 20% away from pn, which
        will increase the length of each segment.

    -f_pn_fr FRACTION      : offset pn by a length fraction

        e.g. -f_pn_fr  0.2
        e.g. -f_p1_fr -0.2  -f_pn_fr 0.2

        This option moves pn by a distance of the FRACTION times
        the original segment length, in the direction from p1 to
        pn.  So a positive fraction extends the segment, and a
        negative fraction reduces it.

        In the example above, using 0.2 adds 20% to the segment
        length past the original pn.

    -f_p1_mm DISTANCE      : offset p1 by a distance in mm.

        e.g. -f_p1_mm -1.0
        e.g. -f_p1_mm -1.0  -f_pn_mm 1.0

        This option moves p1 by DISTANCE mm., in the direction
        of pn.  If the DISTANCE is positive, the segment gets
        shorter.  If DISTANCE is negative, the segment will get
        longer.

        In the example, p1 is moved away from pn, extending the
        segment by 1 millimeter.

    -f_pn_mm DISTANCE      : offset pn by a distance in mm.

        e.g. -f_pn_mm  1.0
        e.g. -f_p1_mm -1.0  -f_pn_mm 1.0

        This option moves pn by DISTANCE mm., in the direction
        from the first point to the second.  So if DISTANCE is
        positive, the segment will get longer.  If DISTANCE is
        negative, the segment will get shorter.

        In the example, pn is moved 1 millimeter farther from
        p1, extending the segment by that distance.
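
    As a worked illustration (numbers invented): for a node pair
    whose segment is 10 mm long, '-f_p1_fr -0.2 -f_pn_fr 0.2'
    extends each end by 2 mm, yielding a 14 mm segment, while
    '-f_p1_mm -1.0 -f_pn_mm 1.0' extends each end by exactly
    1 mm, regardless of the original segment length.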

  ------------------------------
  GENERAL OPTIONS:

    -cmask MASK_COMMAND    : command for dataset mask

        e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'

        This option will produce a mask to be applied to the
        output dataset.  Note that this mask should form a
        single sub-brick.

        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!)).

        See '3dmaskdump -help' for more information.

    -data_expr EXPRESSION  : apply expression to surface input

        e.g. -data_expr 17
        e.g. -data_expr '(a+b+c+d)/4'
        e.g. -data_expr '(sin(a)+sin(b))/2'

        This expression is applied to the list of data values
        from the surface data file input via '-sdata_1D'.  The
        expression is applied for each node or node pair, to the
        list of data values corresponding to that node.

        The letters 'a' through 'z' may be used as input, and
        refer to columns 1 through 26 of the data file (where
        column 0 is a surface node index).  The data file must
        have enough columns to support the expression.  It is
        valid to have a constant expression without a data file.
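
        As a small worked example (values invented): with
        '-data_expr (a+b)/2' and a data file row of '12 4.0 6.0',
        node 12 contributes the value (4.0+6.0)/2 = 5.0.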

    -datum DTYPE           : set data type in output dataset

        e.g. -datum short
        default: same as that of grid parent

        This option specifies the data type for the output AFNI
        dataset.  Valid choices are byte, short and float, which
        are 1, 2 and 4 bytes for each data point, respectively.

    -debug LEVEL           : verbose output

        e.g. -debug 2

        This option is used to print out status information 
        during the execution of the program.  Current levels are
        from 0 to 5.

    -dnode DEBUG_NODE      : extra output for that node

        e.g. -dnode 123456

        This option requests additional debug output for the
        given surface node.  This index is with respect to the
        input surface (included in the spec file, or through the
        '-surf_xyz_1D' option).

        This will have no effect without the '-debug' option.

    -dvoxel DEBUG_VOXEL    : extra output for that voxel

        e.g. -dvoxel 234567

        This option requests additional debug output for the
        given volume voxel.  This 1-D index is with respect to
        the output AFNI dataset.  One good way to find a voxel
        index to supply is from output via the '-dnode' option.

        This will have no effect without the '-debug' option.

    -hist                  : show revision history

        Display module history over time.

    -help                  : show this help

        If you can't get help here, please get help somewhere.

    -noscale               : no scale factor in output dataset

        If the output dataset is an integer type (byte, short,
        or int), then the output dataset may end up with a
        scale factor attached (see 3dcalc -help).  With this
        option, the output dataset will not be scaled.

    -sxyz_orient_as_gpar   : assume gpar orientation for sxyz

        This option specifies that the surface coordinate points
        in the '-surf_xyz_1D' option file have the orientation
        of the grid parent dataset.

        When the '-surf_xyz_1D' option is applied, the surface
        coordinates are assumed to be in Dicom orientation by
        default.  This '-sxyz_orient_as_gpar' option overrides
        the Dicom default, specifying that the node coordinates
        are in the same orientation as the grid parent dataset.

        See the '-surf_xyz_1D' option for more information.

    -version               : show version information

        Show version and compile date.

------------------------------------------------------------

  Author: R. Reynolds  - version  3.6a (March 22, 2005)

                (many thanks to Z. Saad and R.W. Cox)




AFNI program: 3dSurfMask

Usage: 3dSurfMask <-i_TYPE SURFACE> <-prefix PREFIX>
                <-grid_parent GRID_VOL> [-sv SURF_VOL] [-mask_only]
 
  Creates a volumetric dataset that marks the inside of the surface.
  Voxels in the output dataset are set to the following values:
     0: Voxel outside surface
     1: Voxel just outside the surface. This means the voxel
        center is outside the surface but inside the 
        bounding box of a triangle in the mesh. 
     2: Voxel intersects the surface (a triangle), but center lies outside.
     3: Voxel contains a surface node.
     4: Voxel intersects the surface (a triangle), center lies inside surface. 
     5: Voxel just inside the surface. This means the voxel
        center is inside the surface and inside the 
        bounding box of a triangle in the mesh. 
     6: Voxel inside the surface. 

  Mandatory Parameters:
     -i_TYPE SURFACE: Specify input surface.
             You can also use the -t*, -spec, and -surf
             methods to input surfaces. See below
             for more details.
     -prefix PREFIX: Prefix of output dataset.
     -grid_parent GRID_VOL: Specifies the grid for the
                  output volume.
  Other parameters:
     -mask_only: Produce an output dataset where voxels
                 are 1 inside the surface and 0 outside,
                 instead of the more nuanced output above.
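
  For example, a minimal hypothetical command (file names invented),
  letting the program guess the surface type from the extension:

     3dSurfMask -i lh.pial.asc -sv fred_anat+orig \
                -grid_parent fred_anat+orig -prefix lh_pial_mask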

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: 3dSurfMaskDump

3dSurfMaskDump - dump ascii dataset values corresponding to a surface

This program is used to display AFNI dataset values that
correspond to a surface.  The surface points are mapped to xyz
coordinates, according to the SURF_VOL (surface volume) AFNI
dataset.  These coordinates are then matched to voxels in other
AFNI datasets.  So given any other AFNI dataset, this program
can output all of the sub-brick values that correspond to each
of the surface locations.  The user also has options to mask
regions for output.

Different mappings are allowed from the surface(s) to the grid
parent dataset.  The mapping function is a required parameter to
the program.

The current mapping functions are:

    ave       : for each node pair (from 2 surfaces), output the
                average of all voxel values along that line
                segment
    mask      : each node in the surface is mapped to one voxel
    midpoint  : for each node pair (from 2 surfaces), output the
                dataset value at their midpoint (in xyz space)

  usage: 3dSurfMaskDump [options] -spec SPEC_FILE -sv SURF_VOL \
                    -grid_parent AFNI_DSET -map_func MAP_FUNC

  examples:

    3dSurfMaskDump                       \
       -spec         fred.spec                \
       -sv           fred_anat+orig           \
       -grid_parent  fred_anat+orig           \
       -map_func     mask

    3dSurfMaskDump                       \
       -spec         fred.spec                               \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_epi+orig[0]'                       \
       -map_func     mask                                    \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -debug        2                                       \
       -output       fred_surf_vals.txt

    3dSurfMaskDump                       \
       -spec         fred.spec                               \
       -sv           fred_anat+orig                          \
       -grid_parent  fred_anat+orig                          \
       -map_func     ave                                     \
       -m2_steps     10                                      \
       -m2_index     nodes                                   \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -output       fred_surf_ave.txt


  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE        : SUMA spec file

        e.g. -spec fred.spec

        The surface specification file contains the list of
        mappable surfaces that are used.

        See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.

    -sv SURFACE_VOLUME     : AFNI dataset

        e.g. -sv fred_anat+orig

        This is the AFNI dataset that the surface is mapped to.
        This dataset is used for the initial surface node to xyz
        coordinate mapping, in the Dicom orientation.

    -grid_parent AFNI_DSET : AFNI dataset

        e.g. -grid_parent fred_function+orig

        This dataset is used as a grid and orientation master
        for the output.  Output coordinates are based upon
        this dataset.

    -map_func MAP_FUNC     : surface to dataset function

        e.g. -map_func ave
        e.g. -map_func ave -m2_steps 10
        e.g. -map_func ave -m2_steps 10 -m2_index nodes
        e.g. -map_func mask
        e.g. -map_func midpoint

        Given one or more surfaces, there are many ways to
        select voxel locations, and to select corresponding
        values for the output dataset.  Some of the functions
        will have separate options.

        The current mapping functions are:

          ave      : Given 2 related surfaces, for each node
                     pair, output the average of the dataset
                     values located along the segment joining
                     those nodes.

                  -m2_steps NUM_STEPS :

                     The -m2_steps option may be added here, to
                     specify the number of points to use in the
                     average.  The default and minimum is 2.

                     e.g.  -map_func ave -m2_steps 10
                     default: -m2_steps 2

                  -m2_index TYPE :

                     The -m2_index option is used to specify
                     whether the average is taken by indexing
                     over distinct nodes or over distinct voxels.

                     For instance, when taking the average along
                     one node pair segment using 10 node steps,
                     perhaps 3 of those nodes may occupy one
                     particular voxel.  In this case, does the
                     user want the voxel counted only once, or 3
                     times?  Each case makes sense.
                     
                     Note that this will only make sense when
                     used along with the '-m2_steps' option.
                     
                     Possible values are "nodes", "voxels".
                     The default value is voxels.  So each voxel
                     along a segment will be counted only once.
                     
                     e.g.  -m2_index nodes
                     e.g.  -m2_index voxels
                     default: -m2_index voxels

          mask     : For each surface xyz location, output the
                     dataset values of each sub-brick.

          midpoint : Given 2 related surfaces, for each node
                     pair, output the dataset value with xyz
                     coordinates at the midpoint of the nodes.

  options:

    -cmask MASK_COMMAND    : (optional) command for dataset mask

        e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'

        This option will produce a mask to be applied to the
        output dataset.  Note that this mask should form a
        single sub-brick.

        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!)).

        See '3dmaskdump -help' for more information.

    -debug LEVEL           :  (optional) verbose output

        e.g. -debug 2

        This option is used to print out status information 
        during the execution of the program.  Current levels are
        from 0 to 4.

    -help                  : show this help

        If you can't get help here, please get help somewhere.

    -outfile OUTPUT_FILE   : specify a file for the output

        e.g. -outfile some_output_file
        e.g. -outfile mask_values_over_dataset.txt
        e.g. -outfile stderr
        default: write to stdout

        This specifies the file to which the output will be
        written.  Note that the output file must not already
        exist.

        Two special (valid) cases are stdout and stderr, either
        of which may be specified.

    -noscale               : no scale factor in output dataset

        If the output dataset is an integer type (byte, short,
        or int), then the output dataset may end up with a
        scale factor attached (see 3dcalc -help).  With this
        option, the output dataset will not be scaled.

    -version               : show version information

        Show version and compile date.


  Author: R. Reynolds  - version 2.3 (July 21, 2003)

                (many thanks to Z. Saad and R.W. Cox)




AFNI program: 3dSynthesize
Usage: 3dSynthesize options
Reads a '-cbucket' dataset and a '.xmat.1D' matrix from 3dDeconvolve,
and synthesizes a fit dataset using selected sub-bricks and
matrix columns.

Options (actually, the first 3 are mandatory)
---------------------------------------------
 -cbucket ccc = Read the dataset 'ccc', which should have been
                 output from 3dDeconvolve via the '-cbucket' option.
 -matrix mmm  = Read the matrix 'mmm', which should have been
                 output from 3dDeconvolve via the '-x1D' option.
 -select sss  = Selects specific columns from the matrix (and the
                 corresponding coefficient sub-bricks from the
                 cbucket).  The string 'sss' can be of the forms:
                   baseline  = All baseline coefficients.
                   polort    = All polynomial baseline coefficients
                               (skipping -stim_base coefficients).
                   allfunc   = All coefficients that are NOT marked
                               (in the -matrix file) as being in
                               the baseline (i.e., all -stim_xxx
                               values except those with -stim_base)
                   allstim   = All -stim_xxx coefficients, including
                               those with -stim_base.
                   all       = All coefficients (should give results
                               equivalent to '3dDeconvolve -fitts').
                   something = All columns/coefficients that match
                               this -stim_label from 3dDeconvolve
                               [to be precise, all columns whose   ]
                               [-stim_label starts with 'something']
                               [will be selected for inclusion.    ]
                   digits    = Columns can also be selected by
                               numbers (starting at 0), or number
                               ranges of the form 3..7 and 3-7.
                               [A string is a number range if it]
                               [comprises only digits and the   ]
                               [characters '.' and/or '-'.      ]
                               [Otherwise, it is used to match  ]
                               [a -stim_label.                  ]
                 More than one '-select sss' option can be used, or
                 you can put more than one string after the '-select',
                 as in this example:
                   3dSynthesize -matrix fred.xmat.1D -cbucket fred+orig \
                                -select baseline FaceStim -prefix FS
                 which synthesizes the baseline and 'FaceStim'
                 responses together, ignoring any other stimuli
                 in the dataset and matrix.
 -dry         = Don't compute the output, just check the inputs.
 -TR dt       = Set TR in the output to 'dt'.  The default value
                 of TR is read from the header of the matrix file.
 -prefix ppp  = Output result into dataset with name 'ppp'.

 -cenfill xxx = Determines how censored time points from the
                 3dDeconvolve run will be filled.  'xxx' is one of:
                   zero    = 0s will be put in at all censored times
                   nbhr    = average of non-censored neighboring times
                   none    = don't put the censored times in at all
                             (in which  case the created  dataset is)
                             (shorter than the input to 3dDeconvolve)
                  If you don't give some -cenfill option, the default
                  operation is 'zero'.  This default differs from that of
                  previous versions of this program, which used 'none'.
          **N.B.: You might like the program to compute the model fit
                  at the censored times, like it does at all others.
                  This CAN be done if you input the matrix file saved
                  by the '-x1D_uncensored' option in 3dDeconvolve.
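
          For example (file names invented), to compute the model fit
          at all time points, including the censored ones:

            3dSynthesize -cbucket fred_cbuc+orig             \
                         -matrix fred_uncensored.xmat.1D     \
                         -select all -prefix fred_full_fitts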

NOTES:
-- You could do the same thing in 3dcalc, but this way is simpler
   and faster.  But less flexible, of course.
-- The output dataset is always stored as floats.
-- The -cbucket dataset must have the same number of sub-bricks as
   the input matrix has columns.
-- Each column in the matrix file is a time series, used to model
   some component of the data time series at each voxel.
-- The sub-bricks of the -cbucket dataset give the weighting
   coefficients for these model time series, at each voxel.
-- If you want to calculate a time series dataset wherein the original
   time series data has the baseline subtracted, then you could
   use 3dSynthesize to compute the baseline time series dataset, and
   then use 3dcalc to subtract that dataset from the original dataset.
-- Other similar applications are left to your imagination.
-- To see the column labels stored in matrix file 'fred.xmat.1D', type
   the Unix command 'grep ColumnLabels fred.xmat.1D'; sample output:
 # ColumnLabels = "Run#1Pol#0 ; Run#1Pol#1 ; Run#2Pol#0 ; Run#2Pol#1 ;
                   FaceStim#0 ; FaceStim#1 ; HouseStim#0 ; HouseStim#1"
   which shows the 4 '-polort 1' baseline parameters from 2 separate
   imaging runs, and then 2 parameters each for 'FaceStim' and
   'HouseStim'.
-- The matrix file written by 3dDeconvolve has an XML-ish header
   before the columns of numbers, stored in '#' comment lines.
   If you want to generate your own 'raw' matrix file, without this
   header, you can still use 3dSynthesize, but then you can only use
   numeric '-select' options (or 'all').
-- When using a 'raw' matrix, you'll probably also want the '-TR' option.
-- When putting more than one string after '-select', do NOT combine
   these separate strings together in quotes.  If you do, they will be
   seen as a single string, which almost surely won't match anything.
-- Author: RWCox -- March 2007

++ Compile date = Mar 13 2009




AFNI program: 3dTSgen
++ 3dTSgen: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program generates an AFNI 3d+time data set.  The time series for 
each voxel is generated according to a user specified signal + noise  
model.                                                              

Usage:                                                                
3dTSgen                                                               
-input fname       fname = filename of prototype 3d + time data file  
[-inTR]            set the TR of the created timeseries to be the TR  
                     of the prototype dataset                         
                     [The default is to compute with TR = 1.]         
                     [The model functions are called for a  ]         
                     [time grid of 0, TR, 2*TR, 3*TR, ....  ]         
-signal slabel     slabel = name of (non-linear) signal model         
-noise  nlabel     nlabel = name of (linear) noise model              
-sconstr k c d     constraints for kth signal parameter:              
                      c <= gs[k] <= d                                 
-nconstr k c d     constraints for kth noise parameter:               
                      c+b[k] <= gn[k] <= d+b[k]                       
-sigma  s          s = std. dev. of additive Gaussian noise           
[-voxel num]       screen output for voxel #num                       
-output fname      fname = filename of output 3d + time data file     
                                                                      
                                                                      
The following commands generate individual AFNI 1 sub-brick datasets: 
                                                                      
[-scoef k fname]   write kth signal parameter gs[k];                  
                     output 'fim' is written to prefix filename fname 
[-ncoef k fname]   write kth noise parameter gn[k];                   
                     output 'fim' is written to prefix filename fname 
                                                                      
                                                                      
The following commands generate one AFNI 'bucket' type dataset:       
                                                                      
[-bucket n prefixname]   create one AFNI 'bucket' dataset containing  
                           n sub-bricks; n=0 creates default output;  
                           output 'bucket' is written to prefixname   
The mth sub-brick will contain:                                       
[-brick m scoef k label]   kth signal parameter regression coefficient
[-brick m ncoef k label]   kth noise parameter regression coefficient 
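
Example (file names invented; 'GammaVar' and 'Linear' are assumed here
to be signal and noise model names available in your AFNI model library):

  3dTSgen -input proto+orig -signal GammaVar -noise Linear \
          -sigma 1.0 -voxel 100 -output sim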

++ Compile date = Mar 13 2009




AFNI program: 3dTagalign
Usage: 3dTagalign [options] dset
Rotates/translates dataset 'dset' to be aligned with the master,
using the tagsets embedded in their .HEAD files.

Options:
 -master mset  = Use dataset 'mset' as the master dataset
                   [this is a nonoptional option]

 -nokeeptags   = Don't put transformed locations of dset's tags
                   into the output dataset [default = keep tags]

 -matvec mfile = Write the matrix+vector of the transformation to
                   file 'mfile'.  This can be used as input to the
                   '-matvec_in2out' option of 3dWarp, if you want
                   to align other datasets in the same way (e.g.,
                   functional datasets).

 -rotate       = Compute the best transformation as a rotation + shift.
                   This is the default.

 -affine       = Compute the best transformation as a general affine
                   map rather than just a rotation + shift.  In all
                   cases, the transformation from input to output
                   coordinates is of the form
                      [out] = [R] [in] + [V]
                   where [R] is a 3x3 matrix and [V] is a 3-vector.
                   By default, [R] is computed as a proper (det=1)
                   rotation matrix (3 parameters).  The '-affine'
                   option says to fit [R] as a general matrix
                   (9 parameters).
           N.B.: An affine transformation can rotate, rescale, and
                   shear the volume.  Be sure to look at the dataset
                   before and after to make sure things are OK.

 -rotscl       = Compute transformation as a rotation times an isotropic
                   scaling; that is, [R] is an orthogonal matrix times
                   a scalar.
           N.B.: '-affine' and '-rotscl' do unweighted least squares.

 -prefix pp    = Use 'pp' as the prefix for the output dataset.
                   [default = 'tagalign']
 -verb         = Print progress reports
 -dummy        = Don't actually rotate the dataset, just compute
                   the transformation matrix and vector.  If
                   '-matvec' is used, the mfile will be written.
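
Example (dataset names invented): align 'fred_anat+orig' to a tagged
master dataset and save the transformation for reuse with 3dWarp:

  3dTagalign -master master_anat+orig -matvec fred.matvec \
             -prefix fred_tagaligned fred_anat+orig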

Nota Bene:
* Cubic interpolation is used.  The transformation is carried out
  using the same methods as program 3dWarp.

Author: RWCox - 16 Jul 2000, etc.

++ Compile date = Mar 13 2009




AFNI program: 3dTcat
Concatenate sub-bricks from input datasets into one big 3D+time dataset.
Usage: 3dTcat options
where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='tcat']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -glueto fname = Append bricks to the end of the 'fname' dataset.
                       This command is an alternative to the -prefix 
                       and -session commands.                        
     -dry          = Execute a 'dry run'; that is, only print out
                       what would be done.  This is useful when
                       combining sub-bricks from multiple inputs.
     -verb         = Print out some verbose output as the program
                       proceeds (-dry implies -verb).
                       Using -verb twice results in quite lengthy output.
     -rlt          = Remove linear trends in each voxel time series loaded
                       from each input dataset, SEPARATELY.  That is, the
                       data from each dataset is detrended separately.
                       At least 3 sub-bricks from a dataset must be input
                       for this option to apply.
             Notes: (1) -rlt removes the least squares fit of 'a+b*t'
                          to each voxel time series; this means that
                          the mean is removed as well as the trend.
                          This effect makes it impractical to compute
                          the % Change using AFNI's internal FIM.
                    (2) To have the mean of each dataset time series added
                          back in, use this option in the form '-rlt+'.
                          In this case, only the slope 'b*t' is removed.
                    (3) To have the overall mean of all dataset time
                          series added back in, use this option in the
                          form '-rlt++'.  In this case, 'a+b*t' is removed
                          from each input dataset separately, and the
                          mean of all input datasets is added back in at
                          the end.  (This option will work properly only
                          if all input datasets use at least 3 sub-bricks!)
                    (4) -rlt can be used on datasets that contain shorts
                          or floats, but not on complex- or byte-valued
                          datasets.

Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.

SUB-BRICK SELECTION:
You can also add a sub-brick selection list after the end of the
dataset name.  This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output).  A sub-brick selection list looks like
one of the following forms:
  fred+orig[5]                     ==> use only sub-brick #5
  fred+orig[5,9,17]                ==> use #5, #9, and #17
  fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
  fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  fred+orig[0..$(3)]

You can also use a syntax based on the usage of the program count.
This would be most useful when randomizing (shuffling) the order of
the sub-bricks. Example:
  fred+orig[count -seed 2 5 11 s] is equivalent to something like:
  fred+orig[ 6, 5, 11, 10, 9, 8, 7] 
You could also do: fred+orig[`count -seed 2 -digits 1 -suffix ',' 5 11 s`]
but if you have lots of numbers, the command line would get too
long for the shell to process it properly. Omit the seed option if
you want the code to generate a seed automatically.
You cannot mix and match count syntax with other selection gimmicks.
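
For example, a hypothetical command (file names invented) that
concatenates three runs, dropping the first two sub-bricks of each:

  3dTcat -prefix all_runs 'run1+orig[2..$]' 'run2+orig[2..$]' \
         'run3+orig[2..$]'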

NOTES:
* The TR and other time-axis properties are taken from the
  first input dataset that is itself 3D+time.  If no input
  datasets contain such information, then TR is set to 1.0.
  This can be altered later using the 3drefit program.

* The sub-bricks are output in the order specified, which may
  not be the order in the original datasets.  For example, using
     fred+orig[0..$(2),1..$(2)]
  will cause the sub-bricks in fred+orig to be output into the
  new dataset in an interleaved fashion.  Using
     fred+orig[$..0]
  will reverse the order of the sub-bricks in the output.
  If the -rlt option is used, the sub-bricks selected from each
  input dataset will be re-ordered into the output dataset, and
  then this sequence will be detrended.

* You can use the '3dinfo' program to see how many sub-bricks
  a 3D+time or a bucket dataset contains.

* The '$', '(', ')', '[', and ']' characters are special to
  the shell, so you will have to escape them.  This is most easily
  done by putting the entire dataset plus selection list inside
  single quotes, as in 'fred+orig[5..7,9]'.

* You may wish/need to use the 3drefit program on the output
  dataset to modify some of the .HEAD file parameters.

++ Compile date = Mar 13 2009




AFNI program: 3dTcorrMap
Usage: 3dTcorrMap [options]
For each voxel, computes the correlation between it and all
other voxels, and averages these into the output.  Supposed
to give a measure of how 'connected' each voxel is to the
rest of the brain.  (As if life were that simple.)

Options:
  -input dd = Read 3D+time dataset 'dd' (a mandatory option).

  -Mean pp  = Save average correlations into dataset prefix 'pp'
  -Zmean pp = Save tanh of mean arctanh(correlation) into 'pp'
  -Qmean pp = Save RMS(correlation) into 'pp'
              (at least one of these output options must be given)

  -polort m = Remove polynomial trend of order 'm', for m=-1..19.
               [default is m=1; removal is by least squares].
               Using m=-1 means no detrending; this is only useful
               for data/information that has been pre-processed.
  -ort rr   = 1D file with other time series to be removed
               (via least squares regression) before correlation.

  -mask mm  = Read dataset 'mm' as a voxel mask.
  -automask = Create a mask from the input dataset.
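
For example, a hypothetical command (file names invented):

  3dTcorrMap -input rest+orig -automask -polort 2 -Mean rest_conn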

-- This purely experimental program is somewhat slow.
-- For Kyle, AKA the new Pat.
-- RWCox - August 2008.

++ Compile date = Mar 13 2009




AFNI program: 3dTcorrelate
Usage: 3dTcorrelate [options] xset yset
Computes the correlation coefficient between corresponding voxel
time series in two input 3D+time datasets 'xset' and 'yset', and
stores the output in a new 1 sub-brick dataset.

Options:
  -pearson  = Correlation is the normal Pearson (product moment)
                correlation coefficient [default].
  -spearman = Correlation is the Spearman (rank) correlation
                coefficient.
  -quadrant = Correlation is the quadrant correlation coefficient.

  -polort m = Remove polynomial trend of order 'm', for m=-1..3.
                [default is m=1; removal is by least squares].
                Using m=-1 means no detrending; this is only useful
                for data/information that has been pre-processed.

  -ort r.1D = Also detrend using the columns of the 1D file 'r.1D'.
                Only one -ort option can be given.  If you want to use
                more than one, create a temporary file using 1dcat.

  -autoclip = Clip off low-intensity regions in the two datasets,
  -automask =  so that the correlation is only computed between
               high-intensity (presumably brain) voxels.  The
               intensity level is determined the same way that
               3dClipLevel works.

  -prefix p = Save output into dataset with prefix 'p'
               [default prefix is 'Tcorr'].
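
For example, a hypothetical command (dataset names invented):

  3dTcorrelate -pearson -polort 1 -automask -prefix XYcorr \
               xset+orig yset+orig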

Notes:
 * The output dataset is functional bucket type, with one
    sub-brick, stored in floating point format.
 * Because both time series are detrended prior to correlation,
    the results will not be identical to using FIM or FIM+ to
    calculate correlations (whose ideal vector is not detrended).
 * This is a quick hack for Mike Beauchamp.  Thanks for you-know-what.

-- RWCox - Aug 2001

++ Compile date = Mar 13 2009




AFNI program: 3dTfitter
Usage: 3dTfitter [options]
* At each voxel, assembles and solves a set of linear equations.
* Output is a bucket dataset with the parameters at each voxel.
* Can also get output of fitted time series at each voxel.
* Can also deconvolve with a known kernel function (e.g., HRF model),
  in which case the output dataset is a new time series dataset.

--------
Options:
--------
  -RHS rset = Specifies the right-hand-side 3D+time dataset.
                ('rset' can also be a 1D file with 1 column)
             * Exactly one '-RHS' option must be given to 3dTfitter.

  -LHS lset = Specifies a column (or columns) of the left-hand-side matrix.
             * More than one 'lset' can follow the '-LHS' option, but each
               input filename must NOT start with the '-' character!
             * Or you can use multiple '-LHS' options, if you prefer.
             * Each 'lset' can be a 3D+time dataset, or a 1D file
               with 1 or more columns.
             * A 3D+time dataset defines one column in the LHS matrix.
              ++ If 'rset' is a 1D file, then you cannot input a 3D+time
                 dataset with '-LHS'.
              ++ If 'rset' is a 3D+time dataset, then the 3D+time dataset
                 input with '-LHS' must have the same voxel grid as 'rset'.
             * A 1D file defines as many columns in the LHS matrix as
               are in the file.
              ++ For example, you could input the LHS matrix from the
                 .xmat.1D file output by 3dDeconvolve, if you wanted
                 to repeat the same linear regression using 3dTfitter,
                 for some bizarre unfathomable twisted reason.
             ** If some LHS vector is very small (less than 0.000333 times
                the magnitude of the largest LHS vector), then it will be ignored
               in the fitting.  This feature allows the case where some LHS
               dataset voxels are all zero.  [Per Rasmus Birn et al.]
           *** Columns are assembled in the order given on the command line,
               which means that LHS parameters will be output in that order!
           *** If all LHS inputs are 1D vectors AND you are using least
               squares fitting without constraints, then 3dDeconvolve would
               be more efficient, since each voxel would have the same set
               of equations -- a fact that 3dDeconvolve exploits for speed.
              ++ But who cares about CPU time?  Burn, baby, burn!

  -polort p = Add 'p+1' Legendre polynomial columns to the LHS matrix.
             * These columns are added to the LHS matrix AFTER all other
               columns specified by the '-LHS' option, even if the '-polort'
               option appears before '-LHS' on the command line.
             * By default, NO polynomial columns will be used.

  -label lb = Specifies a sub-brick label for the output LHS parameter dataset.
             * More than one 'lb' can follow the '-label' option;
               however, each label must NOT start with the '-' character!
             * Labels are applied in the order given.
             * Normally, you would provide exactly as many labels as
               LHS columns.  If not, the program invents some labels.

  -FALTUNG fset fpre pen fac
            = Specifies a convolution (German: Faltung) model to be
              added to the LHS matrix.  Four arguments follow the option:
         -->** 'fset' is a 3D+time dataset or a 1D file that specifies
               the known kernel of the convolution.
             * fset's time point [0] is the 0-lag point in the kernel,
               [1] is the 1-lag into the past point, etc.
              ++ Call the data z(t), the unknown signal s(t), and the
                 known kernel h(t).  The equations being solved for
                 the set of all s(t) values are of the form
                   z(t) = h(0)s(t) + h(1)s(t-1) + ... + h(L)s(t-L) + noise
                 where L is the last value in the kernel function.
            ++++ N.B.: The TR of 'fset' and the TR of the RHS dataset
                       MUST be the same, or the deconvolution results
                       will be meaningless drivel!
         -->** 'fpre' is the prefix for the output time series to
               be created -- it will have the same length as the
               input 'rset' time series.
              ++ If you don't want this time series (why?), set 'fpre'
                 to be the string 'NULL'.
         -->** 'pen' selects the type of penalty function to be
               applied to constrain the deconvolved time series:
              ++ The following penalty functions are available:
                   P0[s] = f^q * sum{ |s(t)|^q }
                   P1[s] = f^q * sum{ |s(t)-s(t-1)|^q }
                   P2[s] = f^q * sum{ |s(t)-0.5*s(t-1)-0.5*s(t+1)|^q }
                 where s(t) is the deconvolved time series;
                 where q=1 for L1 fitting, q=2 for L2 fitting;
                 where f is the value of 'fac' (defined below).
                   P0 tries to keep s(t) itself small
                   P1 tries to keep point-to-point fluctuations
                      in s(t) small (1st derivative)
                   P2 tries to keep 3 point fluctuations
                      in s(t) small (2nd derivative)
              ++ In L2 regression, these penalties are like Wiener
                 deconvolution with noise spectra proportional to
                   P0 ==> f^2 (constant in frequency)
                   P1 ==> f^2 * freq^2
                   P2 ==> f^2 * freq^4
                 However, 3dTfitter does deconvolution in the time
                 domain, not the frequency domain, and you can choose
                 to use L2 or L1 regression.
              ++ The value of 'pen' is one of the following 7 cases:
                     0 = use P0 only
                     1 = use P1 only
                     2 = use P2 only
                    01 = use P0+P1 (the sum of these two functions)
                    02 = use P0+P2
                    12 = use P1+P2
                   012 = use P0+P1+P2 (sum of three penalty functions)
                 If 'pen' does not contain any of the digits 0, 1, or 2,
                 then '01' will be used.
         -->** 'fac' is the positive weight for the penalty function:
              ++ if fac < 0, then the program chooses a penalty factor
                 for each voxel separately and then scales that by -fac.
              ++ use fac = -1 to get this voxel-dependent factor unscaled.
              ++ fac = 0 is a special case: the program chooses a range
                 of penalty factors, does the deconvolution regression
                 for each one, and then chooses the fit it likes best
                 (as a tradeoff between fit error and solution size).
              ++ fac = 0 will be MUCH slower since it solves about 20
                 problems for each voxel and then chooses what it likes.
                  Use 'setenv AFNI_TFITTER_VERBOSE YES' to get some
                  progress reports, if you want to see what it is doing.
              ++ SOME penalty has to be applied, since otherwise the
                 set of linear equations for s(t) is under-determined
                 and/or ill-conditioned!
            ** If '-LHS' is also used, those basis vectors can be
               thought of as a baseline to be regressed out at the
               same time the convolution model is fitted.
              ++ When '-LHS' supplies a baseline, it is important
                 that penalty type 'pen' include '0', so that the
                 collinearity between convolution with a constant s(t)
                 and a constant baseline can be resolved!
              ++ Instead of using a baseline here, you could project the
                 baseline out of a dataset or 1D file using 3dDetrend,
                 before using 3dTfitter.
           *** At most one '-FALTUNG' option can be used!!!
           *** Consider the time series model
                 Z(t) = K(t)*S(t) + baseline + noise,
               where Z(t) = data time series (in each voxel)
                     K(t) = kernel (e.g., hemodynamic response function)
                     S(t) = stimulus time series
                 baseline = constant, drift, etc.
                    and * = convolution in time
               Then 3dDeconvolve solves for K(t) given S(t), and 3dTfitter
               solves for S(t) given K(t).  The difference between the two
               cases is that K(t) is presumed to be causal and have limited
               support, whereas S(t) is a full-length time series.
        ****** Deconvolution is a tricky business, so be careful out there!
              ++ e.g., Experiment with the different parameters to make
                 sure the results in your type of problems make sense.
              ++ There is no guarantee that the automatic selection of
                 of the penalty factor will give usable results for
                 your problem!
              ++ You should probably use a mask dataset with -FALTUNG,
                 since deconvolution can often fail on pure noise
                 time series.
              ++ Unconstrained (no '-cons' options) least squares ('-lsqfit')
                 is normally the fastest solution method for deconvolution.
                 This, however, may only matter if you have a very long input
                 time series dataset (e.g., more than 1000 time points).
              ++ For unconstrained least squares deconvolution, a special
                 sparse matrix algorithm is used for speed.  If you wish to
                 disable this for some reason, set environment variable
                 AFNI_FITTER_RCMAT to NO before running the program.
              ++ Nevertheless, a problem with more than 1000 time points
                 will probably take a LONG time to run, especially if
                 'fac' is chosen to be 0.

  -lsqfit   = Solve equations via least squares [the default method].
             * '-l2fit' is a synonym for this option
             * This is sometimes called L2 regression by mathematicians.

  -l1fit    = Solve equations via least sum of absolute residuals.
             * This is sometimes called L1 regression by mathematicians.
             * L1 fitting is usually slower than L2 fitting, but
               is perhaps less sensitive to outliers in the data.
              ++ L1 deconvolution might give nicer looking results
                 when you expect the deconvolved signal s(t) to
                 have large-ish sections where s(t) = 0.
             * L2 fitting is statistically more efficient when the
               noise is known to be normally (Gaussian) distributed.

  -consign  = Follow this option with a list of LHS parameter indexes
              to indicate that the sign of some output LHS parameters
              should be constrained in the solution; for example:
                 -consign +1 -3
              which indicates that LHS parameter #1 (from the first -LHS)
              must be non-negative, and that parameter #3 must be
              non-positive.  Parameter #2 is unconstrained (e.g., the
              output can be positive or negative).
             * Parameter counting starts with 1, and corresponds to
               the order in which the LHS columns are specified.
             * Unlike '-LHS' or '-label', only one '-consign' option
               can be used.
             * Do NOT give the same index more than once after
                '-consign' -- you can't specify that a coefficient
               is both non-negative and non-positive, for example!
           *** Constraints can be used with '-l1fit' AND with '-l2fit'.
           *** '-consign' constraints only apply to the '-LHS'
               fit parameters.  To constrain the '-FALTUNG' output,
               use the option below.
             * If '-consign' is not used, the signs of the fitted
               LHS parameters are not constrained.

  -consFAL c= Constrain the deconvolution time series from '-FALTUNG'
              to be positive if 'c' is '+' or to be negative if
              'c' is '-'.
             * There is no way at present to constrain the deconvolved
               time series s(t) to be positive in some regions and
               negative in others.
             * If '-consFAL' is not used, the sign of the deconvolved
               time series is not constrained.

  -prefix p = Prefix for the output dataset (LHS parameters) filename.
             * Output datasets from 3dTfitter are always in float format.
             * If you don't give this option, 'Tfitter' is the prefix.
             * If you don't want this dataset, use 'NULL' as the prefix.
             * If you are doing deconvolution and do not also give any
               '-LHS' option, then this file will not be output, since
               it comprises the fit parameters for the '-LHS' vectors.
           *** If the input '-RHS' file is a .1D file, normally the
               output files are written in the AFNI .3D ASCII format,
               where each row contains the time series data for one
               voxel.  If you want to have these files written in the
               .1D format, with time represented down the column
               direction, be sure to put '.1D' on the end of the prefix,
               as in '-prefix Elvis.1D'.  If you use '-' or 'stdout' as
               the prefix, the resulting 1D file will be written to the
               terminal.

  -fitts ff = Prefix filename for the output fitted time series dataset.
             * Which is always in float format.
             * Which will not be written if this option isn't given!
           *** If you want the residuals, subtract this time series
               from the '-RHS' input using 3dcalc (or 1deval).

  -mask ms  = Read in dataset 'ms' as a mask; only voxels with nonzero
              values in the mask will be processed.  Voxels falling
              outside the mask will be set to all zeros in the output.
             * Voxels whose time series are all zeros will not be
               processed, even if they are inside the mask.

  -quiet    = Don't print the fun fun fun progress report messages.
             * Why would you want to hide these delightful missives?

----------------------
ENVIRONMENT VARIABLES:
----------------------
 AFNI_TFITTER_VERBOSE  =  YES means to print out information during
                          the fitting calculations.
                         ++ Automatically turned on for 1 voxel -RHS inputs.
 AFNI_TFITTER_P1SCALE  =  number > 0 will scale the P1 penalty by
                          this value (e.g., to count it more)
 AFNI_TFITTER_P2SCALE  =  number > 0 will scale the P2 penalty by
                          this value

------------
NON-Options:
------------
* There is no option to produce statistical estimates of the
  significance of the parameter estimates.
  ++ 3dTcorrelate might be useful, to compute the correlation
     between the '-fitts' time series and the '-RHS' input data.
* There are no options for censoring or baseline generation.
  ++ You could generate some baseline 1D files using 1deval, perhaps.
* There is no option to constrain the range of the output parameters,
  except the semi-infinite ranges provided by '-consign' and/or '-consFAL'.
* The '-jobs N' option, to use multiple CPUs as in 3dDeconvolve,
  is not available at the present chronosynclastic infundibulum.

------------------
Contrived Example:
------------------
The datasets 'atm' and 'btm' are assumed to have 99 time points each.
We use 3dcalc to create a synthetic combination of these plus a constant
plus Gaussian noise, then use 3dTfitter to fit the weights of these
3 functions to each voxel, using 4 different methods.  Note the use of
the input 1D time series '1D: 99@1' to provide the constant term.

 3dcalc -a atm+orig -b btm+orig -expr '-2*a+b+gran(100,20)' -prefix 21 -float
 3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F2u -l2fit
 3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F1u -l1fit
 3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F1c -l1fit \
           -consign -1 +3
 3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' -prefix F2c -l2fit \
           -consign -1 +3

In the absence of noise and error, the output datasets should be
  #0 sub-brick = -2.0 in all voxels
  #1 sub-brick = +1.0 in all voxels
  #2 sub-brick = +100.0 in all voxels
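
As suggested in the '-fitts' description above, the residuals can
be computed by saving the fitted time series and subtracting it
with 3dcalc (a sketch; 'Ffit' and 'Fres' are hypothetical prefixes):

 3dTfitter -RHS 21+orig -LHS atm+orig btm+orig '1D: 99@1' \
           -prefix F2u -l2fit -fitts Ffit
 # residual = data minus fit
 3dcalc -a 21+orig -b Ffit+orig -expr 'a-b' -prefix Fres -float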

---------------------
Yet More Contrivance:
---------------------
You can input a 1D file for the RHS dataset, as in the example below,
to fit a single time series to a weighted sum of other time series:

 1deval -num 30 -expr 'cos(t)' > Fcos.1D
 1deval -num 30 -expr 'sin(t)' > Fsin.1D
 1deval -num 30 -expr 'cos(t)*exp(-t/20)' > Fexp.1D
 3dTfitter -quiet -RHS Fexp.1D -LHS Fcos.1D Fsin.1D -prefix -

* Note the use of the '-' as a prefix to write the results
  (just 2 numbers) to stdout, and the use of '-quiet' to hide
  the divertingly funny and informative progress messages.
* For the Jedi AFNI Masters out there, the above example can be carried
  out using a single complicated command line:

 3dTfitter -quiet -RHS `1deval -1D: -num 30 -expr 'cos(t)*exp(-t/20)'` \
                  -LHS `1deval -1D: -num 30 -expr 'cos(t)'`            \
                       `1deval -1D: -num 30 -expr 'sin(t)'`            \
                  -prefix - 

  resulting in the single output line below:

 0.535479 0.000236338

  which are respectively the fit coefficients of 'cos(t)' and 'sin(t)'.
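
Those coefficients can be used to reconstruct the fitted curve for
inspection (a sketch; 'Ffit.1D' is a hypothetical filename):

 # rebuild the fit from the two coefficients printed above
 1deval -num 30 -expr '0.535479*cos(t)+0.000236338*sin(t)' > Ffit.1D
 # overlay the fitted and target curves
 1dplot -one Fexp.1D Ffit.1D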

--------------------------------
Contrived Deconvolution Example:
--------------------------------
(1) Create a 101 point 1D file that is a block of 'activation'
    between points 40..50, convolved with a triangle wave kernel
    (the '-iresp' input below):
       3dConvolve -input1D -polort -1 -num_stimts 1     \
                  -stim_file 1 '1D: 40@0 10@1 950@0'    \
                  -stim_minlag 1 0 -stim_maxlag 1 5     \
                  -iresp 1 '1D: 0 1 2 3 2 1' -nlast 100 \
            | grep -v Result | grep -v '^$' > F101.1D

(2) Create a 3D+time dataset with this time series in each
    voxel, plus noise that increases with voxel 'i' index:
       3dUndump -prefix Fjunk -dimen 100 100 1
       3dcalc -a Fjunk+orig -b F101.1D     \
              -expr 'b+gran(0,0.04*(i+1))' \
              -float -prefix F101d
       /bin/rm -f Fjunk+orig.*

(3) Deconvolve, then look what you get by running AFNI:
       3dTfitter -RHS F101d+orig -l1fit \
                 -FALTUNG '1D: 0 1 2 3 2 1' F101d_fal1 012 0.0
       3dTfitter -RHS F101d+orig -l2fit \
                 -FALTUNG '1D: 0 1 2 3 2 1' F101d_fal2 012 0.0

(4) View F101d_fal1+orig, F101d_fal2+orig, and F101d+orig in AFNI,
    (in Axial image and graph viewers) and see how the fit quality
    varies with the noise level and the regression type -- L1 or
    L2 regression.  Note that the default 'fac' level of 0.0 was
    selected in the commands above, which means the program selects
    the penalty factor for each voxel, based on the size of the
    data time series fluctuations.

(5) Add logistic noise to the noise-free 1D time series, then deconvolve
    and plot the results directly to the screen, with L1 and L2 fitting:
      1deval -a F101.1D -expr 'a+lran(.5)' > F101n.1D
      3dTfitter -RHS F101n.1D -l1fit \
                -FALTUNG '1D: 0 1 2 3 2 1' stdout 01 0.0 | 1dplot -stdin &
      3dTfitter -RHS F101n.1D -l2fit \
                -FALTUNG '1D: 0 1 2 3 2 1' stdout 01 0.0 | 1dplot -stdin &
  **N.B.: you can only use 'stdout' as an output filename
          when the output will be written as a 1D file!

************************************************************************
** RWCox - Feb 2008.                                                  **
** Created for the imperial purposes of John A Butman, MD PhD.        **
** But may be useful for some other well-meaning souls out there.     **
************************************************************************

++ Compile date = Mar 13 2009




AFNI program: 3dThreetoRGB
Usage #1: 3dThreetoRGB [options] dataset
Usage #2: 3dThreetoRGB [options] dataset1 dataset2 dataset3

Converts 3 sub-bricks of input to an RGB-valued dataset.
* If you have 1 input dataset, then sub-bricks [0..2] are
   used to form the RGB components of the output.
* If you have 3 input datasets, then the [0] sub-brick of
   each is used to form the RGB components, respectively.
* RGB datasets have 3 bytes per voxel, with values ranging
   from 0..255.

Options:
  -prefix ppp = Write output into dataset with prefix 'ppp'.
                 [default='rgb']
  -scale fac  = Multiply input values by 'fac' before using
                 as RGB [default=1].  If you have floating
                 point inputs in range 0..1, then using
                 '-scale 255' would make a lot of sense.
  -mask mset  = Only output nonzero values where the mask
                 dataset 'mset' is nonzero.
  -fim        = Write result as a 'fim' type dataset.
                 [this is the default]
  -anat       = Write result as an anatomical type dataset.
Notes:
* Input datasets must be byte-, short-, or float-valued.
* You might calculate the component datasets using 3dcalc.
* You can also create RGB-valued datasets in to3d, using
   2D raw PPM image files as input, or the 3Dr: format.
* RGB fim overlays are transparent in AFNI in voxels where all
   3 bytes are zero - that is, it won't overlay solid black.
* At present, there is limited support for RGB datasets.
   About the only thing you can do is display them in 2D
   slice windows in AFNI.
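
A minimal usage sketch, assuming three hypothetical float datasets
'Fr', 'Fg', and 'Fb' with values in the range 0..1:

 # merge the 3 components, scaling 0..1 floats up to 0..255
 3dThreetoRGB -prefix Frgb -scale 255 Fr+orig Fg+orig Fb+orig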

-- RWCox - April 2002

++ Compile date = Mar 13 2009




AFNI program: 3dToutcount
Usage: 3dToutcount [options] dataset
Calculates the number of 'outliers' in a 3D+time dataset, at each
time point, and writes the results to stdout.

Options:
 -mask mset = Only count voxels in the mask dataset.
 -qthr q    = Use 'q' instead of 0.001 in the calculation
                of alpha (below): 0 < q < 1.

 -autoclip }= Clip off 'small' voxels (as in 3dClipLevel);
 -automask }=   you can't use this with -mask!

 -range     = Print out median+3.5*MAD of outlier count with
                each time point; use with 1dplot as in
                3dToutcount -range fred+orig | 1dplot -stdin -one
 -save ppp  = Make a new dataset, and save the outlier Q in each
                voxel, where Q is calculated from voxel value v by
                Q = -log10(qg(abs((v-median)/(sqrt(PI/2)*MAD))))
             or Q = 0 if v is 'close' to the median (not an outlier).
                That is, 10**(-Q) is roughly the p-value of value v
                under the hypothesis that the v's are iid normal.
              The prefix of the new dataset (float format) is 'ppp'.

 -polort nn = Detrend each voxel time series with polynomials of
                order 'nn' prior to outlier estimation.  Default
                value of nn=0, which means just remove the median.
                Detrending is done with L1 regression, not L2.

OUTLIERS are defined as follows:
 * The trend and MAD of each time series are calculated.
   - MAD = median absolute deviation
         = median absolute value of time series minus trend.
 * In each time series, points that are 'far away' from the
    trend are called outliers, where 'far' is defined by
      alpha * sqrt(PI/2) * MAD
      alpha = qginv(0.001/N) (inverse of reversed Gaussian CDF)
      N     = length of time series
 * Some outliers are to be expected, but if a large fraction of the
    voxels in a volume are called outliers, you should investigate
    the dataset more fully.

Since the results are written to stdout, you probably want to redirect
them to a file or another program, as in this example:
  3dToutcount -automask v1+orig | 1dplot -stdin

NOTE: also see program 3dTqual for a similar quality check.
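
A sketch combining the options above ('Fq' is a hypothetical
prefix for the '-save' dataset):

 # count outliers after quadratic detrending, saving the Q values
 3dToutcount -automask -polort 2 -save Fq v1+orig | 1dplot -stdin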

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dTqual
Usage: 3dTqual [options] dataset
Computes a `quality index' for each sub-brick in a 3D+time dataset.
The output is a 1D time series with the index for each sub-brick.
The results are written to stdout.

Note that small values of the index are 'good', indicating that
the sub-brick is not very different from the norm.  The purpose
of this program is to provide a crude way of screening FMRI
time series for sporadic abnormal images, such as might be
caused by large subject head motion or scanner glitches.

Do not take the results of this program too literally.  It
is intended as a GUIDE to help you find data problems, and no
more.  It is not an assurance that the dataset is good, and
it may indicate problems where nothing is wrong.

Sub-bricks with index values much higher than others should be
examined for problems.  How you determine what 'much higher' means
is mostly up to you.  I suggest graphical inspection of the indexes
(cf. EXAMPLE, infra).  As a guide, the program will print (stderr)
the median quality index and the range median-3.5*MAD .. median+3.5*MAD
(MAD=Median Absolute Deviation).  Values well outside this range might
be considered suspect; if the quality index were normally distributed,
then values outside this range would occur only about 1% of the time.

OPTIONS:
  -spearman = Quality index is 1 minus the Spearman (rank)
               correlation coefficient of each sub-brick
               with the median sub-brick.
               [This is the default method.]
  -quadrant = Similar to -spearman, but using 1 minus the
               quadrant correlation coefficient as the
               quality index.

  -autoclip = Clip off low-intensity regions in the median sub-brick,
  -automask =  so that the correlation is only computed between
               high-intensity (presumably brain) voxels.  The
               intensity level is determined the same way that
               3dClipLevel works.  This prevents the vast number
               of nearly 0 voxels outside the brain from biasing
               the correlation coefficient calculations.

  -clip val = Clip off values below 'val' in the median sub-brick.

  -range    = Print the median-3.5*MAD and median+3.5*MAD values
               out with EACH quality index, so that they
               can be plotted (cf. Example, infra).
     Notes: * These values are printed to stderr in any case.
            * This is only useful for plotting with 1dplot.
            * The lower value median-3.5*MAD is never allowed
                to go below 0.

EXAMPLE:
   3dTqual -range -automask fred+orig | 1dplot -one -stdin
will calculate the time series of quality indexes and plot them
to an X11 window, along with the median+/-3.5*MAD bands.

NOTE: cf. program 3dToutcount for a somewhat different quality check.
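
To compare the two correlation methods directly, the indexes can
be captured to files and overlaid (a sketch; filenames are
hypothetical):

 3dTqual -spearman -automask fred+orig > Fsp.1D
 3dTqual -quadrant -automask fred+orig > Fqd.1D
 # plot both quality indexes on one graph
 1dplot -one Fsp.1D Fqd.1D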

-- RWCox - Aug 2001

++ Compile date = Mar 13 2009




AFNI program: 3dTshift
Usage: 3dTshift [options] dataset
Shifts voxel time series from the input dataset so that the separate
slices are aligned to the same temporal origin.  By default, uses the
slicewise shifting information in the dataset header (from the 'tpattern'
input to program to3d).

Method:  detrend -> interpolate -> retrend (optionally)

The input dataset can have a sub-brick selector attached, as documented
in '3dcalc -help'.

The output dataset time series will be interpolated from the input to
the new temporal grid.  This may not be the best way to analyze your
data, but it can be convenient.

Warnings:
* Please recall the phenomenon of 'aliasing': frequencies above 1/(2*TR) can't
  be properly interpolated.  For most 3D FMRI data, this means that cardiac
  and respiratory effects will not be treated properly by this program.

* The images at the beginning of a high-speed FMRI imaging run are usually
  of a different quality than the later images, due to transient effects
  before the longitudinal magnetization settles into a steady-state value.
  These images should not be included in the interpolation!  For example,
  if you wish to exclude the first 4 images, then the input dataset should
  be specified in the form 'prefix+orig[4..$]'.  Alternatively, you can
  use the '-ignore ii' option.

* It seems to be best to use 3dTshift before using 3dvolreg.

Options:
  -verbose      = print lots of messages while program runs

  -TR ddd       = use 'ddd' as the TR, rather than the value
                  stored in the dataset header using to3d.
                  You may attach the suffix 's' for seconds,
                  or 'ms' for milliseconds.

  -tzero zzz    = align each slice to time offset 'zzz';
                  the value of 'zzz' must be between the
                  minimum and maximum slice temporal offsets.
            N.B.: The default alignment time is the average
                  of the 'tpattern' values (either from the
                  dataset header or from the -tpattern option)

  -slice nnn    = align each slice to the time offset of slice
                  number 'nnn' - only one of the -tzero and
                  -slice options can be used.

  -prefix ppp   = use 'ppp' for the prefix of the output file;
                  the default is 'tshift'.

  -ignore ii    = Ignore the first 'ii' points. (Default is ii=0.)
                  The first ii values will be unchanged in the output
                  (regardless of the -rlt option).  They also will
                  not be used in the detrending or time shifting.

  -rlt          = Before shifting, the mean and linear trend
  -rlt+         = of each time series is removed.  The default
                  action is to add these back in after shifting.
                  -rlt  means to leave both of these out of the output
                  -rlt+ means to add only the mean back into the output
                  (cf. '3dTcat -help')

  -no_detrend   = Do not remove or restore linear trend.
                  Heptic becomes the default interpolation method.

  -Fourier = Use a Fourier method (the default: most accurate; slowest).
  -linear  = Use linear (1st order polynomial) interpolation (least accurate).
  -cubic   = Use the cubic (3rd order) Lagrange polynomial interpolation.
  -quintic = Use the quintic (5th order) Lagrange polynomial interpolation.
  -heptic  = Use the heptic (7th order) Lagrange polynomial interpolation.

  -tpattern ttt = use 'ttt' as the slice time pattern, rather
                  than the pattern in the input dataset header;
                  'ttt' can have any of the values that would
                  go in the 'tpattern' input to to3d, described below:

   alt+z = altplus   = alternating in the plus direction
   alt+z2            = alternating, starting at slice #1 instead of #0
   alt-z = altminus  = alternating in the minus direction
   alt-z2            = alternating, starting at slice #nz-2 instead of #nz-1
   seq+z = seqplus   = sequential in the plus direction
   seq-z = seqminus  = sequential in the minus direction
   @filename         = read temporal offsets from 'filename'

  For example if nz = 5 and TR = 1000, then the inter-slice
  time is taken to be dt = TR/nz = 200.  In this case, the
  slices are offset in time by the following amounts:

             S L I C E   N U M B E R
   tpattern    0   1   2   3   4   Comment
   --------- --- --- --- --- ---   -------------------------------
   altplus     0 600 200 800 400   Alternating in the +z direction
   alt+z2    400   0 600 200 800   Alternating, but starting at #1
   altminus  400 800 200 600   0   Alternating in the -z direction
   alt-z2    800 200 600   0 400   Alternating, starting at #nz-2 
   seqplus     0 200 400 600 800   Sequential  in the +z direction
   seqminus  800 600 400 200   0   Sequential  in the -z direction

  If @filename is used for tpattern, then nz ASCII-formatted numbers
  are read from the file.  These indicate the time offsets for each
  slice. For example, if 'filename' contains
     0 600 200 800 400
  then this is equivalent to 'altplus' in the above example.
  (nz = number of slices in the input dataset)

N.B.: if you are using -tpattern, make sure that the units supplied
      match the units of TR in the dataset header, or provide a
      new TR using the -TR option.

As a test of how well 3dTshift interpolates, you can take a dataset
that was created with '-tpattern alt+z', run 3dTshift on it, and
then run 3dTshift on the new dataset with '-tpattern alt-z' -- the
effect will be to reshift the dataset back to the original time
grid.  Comparing the original dataset to the shifted-then-reshifted
output will show where 3dTshift does a good job and where it does
a bad job.
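
A sketch of that round-trip test ('fred+orig' is a hypothetical
dataset assumed to carry an alt+z tpattern in its header):

 3dTshift -prefix Ffwd fred+orig
 3dTshift -tpattern alt-z -prefix Fback Ffwd+orig
 # difference image: small values indicate good interpolation
 3dcalc -a fred+orig -b Fback+orig -expr 'a-b' -prefix Fdiff -float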

-- RWCox - 31 October 1999

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dTsmooth
Usage: 3dTsmooth [options] dataset
Smooths each voxel time series in a 3D+time dataset and produces
as output a new 3D+time dataset (e.g., lowpass filter in time).

General Options:
  -prefix ppp  = Sets the prefix of the output dataset to be 'ppp'.
                   [default = 'smooth']
  -datum type  = Coerce output dataset to be stored as the given type.
                   [default = input data type]

Three Point Filtering Options [07 July 1999]
--------------------------------------------
The following options define the smoothing filter to be used.
All these filters  use 3 input points to compute one output point:
  Let a = input value before the current point
      b = input value at the current point
      c = input value after the current point
           [at the left end, a=b; at the right end, c=b]

  -lin = 3 point linear filter: 0.15*a + 0.70*b + 0.15*c
           [This is the default smoother]
  -med = 3 point median filter: median(a,b,c)
  -osf = 3 point order statistics filter:
           0.15*min(a,b,c) + 0.70*median(a,b,c) + 0.15*max(a,b,c)

  -3lin m = 3 point linear filter: 0.5*(1-m)*a + m*b + 0.5*(1-m)*c
              Here, 'm' is a number strictly between 0 and 1.

General Linear Filtering Options [03 Mar 2001]
----------------------------------------------
  -hamming N  = Use N point Hamming or Blackman windows.
  -blackman N     (N must be odd and bigger than 1.)
  -custom coeff_filename.1D (odd # of coefficients must be in a 
                             single column in ASCII file)
   (-custom added Jan 2003)
    WARNING: If you use long filters, you do NOT want to include the
             large early images in the program.  Do something like
                3dTsmooth -hamming 13 'fred+orig[4..$]'
             to eliminate the first 4 images (say).
 The following options determine how the general filters treat
 time points before the beginning and after the end:
  -EXTEND = BEFORE: use the first value; AFTER: use the last value
  -ZERO   = BEFORE and AFTER: use zero
  -TREND  = compute a linear trend, and extrapolate BEFORE and AFTER
 The default is -EXTEND.  These options do NOT affect the operation
 of the 3 point filters described above, which always use -EXTEND.
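
A sketch of '-custom' usage (the filter weights and dataset name
are hypothetical):

 # write 5 filter coefficients as a single column
 echo '0.1 0.2 0.4 0.2 0.1' > Frow.1D
 1dtranspose Frow.1D Fcol.1D
 3dTsmooth -custom Fcol.1D -prefix Fsmooth 'fred+orig[4..$]'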

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dTsort
Usage: 3dTsort [options] dataset
Sorts each voxel's time series and produces a new dataset.

Options:
 -prefix p = use string 'p' for the prefix of the
               output dataset [DEFAULT = 'tsort']
 -inc      = sort into increasing order [default]
 -dec      = sort into decreasing order
 -rank     = output rank instead of sorted values
             ranks range from 1 to Nvals
 -ind      = output sorting index. (0 to Nvals -1)
             See example below.
 -val      = output sorted values (default)
 -datum D  = Coerce the output data to be stored as 
             the given type D, which may be  
             byte, short, or float (default).         

Notes:
* Each voxel is sorted separately.
* Sub-brick labels are not rearranged.
* This program is useful only in limited cases.
   It was written to sort the -stim_times_IM
   beta weights output by 3dDeconvolve.
* Also see program 1dTsort.

Examples:
setenv AFNI_1D_TIME YES
echo '8 6 3 9 2 7' > test.1D
    3dTsort -overwrite test.1D 
    1dcat tsort.1D

    3dTsort -overwrite -rank test.1D 
    1dcat tsort.1D

    3dTsort -overwrite -ind test.1D 
    1dcat tsort.1D

    3dTsort -overwrite -dec test.1D 
    1dcat tsort.1D


++ Compile date = Mar 13 2009




AFNI program: 3dTstat
Usage: 3dTstat [options] dataset
Computes one or more voxel-wise statistics for a 3D+time dataset
and stores them in a bucket dataset.  If no statistic option is
given, computes just the mean of each voxel time series.
Multiple statistics options may be given, and will result in
a multi-volume dataset.

Statistics Options:
 -mean   = compute mean of input voxels
 -sum    = compute sum of input voxels
 -abssum = compute absolute sum of input voxels
 -slope  = compute mean slope of input voxels vs. time
 -sos    = compute sum of squares
 -stdev  = compute standard deviation of input voxels
             [N.B.: this is computed after    ]
             [      the slope has been removed]
 -cvar   = compute coefficient of variation of input
             voxels = stdev/fabs(mean)
   **N.B.: You can add NOD to the end of the above 2
           options only, to turn off detrending, as in
             -stdevNOD  and/or  -cvarNOD

 -MAD    = compute MAD (median absolute deviation) of
             input voxels = median(|voxel-median(voxel)|)
             [N.B.: the trend is NOT removed for this]
 -DW    = compute Durbin-Watson Statistic of input voxels
             [N.B.: the trend IS removed for this]
 -median = compute median of input voxels  [undetrended]
 -min    = compute minimum of input voxels [undetrended]
 -max    = compute maximum of input voxels [undetrended]
 -absmax    = compute absolute maximum of input voxels [undetrended]
 -argmin    = index of minimum of input voxels [undetrended]
 -argmax    = index of maximum of input voxels [undetrended]
 -argabsmax = index of absolute maximum of input voxels [undetrended]
 -duration  = compute number of points around max above a threshold
              Use basepercent option to set limits
 -onset     = beginning of duration around max where value
              exceeds basepercent
 -offset    = end of duration around max where value
              exceeds basepercent
 -centroid  = compute centroid of data time curves
              (sum(i*f(i)) / sum(f(i)))
 -centduration = compute duration using centroid's index as center
 -nzmean    = compute mean of non-zero voxels

 -autocorr n = compute autocorrelation function and return
               first n coefficients
 -autoreg n = compute autoregression coefficients and return
               first n coefficients
   [N.B.: -autocorr 0 and/or -autoreg 0 will return number
          coefficients equal to the length of the input data]

 -accumulate = accumulate time series values (partial sums)
               val[i] = sum old_val[t] over t = 0..i
               (output length = input length)

 ** If no statistic option is given, then '-mean' is assumed **

Other Options:
 -prefix p = use string 'p' for the prefix of the
               output dataset [DEFAULT = 'stat']
 -datum d  = use data type 'd' for the type of storage
               of the output, where 'd' is one of
               'byte', 'short', or 'float' [DEFAULT=float]
 -basepercent nn = percentage of maximum for duration calculation

If you want statistics on a detrended dataset and the option
doesn't allow that, you can use program 3dDetrend first.

The output is a bucket dataset.  The input dataset may
use a sub-brick selection list, as in program 3dcalc.
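
A sketch of the 3dDetrend suggestion above (dataset and prefix
names are hypothetical):

 # remove a quadratic trend, then compute the MAD of the residue
 3dDetrend -polort 2 -prefix Fdet fred+orig
 3dTstat -MAD -prefix Fmad Fdet+orig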

++ Compile date = Mar 13 2009




AFNI program: 3dTwotoComplex
Usage #1: 3dTwotoComplex [options] dataset
Usage #2: 3dTwotoComplex [options] dataset1 dataset2

Converts 2 sub-bricks of input to a complex-valued dataset.
* If you have 1 input dataset, then sub-bricks [0..1] are
    used to form the 2 components of the output.
* If you have 2 input datasets, then the [0] sub-brick of
    each is used to form the components.
* Complex datasets have two 32-bit float components per voxel.

Options:
  -prefix ppp = Write output into dataset with prefix 'ppp'.
                  [default='cmplx']
  -RI         = The 2 inputs are real and imaginary parts.
                  [this is the default]
  -MP         = The 2 inputs are magnitude and phase.
                  [phase is in radians, please!]
  -mask mset  = Only output nonzero values where the mask
                  dataset 'mset' is nonzero.
Notes:
* Input datasets must be byte-, short-, or float-valued.
* You might calculate the component datasets using 3dcalc.
* At present, there is limited support for complex datasets.
    About the only thing you can do is display them in 2D
    slice windows in AFNI.
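
A minimal usage sketch, with hypothetical magnitude and phase
input datasets:

 # phase must be in radians for -MP
 3dTwotoComplex -MP -prefix Fcmplx mag+orig phase+orig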

-- RWCox - March 2006

++ Compile date = Mar 13 2009




AFNI program: 3dUndump
Usage: 3dUndump [options] infile ...
Assembles a 3D dataset from an ASCII list of coordinates and
(optionally) values.

Options:
  -prefix ppp  = 'ppp' is the prefix for the output dataset
                   [default = undump].
  -master mmm  = 'mmm' is the master dataset, whose geometry
    *OR*           will determine the geometry of the output.
  -dimen I J K = Sets the dimensions of the output dataset to
                   be I by J by K voxels.  (Each I, J, and K
                   must be >= 1.)  This option can be used to
                   create a dataset of a specific size for test
                   purposes, when no suitable master exists.
          ** N.B.: Exactly one of -master or -dimen must be given.
  -mask kkk    = This option specifies a mask dataset 'kkk', which
                   will control which voxels are allowed to get
                   values set.  If the mask is present, only
                   voxels that are nonzero in the mask can be
                   set in the new dataset.
                   * A mask can be created with program 3dAutomask.
                   * Combining a mask with sphere insertion makes
                     a lot of sense (to me, at least).
  -datum type  = 'type' determines the voxel data type of the
                   output, which may be byte, short, or float
                   [default = short].
  -dval vvv    = 'vvv' is the default value stored in each
                   input voxel that does not have a value
                   supplied in the input file [default = 1].
  -fval fff    = 'fff' is the fill value, used for each voxel
                   in the output dataset that is NOT listed
                   in the input file [default = 0].
  -ijk         = Coordinates in the input file are (i,j,k) index
       *OR*        triples, as might be output by 3dmaskdump.
  -xyz         = Coordinates in the input file are (x,y,z)
                   spatial coordinates, in mm.  If neither
                   -ijk nor -xyz is given, the default is -ijk.
          ** N.B.: -xyz can only be used with -master. If -dimen
                   is used to specify the size of the output dataset,
                   (x,y,z) coordinates are not defined (until you
                   use 3drefit to define the spatial structure).
  -srad rrr    = Specifies that a sphere of radius 'rrr' will be
                   filled about each input (x,y,z) or (i,j,k) voxel.
                   If the radius is not given, or is 0, then each
                   input data line sets the value in only one voxel.
                   * If '-master' is used, then 'rrr' is in mm.
                   * If '-dimen' is used, then 'rrr' is in voxels.
  -orient code = Specifies the coordinate order used by -xyz.
                   The code must be 3 letters, one each from the pairs
                   {R,L} {A,P} {I,S}.  The first letter gives the
                   orientation of the x-axis, the second the orientation
                   of the y-axis, the third the z-axis:
                     R = right-to-left         L = left-to-right
                     A = anterior-to-posterior P = posterior-to-anterior
                     I = inferior-to-superior  S = superior-to-inferior
                   If -orient isn't used, then the coordinate order of the
                   -master dataset is used to interpret (x,y,z) inputs.
          ** N.B.: If -dimen is used (which implies -ijk), then the
                   only use of -orient is to specify the axes ordering
                   of the output dataset.  If -master is used instead,
                   the output dataset's axes ordering is the same as the
                   -master dataset's, regardless of -orient.
  -head_only   =  A 'secret' option for creating only the .HEAD file which
                  gets exploited by the AFNI matlab library function
                  New_HEAD.m

Input File Format:
 The input file(s) are ASCII files, with one voxel specification per
 line.  A voxel specification is 3 numbers (-ijk or -xyz coordinates),
 with an optional 4th number giving the voxel value.  For example:

   1 2 3 
   3 2 1 5
   5.3 6.2 3.7
   // this line illustrates a comment

 The first line puts a voxel (with value given by -dval) at point
 (1,2,3).  The second line puts a voxel (with value 5) at point (3,2,1).
 The third line puts a voxel (with value given by -dval) at point
 (5.3,6.2,3.7).  If -ijk is in effect, and fractional coordinates
 are given, they will be rounded to the nearest integers; for example,
 the third line would be equivalent to (i,j,k) = (5,6,4).

Notes:
* This program creates a 1 sub-brick file.  You can 'glue' multiple
   files together using 3dbucket or 3dTcat to make multi-brick datasets.

* If one input filename is '-', then stdin will be used for input.

* If no input files are given, an 'empty' dataset is created.
   For example, to create an all zero dataset with 1 million voxels:
     3dUndump -dimen 100 100 100 -prefix AllZero

* By default, the output dataset is of type '-fim', unless the -master
   dataset is an anat type. You can change the output type using 3drefit.

* You could use program 1dcat to extract specific columns from a
   multi-column rectangular file (e.g., to get a specific sub-brick
   from the output of 3dmaskdump), and use the output of 1dcat as input
   to this program.

* [19 Feb 2004] The -mask and -srad options were added this day.
   Also, a fifth value on an input line, if present, is taken as a
   sphere radius to be used for that input point only.  Thus, input
      3.3 4.4 5.5 6.6 7.7
   means to put the value 6.6 into a sphere of radius 7.7 mm centered
   about (x,y,z)=(3.3,4.4,5.5).

* [10 Nov 2008] Commas (',') inside an input line are converted to
   spaces (' ') before the line is interpreted.  This feature is for
   convenience for people writing files in CSV (Comma Separated Values)
   format.

* [31 Dec 2008] Inputs of 'NaN' are explicitly converted to zero, and
  a warning message is printed.  AFNI programs do not deal with NaN
  floating point values!
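
A sketch of the per-point sphere feature described above (the
master dataset name is hypothetical):

 # value 6.6 in a sphere of radius 7.7 mm about (x,y,z)=(3.3,4.4,5.5)
 echo '3.3 4.4 5.5 6.6 7.7' > Fpts.txt
 3dUndump -master anat+orig -xyz -prefix Fball Fpts.txt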

-- RWCox -- October 2000

++ Compile date = Mar 13 2009




AFNI program: 3dUniformize
++ 3dUniformize: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. D. Ward
This program corrects for image intensity non-uniformity.

Usage: 
3dUniformize  
-anat filename    Filename of anat dataset to be corrected            
                                                                      
[-clip_low LOW]   Use LOW as the voxel intensity separating           
                  brain from air.                                     
[-clip_high HIGH] Do not include voxels with intensity higher         
                  than HIGH in calculations.                          
[-auto_clip]      Automatically set the clip levels.                  
                  LOW in a procedure similar to 3dClipLevel,          
                  HIGH is set to 3*LOW.                               
NOTE: The default (historic) clip_low value is 25. But that only works
      for certain types of input data and can result in bad output    
      depending on the range of values in the input dataset.          
      It is best you use -clip_low or -auto_clip options instead.     
[-niter NITER]    Set the number of iterations for concentrating PDF
                  Default is 5.
[-quiet]          Suppress output to screen                           
                                                                      
-prefix pname     Prefix name for file to contain corrected image     
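
A minimal usage sketch (dataset and prefix names are hypothetical):

 # let the program pick the clip levels, per the NOTE above
 3dUniformize -anat anat+orig -auto_clip -prefix Fanat_un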

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dUpsample
Usage: 3dUpsample [options] n dataset

* Upsamples a 3D+time dataset, in the time direction,
   by a factor of 'n'.
* The value of 'n' must be between 2 and 32 (inclusive).
* The output dataset is always in float format.
   [Because I'm lazy scum, that's why.]

Options:
--------
 -1, -one, or -linear = Use linear interpolation.  Otherwise,
                        7th order polynomial interpolation is used.

 -prefix pp = Define the prefix name of the output dataset.
              [default prefix is 'Upsam']

 -verb      = Be eloquently and mellifluously verbose.

Example:
--------
 3dUpsample -prefix LongFred 5 Fred+orig

Nota Bene:
----------
* You should not use this for files that were 3dTcat-ed across
   imaging run boundaries, since that will result in interpolating
   between non-contiguous time samples!
* If the input has M time points, the output will have n*M time
   points.  The last n-1 of them will be past the end of the original
   time series.
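
If runs were glued together with 3dTcat, a hedged workaround is to
split them apart and upsample each run separately (run boundaries
and names here are hypothetical):

 # split off one 100-point run, then upsample it alone
 3dTcat -prefix Frun1 'Fred+orig[0..99]'
 3dUpsample -prefix FupRun1 5 Frun1+orig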

--- RW Cox - April 2008

++ Compile date = Mar 13 2009




AFNI program: 3dVol2Surf

3dVol2Surf - map data from a volume domain to a surface domain

  usage: 3dVol2Surf [options] -spec SPEC_FILE -sv SURF_VOL \
                    -grid_parent AFNI_DSET -map_func MAP_FUNC

This program is used to map data values from an AFNI volume
dataset to a surface dataset.  A filter may be applied to the
volume data to produce the value(s) for each surface node.

The surface and volume domains are spatially matched via the
'surface volume' AFNI dataset.  This gives each surface node xyz
coordinates, which are then matched to the input 'grid parent'
dataset.  This grid parent is an AFNI dataset containing the
data values destined for output.

Typically, two corresponding surfaces will be input (via the
spec file and the '-surf_A' and '-surf_B' options), along with
a mapping function and relevant options.  The mapping function
will act as a filter over the values in the AFNI volume.

Note that an alternative to using a second surface with the
'-surf_B' option is to define the second surface by using the
normals from the first surface.  By default, the second surface
would be defined at a distance of 1mm along the normals, but the
user may modify the applied distance (and direction).  See the
'-use_norms' and '-norm_len' options for more details.

For each pair of corresponding surface nodes, let NA be the node
on surface A (such as a white/grey boundary) and NB be the
corresponding node on surface B (such as a pial surface).  The
filter is applied to the volume data values along the segment
from NA to NB (consider the average or maximum as examples of
filters).

Note: if either endpoint of a segment is outside the grid parent
      volume, that node (pair) will be skipped.

Note: surface A corresponds to the required '-surf_A' argument,
      while surface B corresponds to '-surf_B'.

By default, this segment only consists of the endpoints, NA and
NB (the actual nodes on the two surfaces).  However the number
of evenly spaced points along the segment may be specified with
the -f_steps option, and the actual locations of NA and NB may
be altered with any of the -f_pX_XX options, covered below.

As an example, for each node pair, one could output the average
value from some functional dataset along a segment of 10 evenly
spaced points, where the segment endpoints are defined by the
xyz coordinates of the nodes.  This is example 3, below.

The mapping function (i.e. filter) is a required parameter to
the program.

Brief descriptions of the current mapping functions are as
follows.  These functions are defined over a segment of points.

    ave       : output the average of all voxel values along the
                segment
    mask      : output the voxel value for the trivial case of a
                segment - defined by a single surface point
    median    : output the median value from the segment
    midpoint  : output the dataset value at the segment midpoint
    mode      : output the mode of the values along the segment
    max       : output the maximum volume value over the segment
    max_abs   : output the dataset value with max abs over seg
    min       : output the minimum volume value over the segment
    seg_vals  : output _all_ volume values over the segment (one
                sub-brick only)

  --------------------------------------------------

  examples:

    1. Apply a single surface mask to output volume values over
       each surface node.  Output is one value per sub-brick
       (per surface node).

    3dVol2Surf                                \
       -spec         fred.spec                \
       -surf_A       smoothwm                 \
       -sv           fred_anat+orig           \
       -grid_parent  fred_anat+orig           \
       -map_func     mask                     \
       -out_1D       fred_anat_vals.1D

    2. Apply a single surface mask to output volume values over
       each surface node.  In this case restrict input to the
       mask implied by the -cmask option.  Supply additional
       debug output, and more for surface node 1874

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -sv           fred_anat+orig                           \
       -grid_parent 'fred_epi+orig[0]'                        \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)'  \
       -map_func     mask                                     \
       -debug        2                                        \
       -dnode        1874                                     \
       -out_niml     fred_epi_vals.niml.dset

    3. Given a pair of related surfaces, for each node pair,
       break the connected line segment into 10 points, and
       compute the average dataset value over those points.
       Since the index is nodes, each of the 10 points will be
       part of the average.  This could be changed so that only
       values from distinct volume nodes are considered (by
       changing the -f_index from nodes to voxels).  Restrict
       input voxels to those implied by the -cmask option
       Output is one average value per sub-brick (per surface
       node).

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_niml     fred_func_ave.niml.dset

    4. Similar to example 3, but restrict the output columns to
       only node indices and values (i.e. skip 1dindex, i, j, k
       and vals).

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -skip_col_1dindex                                      \
       -skip_col_i                                            \
       -skip_col_j                                            \
       -skip_col_k                                            \
       -skip_col_vals                                         \
       -out_niml     fred_func_ave_short.niml.dset

    5. Similar to example 3, but each of the node pair segments
       has grown by 10% on the inside of the first surface,
       and 20% on the outside of the second.  This is a 30%
       increase in the length of each segment.  To shorten the
       node pair segment, use a '+' sign for p1 and a '-' sign
       for pn.
       As an interesting side note, '-f_p1_fr 0.5 -f_pn_fr -0.5'
       would give a zero length vector identical to that of the
       'midpoint' filter.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      voxels                                   \
       -f_p1_fr      -0.1                                     \
       -f_pn_fr      0.2                                      \
       -out_niml     fred_func_ave2.niml.dset

    6. Similar to example 3, instead of computing the average
       across each segment (one average per sub-brick), output
       the volume value at _every_ point across the segment.
       The output here would be 'f_steps' values per node pair,
       though the output could again be restricted to unique
       voxels along each segment with '-f_index voxels'.
       Note that only sub-brick 0 will be considered here.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     seg_vals                                 \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_niml     fred_func_segvals_10.niml.dset

    7. Similar to example 6, but make sure there is output for
       every node pair in the surfaces.  Since it is expected
       that some nodes are out of bounds (meaning that they lie
       outside the domain defined by the grid parent dataset),
       the '-oob_value' option is added to include a default
       value of 0.0 in such cases.  And since it is expected
       that some node pairs are "out of mask" (meaning that
       their resulting segment lies entirely outside the cmask),
       the '-oom_value' was added to output the same default
       value of 0.0.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     seg_vals                                 \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -oob_value    0.0                                      \
       -oom_value    0.0                                      \
       -out_niml     fred_func_segvals_10_all.niml.dset

    8. This is a basic example of calculating the average along
       each segment, but where the segment is produced by only
       one surface, along with its set of surface normals.  The
       segments will be 2.5 mm in length.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_anat+orig                           \
       -use_norms                                             \
       -norm_len     2.5                                      \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_niml     fred_anat_norm_ave.2.5.niml.dset

    9. This is the same as example 8, but where the surface
       nodes are restricted to the range 1000..1999 via the
       options '-first_node' and '-last_node'.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_anat+orig                           \
       -first_node   1000                                     \
       -last_node    1999                                     \
       -use_norms                                             \
       -norm_len     2.5                                      \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_niml     fred_anat_norm_ave.2.5.niml.dset

   10. Create an EPI time-series surface dataset, suitable for
       performing single-subject processing on the surface.  So
       map a time-series onto each surface node.

       Note that any time shifting (3dTshift) or registration
       of volumes (3dvolreg) should be done before this step.

       After this step, the user can finish pre-processing with
       blurring (SurfSmooth) and scaling (3dTstat, 3dcalc),
       before performing the regression (3dDeconvolve).

    3dVol2Surf                                                \
       -spec                fred.spec                         \
       -surf_A              smoothwm                          \
       -surf_B              pial                              \
       -sv                  SurfVolAlndExp+orig               \
       -grid_parent         EPI_all_runs+orig                 \
       -map_func            ave                               \
       -f_steps             15                                \
       -f_index             nodes                             \
       -outcols_NSD_format                                    \
       -out_niml            EPI_runs.niml.dset

  --------------------------------------------------

  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE        : SUMA spec file

        e.g. -spec fred.spec

        The surface specification file contains the list of
        mappable surfaces that are used.

        See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.

    -surf_A SURF_NAME      : name of surface A (from spec file)
    -surf_B SURF_NAME      : name of surface B (from spec file)

        e.g. -surf_A smoothwm
        e.g. -surf_A lh.smoothwm
        e.g. -surf_B lh.pial

        This is used to specify which surface(s) will be used by
        the program.  The '-surf_A' parameter is required, as it
        specifies the first surface.  Since '-surf_B' specifies
        an optional second surface, it is not required.

        Note that any need for '-surf_B' may be fulfilled using
        the '-use_norms' option.

        Note that any name provided must be in the spec file,
        uniquely matching the name of a surface node file (such
        as lh.smoothwm.asc, for example).  Note that if both
        hemispheres are represented in the spec file, then there
        may be both lh.pial.asc and rh.pial.asc, for instance.
        In such a case, 'pial' would not uniquely determine a
        surface, but the name 'lh.pial' would.

    -sv SURFACE_VOLUME     : AFNI volume dataset

        e.g. -sv fred_anat+orig

        This is the AFNI dataset that the surface is mapped to.
        This dataset is used for the initial surface node to xyz
        coordinate mapping, in the Dicom orientation.

    -grid_parent AFNI_DSET : AFNI volume dataset

        e.g. -grid_parent fred_function+orig

        This dataset is used as a grid and orientation master
        for the output (i.e. it defines the volume domain).
        It is also the source of the output data values.

    -map_func MAP_FUNC     : filter for values along the segment

        e.g. -map_func ave
        e.g. -map_func ave -f_steps 10
        e.g. -map_func ave -f_steps 10 -f_index nodes

        The current mapping function for 1 surface is:

          mask     : For each surface xyz location, output the
                     dataset values of each sub-brick.

        Most mapping functions are defined for 2 related input
        surfaces (such as white/grey boundary and pial).  For
        each node pair, the function will be performed on the
        values from the 'grid parent dataset', and along the
        segment connecting the nodes.

          ave      : Output the average of the dataset values
                     along the segment.

          max      : Output the maximum dataset value along the
                     connecting segment.

          max_abs  : Output the dataset value with the maximum
                     absolute value along the segment.

          median   : Output the median of the dataset values
                     along the connecting segment.

          midpoint : Output the dataset value with xyz
                     coordinates at the midpoint of the nodes.

          min      : Output the minimum dataset value along the
                     connecting segment.

          mode     : Output the mode of the dataset values along
                     the connecting segment.

          seg_vals : Output all of the dataset values along the
                     connecting segment.  Here, only sub-brick
                     number 0 will be considered.

  ------------------------------

  options specific to functions on 2 surfaces:

          -f_steps NUM_STEPS :

                     Use this option to specify the number of
                     evenly spaced points along each segment.
                     The default is 2 (i.e. just use the two
                     surface nodes as endpoints).

                     e.g.     -f_steps 10
                     default: -f_steps 2

          -f_index TYPE :

                     This option specifies whether to use all
                     segment point values in the filter (using
                     the 'nodes' TYPE), or to use only those
                     corresponding to unique volume voxels (by
                     using the 'voxels' TYPE).

                     For instance, when taking the average along
                     one node pair segment using 10 node steps,
                     perhaps 3 of those nodes may occupy one
                     particular voxel.  In this case, does the
                     user want the voxel counted only once, or 3
                     times?  Each way makes sense.
                     
                     Note that this will only make sense when
                     used along with the '-f_steps' option.
                     
                     Possible values are "nodes", "voxels".
                     The default value is voxels.  So each voxel
                     along a segment will be counted only once.
                     
                     e.g.  -f_index nodes
                     e.g.  -f_index voxels
                     default: -f_index voxels

          -f_keep_surf_order :

                     Deprecated.

                     See required arguments -surf_A and -surf_B,
                     above.

          Note: The following -f_pX_XX options are used to alter
                the lengths and locations of the computational
                segments.  Recall that by default, segments are
                defined using the node pair coordinates as
                endpoints.  And the direction from p1 to pn is
                from the inner surface to the outer surface.

          -f_p1_mm DISTANCE :

                     This option is used to specify a distance
                     in millimeters to add to the first point of
                     each line segment (in the direction of the
                     second point).  DISTANCE can be negative
                     (which would set p1 to be farther from pn
                     than before).

                     For example, if a computation is over the
                     grey matter (from the white matter surface
                     to the pial), and it is wished to increase
                     the range by 1mm, set this DISTANCE to -1.0
                     and the DISTANCE in -f_pn_mm to 1.0.

                     e.g.  -f_p1_mm -1.0
                     e.g.  -f_p1_mm -1.0 -f_pn_mm 1.0

          -f_pn_mm DISTANCE :

                     Similar to -f_p1_mm, this option is used
                     to specify a distance in millimeters to add
                     to the second point of each line segment.
                     Note that this is in the same direction as
                     above, from point p1 to point pn.
                     
                     So a positive DISTANCE, for this option,
                     would set pn to be farther from p1 than
                     before, and a negative DISTANCE would set
                     it to be closer.

                     e.g.  -f_pn_mm 1.0
                     e.g.  -f_p1_mm -1.0 -f_pn_mm 1.0

          -f_p1_fr FRACTION :

                     Like the -f_pX_mm options above, this
                     is used to specify a change to point p1, in
                     the direction of point pn, but the change
                     is a fraction of the original distance,
                     not a pure change in millimeters.
                     
                     For example, suppose one wishes to do a
                     computation based on the segments spanning
                     the grey matter, but to add 20% to either
                     side.  Then use -0.2 and 0.2:

                     e.g.  -f_p1_fr -0.2
                     e.g.  -f_p1_fr -0.2 -f_pn_fr 0.2

          -f_pn_fr FRACTION :

                     See -f_p1_fr above.  Note again that the
                     FRACTION is in the direction from p1 to pn.
                     So to extend the segment past pn, this
                     FRACTION will be positive (and to reduce
                     the segment back toward p1, this -f_pn_fr
                     FRACTION will be negative).

                     e.g.  -f_pn_fr 0.2
                     e.g.  -f_p1_fr -0.2 -f_pn_fr 0.2

                     Just for entertainment, one could reverse
                     the order in which the segment points are
                     considered by adjusting p1 to be pn, and
                     pn to be p1.  This could be done by adding
                     a fraction of 1.0 to p1 and by subtracting
                     a fraction of 1.0 from pn.

                     e.g.  -f_p1_fr 1.0 -f_pn_fr -1.0
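
          Note: As an illustrative sketch (the -spec, -sv, -surf_A,
                -surf_B and -grid_parent arguments are described
                elsewhere in this help; file names here are
                hypothetical), a grey matter average computed over
                segments extended by 20% on each side might look
                like:

                3dVol2Surf -spec fred.spec                \
                           -surf_A smoothwm -surf_B pial  \
                           -sv fred_anat+orig             \
                           -grid_parent fred_func+orig    \
                           -map_func ave -f_steps 10      \
                           -f_index nodes                 \
                           -f_p1_fr -0.2 -f_pn_fr 0.2     \
                           -out_1D fred_gm_ave.1D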

  ------------------------------

  options specific to use of normals:

    Notes:

      o Using a single surface with its normals for segment
        creation can be done in lieu of using two surfaces.

      o Normals at surface nodes are defined by the average of
        the normals of the triangles including the given node.

      o The default normals have a consistent direction, but it
        may be opposite of what it should be.  For this reason,
        the direction is verified by default, and may be negated
        internally.  See the '-keep_norm_dir' option for more
        information.

    -use_norms             : use normals for second surface

        Segments are usually defined by connecting corresponding
        node pairs from two surfaces.  With this option the
        user can use one surface, along with its normals, to
        define the segments.

        By default, each segment will be 1.0 millimeter long, in
        the direction of the normal.  The '-norm_len' option
        can be used to alter this default action.

    -keep_norm_dir         : keep the direction of the normals

        Normal directions are verified by checking that the
        normals of the outermost 6 points point away from the
        center of mass.  If they point inward instead, then
        they are negated.

        This option will override the directional check, and
        use the normals as they come.

        See also -reverse_norm_dir, below.

    -norm_len LENGTH       : use LENGTH for node normals

        e.g.     -norm_len  3.0
        e.g.     -norm_len -3.0
        default: -norm_len  1.0

        For use with the '-use_norms' option, this allows the
        user to specify a directed distance to use for segments
        based on the normals.  So for each node on a surface,
        the computation segment will be from the node, in the
        direction of the normal, a signed distance of LENGTH.

        A negative LENGTH means to use the opposite direction
        from the normal.

        The '-surf_B' option is not allowed with the use of
        normals.

    -reverse_norm_dir      : reverse the normal directions

        Normal directions are verified by checking that the
        normals of the outermost 6 points point away from the
        center of mass.  If they point inward instead, then
        they are negated.

        This option will override the directional check, and
        reverse the direction of the normals as they come.

        See also -keep_norm_dir, above.
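
    For example, here is a sketch (file names hypothetical; -spec,
    -sv and -grid_parent are described elsewhere in this help) of
    an average computed along 3.0 mm normal-based segments from a
    single surface:

        3dVol2Surf -spec fred.spec -surf_A smoothwm   \
                   -sv fred_anat+orig                 \
                   -grid_parent fred_func+orig        \
                   -use_norms -norm_len 3.0           \
                   -map_func ave -f_steps 10          \
                   -out_1D fred_norm_ave.1D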

  ------------------------------

  output options:

    -debug LEVEL           :  (optional) verbose output

        e.g. -debug 2

        This option is used to print out status information 
        during the execution of the program.  Current levels are
        from 0 to 5.

    -first_node NODE_NUM   : skip all previous nodes

        e.g. -first_node 1000
        e.g. -first_node 1000 -last_node 1999

        Restrict surface node output to those with indices at
        least as large as NODE_NUM.  In the first example, the
        first 1000 nodes are ignored (those with indices from 0
        through 999).

        See also, '-last_node'.

    -dnode NODE_NUM        :  (optional) node for debug

        e.g. -dnode 1874

        This option is used to print out status information 
        for node NODE_NUM.

    -out_1D OUTPUT_FILE    : specify a 1D file for the output

        e.g. -out_1D mask_values_over_dataset.1D

        This is where the user will specify which file they want
        the output to be written to.  In this case, the output
        will be in readable, column-formatted ASCII text.

        Note : the output file should not yet exist.
             : -out_1D or -out_niml must be used

    -out_niml OUTPUT_FILE  : specify a niml file for the output

        e.g. -out_niml mask_values_over_dataset.niml.dset

        The user may use this option to get output in the form
        of a niml element, with binary data.  The output will
        contain (binary) columns of the form:

            node_index  value_0  value_1  value_2  ...

        A major difference between 1D output and niml output is
        that the value_0 column number will be 6 in the 1D case,
        but will be 2 in the niml case.  The index columns will
        not be used for niml output.
        It is possible to write niml datasets in both ASCII and
        BINARY formats; BINARY is recommended for large datasets.
        The environment variable AFNI_NIML_TEXT_DATA (which may
        also be set in .afnirc) controls whether output is
        ASCII (YES) or BINARY (NO).

        Note : the output file should not yet exist.
             : -out_1D or -out_niml must be used
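
        For example, here is a sketch (file names hypothetical)
        requesting binary niml output from the C shell:

            setenv AFNI_NIML_TEXT_DATA NO   # NO ==> binary output
            3dVol2Surf -spec fred.spec -surf_A smoothwm  \
                       -sv fred_anat+orig                \
                       -grid_parent fred_func+orig       \
                       -map_func ave -f_steps 10         \
                       -out_niml fred_ave.niml.dset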

    -help                  : show this help

        If you can't get help here, please get help somewhere.

    -hist                  : show revision history

        Display module history over time.

        See also, -v2s_hist

    -last_node NODE_NUM    : skip all following nodes

        e.g. -last_node 1999
        e.g. -first_node 1000 -last_node 1999

        Restrict surface node output to those with indices no
        larger than NODE_NUM.  In the first example, nodes above
        1999 are ignored (those with indices from 2000 on up).

        See also, '-first_node'.

    -no_headers            : do not output column headers

        Column header lines all begin with the '#' character.
        With the '-no_headers' option, these lines will not be
        output.

    -oob_index INDEX_NUM   : specify default index for oob nodes

        e.g.     -oob_index -1
        default: -oob_index  0

        By default, nodes which lie outside the box defined by
        the -grid_parent dataset are considered out of bounds,
        and are skipped.  If an out of bounds index is provided,
        or an out of bounds value is provided, such nodes will
        not be skipped, and will have indices and values output,
        according to the -oob_index and -oob_value options.
        
        This INDEX_NUM will be used for the 1dindex field, along
        with the i, j and k indices.
        

    -oob_value VALUE       : specify default value for oob nodes

        e.g.     -oob_value -999.0
        default: -oob_value    0.0

        See -oob_index, above.
        
        VALUE will be output for nodes which are out of bounds.

    -oom_value VALUE       : specify default value for oom nodes

        e.g. -oom_value -999.0
        e.g. -oom_value    0.0

        By default, node pairs defining a segment which gets
        completely obscured by a command-line mask (see -cmask)
        are considered "out of mask", and are skipped.

        If an out of mask value is provided, such nodes will not
        be skipped.  The output indices will come from the first
        segment point, mapped to the AFNI volume.  All output vN
        values will be the VALUE provided with this option.

        This option is meaningless without a '-cmask' option.
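
        For example, here is a sketch (mask expression and file
        names hypothetical) that keeps out-of-mask nodes in the
        output:

            3dVol2Surf -spec fred.spec                             \
                       -surf_A smoothwm -surf_B pial               \
                       -sv fred_anat+orig                          \
                       -grid_parent fred_func+orig                 \
                       -map_func ave -f_steps 10                   \
                       -cmask '-a fred_func+orig[0] -expr step(a)' \
                       -oom_value -999.0                           \
                       -out_1D fred_masked.1D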

    -outcols_afni_NSD      : output nodes and one result column
    -outcols_1_result      : output only one result column
    -outcols_results       : output only all result columns
    -outcols_NSD_format    : output nodes and all results
                             (NI_SURF_DSET format)

        These options are used to restrict output.  They are
        similar to the -skip_col_* options, but are used to
        choose columns to output (they are for convenience, so
        the user need not apply many -skip_col options).

        see also: -skip_col_*

    -save_seg_coords FILE  : save segment coordinates to FILE

        e.g. -save_seg_coords seg.coords.1D

        Each node that has output values computed along a valid
        segment (i.e. not out-of-bounds or out-of-mask) has its
        index written to this file, along with all applied
        segment coordinates.

    -skip_col_nodes        : do not output node column
    -skip_col_1dindex      : do not output 1dindex column
    -skip_col_i            : do not output i column
    -skip_col_j            : do not output j column
    -skip_col_k            : do not output k column
    -skip_col_vals         : do not output vals column

        These options are used to restrict output.  Each option
        will prevent the program from writing that column of
        output to the 1D file.

        For now, the only effect these options have on the
        niml output is to skip nodes or results (all other
        columns are skipped by default).

        see also: -outcols_*

    -v2s_hist              : show revision history for library

        Display vol2surf library history over time.

        See also, -hist

    -version               : show version information

        Show version and compile date.

  ------------------------------

  general options:

    -cmask MASK_COMMAND    : (optional) command for dataset mask

        e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'

        This option will produce a mask to be applied to the
        input AFNI dataset.  Note that this mask should form a
        single sub-brick.

        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!)).

        See '3dmaskdump -help' for more information.

    -gp_index SUB_BRICK    : choose grid_parent sub-brick

        e.g. -gp_index 3

        This option allows the user to choose only a single
        sub-brick from the grid_parent dataset for computation.
        Note that this option is virtually useless when using
        the command-line, as the user can more directly do this
        via brick selectors, e.g. func+orig'[3]'.
        
        This option was written for the afni interface.

  --------------------------------------------------

Output from the program defaults to 1D format, in ASCII text.
For each node (pair) that results in output, there will be one
line, consisting of:

    node    : the index of the current node (or node pair)

    1dindex : the global index of the AFNI voxel used for output

              Note that for some filters (min, max, midpoint,
              median and mode) there is a specific location (and
              therefore voxel) that the result comes from.  It
              will be accurate (though median may come from one
              of two voxels that are averaged).

              For filters without a well-defined source (such as
              average or seg_vals), the 1dindex will come from
              the first point on the corresponding segment.

              Note: this will _not_ be output in the niml case.

    i j k   : the i j k indices matching 1dindex

              These indices are based on the orientation of the
              grid parent dataset.

              Note: these will _not_ be output in the niml case.

    vals    : the number of segment values applied to the filter

              Note that when -f_index is 'nodes', this will
              always be the same as -f_steps, except when using
              the -cmask option.  In that case, along a single 
              segment, some points may be in the mask, and some
              may not.

              When -f_index is 'voxels' and -f_steps is used,
              vals will often be much smaller than -f_steps.
              This is because many segment points may lie in a
              single voxel.

              Note: this will _not_ be output in the niml case.

    v0, ... : the requested output values

              These are the filtered values, usually one per
              AFNI sub-brick.  For example, if the -map_func
              is 'ave', then there will be one segment-based
              average output per sub-brick of the grid parent.

              In the case of the 'seg_vals' filter, however,
              there will be one output value per segment point
              (possibly further restricted to voxels).  Since
              output is not designed for a matrix of values,
              'seg_vals' is restricted to a single sub-brick.


  Author: R. Reynolds  - version  6.7 (Aug 23, 2006)

                (many thanks to Z. Saad and R.W. Cox)




AFNI program: 3dWarp

Usage: 3dWarp [options] dataset
Warp (spatially transform) a 3D dataset.
--------------------------
Transform Defining Options: [exactly one of these must be used]
--------------------------
  -matvec_in2out mmm = Read a 3x4 affine transform matrix+vector
                        from file 'mmm':
                         x_out = Matrix x_in + Vector

  -matvec_out2in mmm = Read a 3x4 affine transform matrix+vector
                         from file 'mmm':
                         x_in = Matrix x_out + Vector

     ** N.B.: The coordinate vectors described above are
               defined in DICOM ('RAI') coordinate order.
               (Also see the '-fsl_matvec' option, below.)
     ** N.B.: Using the special name 'IDENTITY' for 'mmm'
               means to use the identity matrix.
     ** N.B.: You can put the matrix on the command line
               directly by using an argument of the form
       'MATRIX(a11,a12,a13,a14,a21,a22,a23,a24,a31,a32,a33,a34)'
               in place of 'mmm', where the aij values are the
               matrix entries (aij = i-th row, j-th column),
               separated by commas.
             * You will need the 'forward single quotes' around
               the argument.
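
              * For example (dataset and prefix names here are
                hypothetical), a 5 mm shift along x (DICOM order)
                could be applied directly:

                  3dWarp -matvec_in2out \
                         'MATRIX(1,0,0,5, 0,1,0,0, 0,0,1,0)' \
                         -prefix anat_shift anat+orig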

  -tta2mni = Transform a dataset in Talairach-Tournoux Atlas
              coordinates to MNI-152 coordinates.
  -mni2tta = Transform a dataset in MNI-152 coordinates to
              Talairach-Tournoux Atlas coordinates.

  -matparent mset = Read in the matrix from WARPDRIVE_MATVEC_*
                     attributes in the header of dataset 'mset',
                     which must have been created by program
                     3dWarpDrive.  In this way, you can apply
                      a transformation matrix computed by
                      3dWarpDrive to another dataset.

     ** N.B.: The above option is analogous to the -rotparent
                option in program 3drotate.  Use of -matparent
                should be limited to datasets whose spatial
                coordinate system corresponds to that which
                was used for input to 3dWarpDrive (i.e., the
                input to 3dWarp should overlay properly with
                the input to 3dWarpDrive that generated the
                -matparent dataset).

  -card2oblique obl_dset or 
  -oblique_parent obl_dset = Read in the oblique transformation matrix
     from an oblique dataset and make cardinal dataset oblique to match.
  -deoblique or
  -oblique2card = Transform an oblique dataset to a cardinal dataset.
     Both of these oblique transformation options require a new grid
     for the output, as specified with the -newgrid or -gridset
     options; otherwise, a new grid will be assigned based on the
     minimum voxel spacing.
    ** N.B.: EPI time series data should be time shifted with
             3dTshift before rotating the volumes to a cardinal
             direction.

Sample usages:
 3dWarpDrive -affine_general -base d1+orig -prefix d2WW -twopass -input d2+orig
 3dWarp -matparent d2WW+orig -prefix epi2WW epi2+orig

 3dWarp -card2oblique oblique_epi+orig -prefix oblique_anat card_anat+orig
 3dWarp -oblique2card -prefix card_epi_tshift -newgrid 3.5 epi_tshift+orig


-----------------------
Other Transform Options:
-----------------------
  -linear     }
  -cubic      } = Chooses spatial interpolation method.
  -NN         } =   [default = linear]
  -quintic    }

  -fsl_matvec   = Indicates that the matrix file 'mmm' uses FSL
                    ordered coordinates ('LPI').  For use with
                    matrix files from FSL and SPM.

  -newgrid ddd  = Tells program to compute new dataset on a
                    new 3D grid, with spacing of 'ddd' mm.
                  * If this option is given, then the new
                    3D region of space covered by the grid
                    is computed by warping the 8 corners of
                    the input dataset, then laying down a
                    regular grid with spacing 'ddd'.
                  * If this option is NOT given, then the
                    new dataset is computed on the old
                    dataset's grid.

  -gridset ggg  = Tells program to compute new dataset on the
                    same grid as dataset 'ggg'.

  -zpad N       = Tells program to pad input dataset with 'N'
                    planes of zeros on all sides before doing
                    transformation.
---------------------
Miscellaneous Options:
---------------------
  -verb         = Print out some information along the way.
  -prefix ppp   = Sets the prefix of the output dataset.


++ Compile date = Mar 13 2009




AFNI program: 3dWarpDrive

Usage: 3dWarpDrive [options] dataset
Warp a dataset to match another one (the base).

This program is a generalization of 3dvolreg.  It tries to find
a spatial transformation that warps the input dataset to match
the base dataset (given by the -base option).  It will be slow.

--------------------------
Transform Defining Options: [exactly one of these must be used]
--------------------------
  -shift_only         =  3 parameters (shifts)
  -shift_rotate       =  6 parameters (shifts + angles)
  -shift_rotate_scale =  9 parameters (shifts + angles + scale factors)
  -affine_general     = 12 parameters (3 shifts + 3x3 matrix)
  -bilinear_general   = 39 parameters (3 + 3x3 + 3x3x3)

  N.B.: At this time, the image intensity is NOT 
         adjusted for the Jacobian of the transformation.
  N.B.: -bilinear_general is not yet implemented.

-------------
Other Options:
-------------
  -linear   }
  -cubic    } = Chooses spatial interpolation method.
  -NN       } =   [default = linear; inaccurate but fast]
  -quintic  }     [for accuracy, try '-cubic -final quintic']

  -base bbb   = Load dataset 'bbb' as the base to which the
                  input dataset will be matched.
                  [This is a mandatory option]

  -verb       = Print out lots of information along the way.
  -prefix ppp = Sets the prefix of the output dataset.
                If 'ppp' is 'NULL', no output dataset is written.
  -input ddd  = You can put the input dataset anywhere in the
                  command line option list by using the '-input'
                  option, instead of always putting it last.
  -summ sss   = Save summary of calculations into text file 'sss'.
                  (N.B.: If 'sss' is '-', summary goes to stdout.)

-----------------
Technical Options:
-----------------
  -maxite    m  = Allow up to 'm' iterations for convergence.
  -delta     d  = Distance, in voxel size, used to compute
                   image derivatives using finite differences.
                   [Default=1.0]
  -weight  wset = Set the weighting applied to each voxel
                   proportional to the brick specified here.
                   [Default=computed by program from base]
  -thresh    t  = Set the convergence parameter to be RMS 't' voxels
                   movement between iterations.  [Default=0.03]
  -twopass      = Do the parameter estimation in two passes,
                   coarse-but-fast first, then fine-but-slow second
                   (much like the same option in program 3dvolreg).
                   This is useful if large-ish warping is needed to
                   align the volumes.
  -final 'mode' = Set the final warp to be interpolated using 'mode'
                   instead of the spatial interpolation method used
                   to find the warp parameters.
  -parfix n v   = Fix the n'th parameter of the warp model to
                   the value 'v'.  More than one -parfix option
                   can be used, to fix multiple parameters.
  -1Dfile ename = Write out the warping parameters to the file
                   named 'ename'.  Each sub-brick of the input
                   dataset gets one line in this file.  Each
                   parameter in the model gets one column.
  -float        = Write output dataset in float format, even if
                   input dataset is short or byte.
  -coarserot    = Initialize shift+rotation parameters by a
                   brute force coarse search, as in the similar
                   3dvolreg option.

  -1Dmatrix_save ff = Save base-to-input transformation matrices
                      in file 'ff' (1 row per sub-brick in the input
                      dataset).  If 'ff' does NOT end in '.1D', then
                      the program will append '.aff12.1D' to 'ff' to
                      make the output filename.
          *N.B.: This matrix is the coordinate transformation from base
                 to input DICOM coordinates.  To get the inverse matrix
                 (input-to-base), use the cat_matvec program, as in
                   cat_matvec fred.aff12.1D -I

----------------------
AFFINE TRANSFORMATIONS:
----------------------
The options below control how the affine transformations
(-shift_rotate, -shift_rotate_scale, -affine_general)
are structured in terms of 3x3 matrices:

  -SDU or -SUD }= Set the order of the matrix multiplication
  -DSU or -DUS }= for the affine transformations:
  -USD or -UDS }=   S = triangular shear (params #10-12)
                    D = diagonal scaling matrix (params #7-9)
                    U = rotation matrix (params #4-6)
                  Default order is '-SDU', which means that
                  the U matrix is applied first, then the
                  D matrix, then the S matrix.

  -Supper      }= Set the S matrix to be upper or lower
  -Slower      }= triangular [Default=lower triangular]

  -ashift OR   }= Apply the shift parameters (#1-3) after OR
  -bshift      }= before the matrix transformation. [Default=after]

The matrices are specified in DICOM-ordered (x=-R+L,y=-A+P,z=-I+S)
coordinates as:

  [U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param #4)]
        (angles are in degrees)

  [D] = diag( param#7 , param#8 , param#9 )

        [    1        0     0 ]        [ 1 param#10 param#11 ]
  [S] = [ param#10    1     0 ]   OR   [ 0    1     param#12 ]
        [ param#11 param#12 1 ]        [ 0    0        1     ]

 For example, the default (-SDU/-ashift/-Slower) has the warp
 specified as [x]_warped = [S] [D] [U] [x]_in + [shift].
 The shift vector comprises parameters #1, #2, and #3.

 The goal of the program is to find the warp parameters such that
   I([x]_warped) = s * J([x]_in)
 as closely as possible in a weighted least squares sense, where
 's' is a scaling factor (an extra, invisible, parameter), J(x)
 is the base image, I(x) is the input image, and the weight image
 is a blurred copy of J(x).

 Using '-parfix', you can specify that some of these parameters
 are fixed.  For example, '-shift_rotate_scale' is equivalent to
 '-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
 Don't attempt to use the '-parfix' option unless you understand
 this example!
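
 For example (dataset names here are hypothetical), a 9-parameter
 alignment with the z scale factor (parameter #9) fixed to 1 could
 be run as:

   3dWarpDrive -shift_rotate_scale -parfix 9 1.0 \
               -base anat1+orig -input anat2+orig -prefix anat2_al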

-------------------------
  RWCox - November 2004
-------------------------

++ Compile date = Mar 13 2009




AFNI program: 3dWavelets
++ 3dWavelets: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
Program to perform wavelet analysis of an FMRI 3d+time dataset.        
                                                                       
Usage:                                                                 
3dWavelets                                                             
-type wname          wname = name of wavelet to use for the analysis   
                     At present, there are only two choices for wname: 
                        Haar  -->  Haar wavelets                       
                        Daub  -->  Daubechies wavelets                 
-input fname         fname = filename of 3d+time input dataset         
[-input1D dname]     dname = filename of single (fMRI) .1D time series 
[-mask mname]        mname = filename of 3d mask dataset               
[-nfirst fnum]       fnum = number of first dataset image to use in    
                       the wavelet analysis. (default = 0)             
[-nlast  lnum]       lnum = number of last dataset image to use in     
                       the wavelet analysis. (default = last)          
[-fdisp fval]        Write (to screen) results for those voxels        
                       whose F-statistic is >= fval                    
                                                                       
Filter options:                                                        
                                                                       
[-filt_stop band mintr maxtr] Specify wavelet coefs. to set to zero    
[-filt_base band mintr maxtr] Specify wavelet coefs. for baseline model
[-filt_sgnl band mintr maxtr] Specify wavelet coefs. for signal model  
     where  band  = frequency band                                     
            mintr = min. value for time window (in TR)                 
            maxtr = max. value for time window (in TR)                 
                                                                       
Output options:                                                        
                                                                       
[-datum DTYPE]      Coerce the output data to be stored as type DTYPE, 
                       which may be short or float. (default = short)  
                                                                       
[-coefts cprefix]   cprefix = prefix of 3d+time output dataset which   
                       will contain the forward wavelet transform      
                       coefficients                                    
                                                                       
[-fitts  fprefix]   fprefix = prefix of 3d+time output dataset which   
                       will contain the full model time series fit     
                       to the input data                               
                                                                       
[-sgnlts sprefix]   sprefix = prefix of 3d+time output dataset which   
                       will contain the signal model time series fit   
                       to the input data                               
                                                                       
[-errts  eprefix]   eprefix = prefix of 3d+time output dataset which   
                       will contain the residual error time series     
                       from the full model fit to the input data       
                                                                       
The following options control the contents of the bucket dataset:      
                                                                       
[-fout]            Flag to output the F-statistics                     
[-rout]            Flag to output the R^2 statistics                   
[-cout]            Flag to output the full model wavelet coefficients  
[-vout]            Flag to output the sample variance (MSE) map        
                                                                       
[-stat_first]      Flag to specify that the full model statistics will 
                     appear prior to the wavelet coefficients in the   
                     bucket dataset output                             
                                                                       
[-bucket bprefix]  bprefix = prefix of AFNI 'bucket' dataset containing
                     parameters of interest, such as the F-statistic   
                     for significance of the wavelet signal model.     
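
Example of a minimal run (input and prefix names are hypothetical):
perform a Haar wavelet analysis and save the F-statistics in a
bucket dataset:

  3dWavelets -type Haar -input fred+orig -fout -bucket fred_wav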

++ Compile date = Mar 13 2009




AFNI program: 3dWilcoxon
++ 3dWilcoxon: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs the nonparametric Wilcoxon signed-rank test 
for paired comparisons of two samples. 

Usage: 
3dWilcoxon                                                          
-dset 1 filename               data set for X observations          
 . . .                           . . .                              
-dset 1 filename               data set for X observations          
-dset 2 filename               data set for Y observations          
 . . .                           . . .                              
-dset 2 filename               data set for Y observations          
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                estimated population delta and       
                                 Wilcoxon signed-rank statistics are
                                 written to file prefixname         


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          
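
Example (a sketch with hypothetical file names): compare two
conditions across three subjects:

  3dWilcoxon -dset 1 'condA_s1+orig[0]' \
             -dset 1 'condA_s2+orig[0]' \
             -dset 1 'condA_s3+orig[0]' \
             -dset 2 'condB_s1+orig[0]' \
             -dset 2 'condB_s2+orig[0]' \
             -dset 2 'condB_s3+orig[0]' \
             -out wilcox_delta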

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dWinsor
Usage: 3dWinsor [options] dataset
Apply a 3D 'Winsorizing' filter to a short-valued dataset.

Options:
 -irad rr   = include all points within 'distance'
                rr in the operation, where distance
                is defined as sqrt(i*i+j*j+k*k), and
                (i,j,k) are voxel index offsets
                [default rr=1.5]

 -cbot bb   = set bottom clip index to bb
                [default = 20% of the number of points]
 -ctop tt   = set top clip index to tt
                [default = 80% of the number of points]

 -nrep nn   = repeat filter nn times [default nn=1]
                if nn < 0, means to repeat filter until
                less than abs(n) voxels change

 -keepzero  = don't filter voxels that are zero
 -clip xx   = set voxels at or below 'xx' to zero

 -prefix pp = use 'pp' as the prefix for the output
                dataset [default pp='winsor']

 -mask mmm  = use 'mmm' as a mask dataset - voxels NOT
                in the mask won't be filtered
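
Example (file names hypothetical): three passes of a slightly
larger filter, leaving zero-valued voxels untouched:

  3dWinsor -irad 2.5 -nrep 3 -keepzero -prefix anat_wins anat+orig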

++ Compile date = Mar 13 2009




AFNI program: 3dZcat
Usage: 3dZcat [options] dataset dataset ...
Concatenates datasets in the slice (z) direction.  Each input
dataset must have the same number of voxels in each slice, and
must have the same number of sub-bricks.

Options:
  -prefix pname = Use 'pname' for the output dataset prefix name.
                    [default='zcat']
  -datum type   = Coerce the output data to be stored as the given
                    type, which may be byte, short, or float.
  -fscale       = Force scaling of the output to the maximum integer
                    range.  This only has effect if the output datum
                    is byte or short (either forced or defaulted).
                    This option is sometimes necessary to eliminate
                    unpleasant truncation artifacts.
  -nscale       = Don't do any scaling on output to byte or short datasets.
                    This may be especially useful when operating on mask
                    datasets whose output values are only 0's and 1's.
  -verb         = Print out some verbosity as the program proceeds.
  -frugal       = Be 'frugal' in the use of memory, at the cost of I/O time.
                    Only needed if the program runs out of memory.

Command line arguments after the above are taken as input datasets.

Notes:
* You can use the '3dinfo' program to see how many slices a
    dataset comprises.
* There must be at least two datasets input (otherwise, the
    program doesn't make much sense, does it?).
* Each input dataset must have the same number of voxels in each
    slice, and must have the same number of sub-bricks.
* This program does not deal with complex-valued datasets.
* See the output of '3dZcutup -help' for a C shell script that
    can be used to take a dataset apart into single slice datasets,
    analyze them separately, and then assemble the results into
    new 3D datasets.
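
Example (file names hypothetical): glue three slabs back into a
single dataset:

  3dZcat -verb -prefix whole_head slab1+orig slab2+orig slab3+orig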

++ Compile date = Mar 13 2009




AFNI program: 3dZcutup
Usage: 3dZcutup [options] dataset
Cuts slices off a dataset in its z-direction, and writes a new
dataset.  The z-direction and number of slices in a dataset
can be determined using the 3dinfo program.
Options:
 -keep b t   = Keep slices numbered 'b' through 't', inclusive.
                 This is a mandatory option.  If you want to
                 create a single-slice dataset, this is allowed,
                 but AFNI may not display such datasets properly.
                 A single slice dataset would have b=t.  Slice
                 numbers start at 0.
 -prefix ppp = Write result into dataset with prefix 'ppp'
                 [default = 'zcutup']
Notes:
 * You can use a sub-brick selector on the input dataset.
 * 3dZcutup won't overwrite an existing dataset (I hope).
 * This program is adapted from 3dZeropad, which does the
     same thing, but along all 3 axes.
 * You can glue datasets back together in the z-direction
     using program 3dZcat.  A sample C shell script that
      uses these programs to carry out an analysis of a large
     dataset is:

  #!/bin/csh
  # Cut 3D+time dataset epi07+orig into individual slices

  foreach sl ( `count -dig 2 0 20` )
    3dZcutup -prefix zcut${sl} -keep $sl $sl epi07+orig

    # Analyze this slice with 3dDeconvolve separately

    3dDeconvolve -input zcut${sl}+orig.HEAD            \
                 -num_stimts 3                         \
                 -stim_file 1 ann_response_07.1D       \
                 -stim_file 2 antiann_response_07.1D   \
                 -stim_file 3 righthand_response_07.1D \
                 -stim_label 1 annulus                 \
                 -stim_label 2 antiann                 \
                 -stim_label 3 motor                   \
                 -stim_minlag 1 0  -stim_maxlag 1 0    \
                 -stim_minlag 2 0  -stim_maxlag 2 0    \
                 -stim_minlag 3 0  -stim_maxlag 3 0    \
                 -fitts zcut${sl}_fitts                \
                 -fout -bucket zcut${sl}_stats
  end

  # Assemble slicewise outputs into final datasets

  time 3dZcat -verb -prefix zc07a_fitts zcut??_fitts+orig.HEAD
  time 3dZcat -verb -prefix zc07a_stats zcut??_stats+orig.HEAD

  # Remove individual slice datasets

  /bin/rm -f zcut*

++ Compile date = Mar 13 2009




AFNI program: 3dZeropad
Usage: 3dZeropad [options] dataset
Adds planes of zeros to a dataset (i.e., pads it out).

Options:
  -I n = adds 'n' planes of zero at the Inferior edge
  -S n = adds 'n' planes of zero at the Superior edge
  -A n = adds 'n' planes of zero at the Anterior edge
  -P n = adds 'n' planes of zero at the Posterior edge
  -L n = adds 'n' planes of zero at the Left edge
  -R n = adds 'n' planes of zero at the Right edge
  -z n = adds 'n' planes of zeros on EACH of the
          dataset z-axis (slice-direction) faces

 -RL a = These options specify that planes should be added/cut
 -AP b = symmetrically to make the resulting volume have
 -IS c = 'a', 'b', and 'c' slices in the respective directions.

 -mm   = pad counts 'n' are in mm instead of slices:
         * each 'n' is an integer
         * at least 'n' mm of slices will be added/removed:
            n =  3 and slice thickness = 2.5 mm ==> 2 slices added
            n = -6 and slice thickness = 2.5 mm ==> 3 slices removed

 -master mset = match the volume described in dataset 'mset':
                * mset must have the same orientation and grid
                   spacing as dataset to be padded
                * the goal of -master is to make the output dataset
                   from 3dZeropad match the spatial 'extents' of
                   mset (cf. 3dinfo output) as much as possible,
                   by adding/subtracting slices as needed.
                * you can't use -I,-S,..., or -mm with -master

 -prefix ppp = write result into dataset with prefix 'ppp'
                 [default = 'zeropad']
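
Example (file names hypothetical): add 5 planes of zeros to both
the inferior and superior faces:

  3dZeropad -I 5 -S 5 -prefix anat_pad anat+orig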

Nota Bene:
 * You can use negative values of n to cut planes off the edges
     of a dataset.  At least one plane must be added/removed
     or the program won't do anything.
 * Anat parent and Talairach markers are NOT preserved in the
     new dataset.
 * If the old dataset has z-slice-dependent time offsets, and
     if new (zero filled) z-planes are added, the time offsets
     of the new slices will be set to zero.
 * You can use program '3dinfo' to find out how many planes
     a dataset has in each direction.
 * Program works for byte-, short-, float-, and complex-valued
     datasets.
 * You can use a sub-brick selector on the input dataset.
 * 3dZeropad won't overwrite an existing dataset (I hope).

 Author: RWCox - July 2000

++ Compile date = Mar 13 2009




AFNI program: 3dZregrid
Usage: 3dZregrid [option] dataset
Alters the input dataset's slice thickness and/or number.

OPTIONS:
 -dz D     = sets slice thickness to D mm
 -nz N     = sets slice count to N
 -zsize Z  = sets thickness of dataset (center-to-center of
              first and last slices) to Z mm
 -prefix P = write result in dataset with prefix P
 -verb     = write progress reports to stderr

At least one of '-dz', '-nz', or '-zsize' must be given.
On the other hand, using all 3 is over-specification.
The following combinations make sense:
 -dz only                   ==> N stays fixed from input dataset
                                 and then is like setting Z = N*D
 -dz and -nz together       ==> like setting Z = N*D
 -dz and -zsize together    ==> like setting N = Z/D
 -nz only                   ==> D stays fixed from input dataset
                                 and then is like setting Z = N*D
 -zsize only                ==> D stays fixed from input dataset
                                 and then is like setting N = Z/D
 -nz and -zsize together    ==> like setting D = Z/N

NOTES:
 * If the input is a 3D+time dataset with slice-dependent time
    offsets, the output will have its time offsets cleared.
    It probably makes sense to do 3dTshift BEFORE using this
    program in such a case.
 * The output of this program is centered around the same
    location as the input dataset.  Slices outside the
    original volume (e.g., when Z is increased) will be
    zero.  This is NOT the same as using 3dZeropad, which
    only adds zeros, and does not interpolate to a new grid.
 * Linear interpolation is used between slices.  However,
    new slice positions outside the old volume but within
    0.5 old slice thicknesses will get a copy of the last slice.
    New slices outside this buffer zone will be all zeros.

EXAMPLE:
 You have two 3D anatomical datasets from the same subject that
 need to be registered.  Unfortunately, the first one has slice
 thickness 1.2 mm and the second 1.3 mm.  Assuming they have
 the same number of slices, then do something like
  3dZregrid -dz 1.2 -prefix ElvisZZ Elvis2+orig
  3dvolreg -base Elvis1+orig -prefix Elvis2reg ElvisZZ+orig

++ Compile date = Mar 13 2009




AFNI program: 3danisosmooth
Usage: 3danisosmooth [options] dataset
Smooths a dataset using an anisotropic smoothing technique.

The output dataset is preferentially smoothed to preserve edges.

Options :
  -prefix pname = Use 'pname' for output dataset prefix name.
  -iters nnn = compute nnn iterations (default=10)
  -2D = smooth a slice at a time (default)
  -3D = smooth through slices. Cannot be combined with the -2D option
  -mask dset = use dset as mask to include/exclude voxels
  -automask = automatically compute mask for dataset
    Cannot be combined with -mask
  -viewer = show central axial slice image every iteration.
    Starts the aiv program internally.
  -nosmooth = do not do intermediate smoothing of gradients
  -sigma1 n.nnn = assign Gaussian smoothing sigma before
    gradient computation for calculation of structure tensor.
    Default = 0.5
  -sigma2 n.nnn = assign Gaussian smoothing sigma after
    gradient matrix computation for calculation of structure tensor.
    Default = 1.0
  -deltat n.nnn = assign pseudotime step. Default = 0.25
  -savetempdata = save temporary datasets each iteration.
    Dataset prefixes are Gradient, Eigens, phi, Dtensor.
    Ematrix, Flux and Gmatrix are also stored for the first sub-brick.
    Each is overwritten each iteration
  -phiding = use Ding method for computing phi (default)
  -phiexp = use exponential method for computing phi
  -noneg = set negative voxels to 0
  -edgefraction n.nnn = adjust the fraction of the anisotropic
    component to be added to the original image. Can vary between
    0 and 1. Default = 0.5
  -datum type = Coerce the output data to be stored as the given type
    which may be byte, short or float. [default=float]
  -matchorig = match datum type and clip min and max to the input data
  -help = print this help screen
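
Example (file names hypothetical): smooth through slices within an
automask, using 20 iterations:

  3danisosmooth -iters 20 -3D -automask -prefix anat_smooth anat+orig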

References:
  Z Ding, JC Gore, AW Anderson, Reduction of Noise in Diffusion
   Tensor Images Using Anisotropic Smoothing, Mag. Res. Med.,
   53:485-490, 2005.
  J Weickert, H Scharr, A Scheme for Coherence-Enhancing
   Diffusion Filtering with Optimized Rotation Invariance,
   CVGPR Group Technical Report, Department of Mathematics
   and Computer Science, University of Mannheim, Germany,
   TR 4/2000.
  J Weickert, H Scharr, A Scheme for Coherence-Enhancing Diffusion
   Filtering with Optimized Rotation Invariance, J. Visual
   Communication and Image Representation, Special Issue on
   Partial Differential Equations in Image Processing, Computer
   Vision, and Computer Graphics, pages 103-118, 2002.
  G Gerig, O Kubler, R Kikinis, F Jolesz, Nonlinear Anisotropic
   Filtering of MRI Data, IEEE Trans. Med. Imaging, 11(2),
   221-232, 1992.


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3daxialize
Usage: 3daxialize [options] dataset
Purpose: Read in a dataset and write it out as a new dataset
         with the data brick oriented as axial slices.
         The input dataset must have a .BRIK file.
         One application is to create a dataset that can
         be used with the AFNI volume rendering plugin.

Options:
 -prefix ppp  = Use 'ppp' as the prefix for the new dataset.
               [default = 'axialize']
 -verb        = Print out a progress report.

The following options determine the order/orientation
in which the slices will be written to the dataset:
 -sagittal    = Do sagittal slice order [-orient ASL]
 -coronal     = Do coronal slice order  [-orient RSA]
 -axial       = Do axial slice order    [-orient RAI]
                 This is the default AFNI axial order, and
                 is the one currently required by the
                 volume rendering plugin; this is also
                 the default orientation output by this
                 program (hence the program's name).

 -orient code = Orientation code for output.
                The code must be 3 letters, one each from the
                pairs {R,L} {A,P} {I,S}.  The first letter gives
                the orientation of the x-axis, the second the
                orientation of the y-axis, the third the z-axis:
                 R = Right-to-left         L = Left-to-right
                 A = Anterior-to-posterior P = Posterior-to-anterior
                 I = Inferior-to-superior  S = Superior-to-inferior
                If you give an illegal code (e.g., 'LPR'), then
                the program will print a message and stop.
          N.B.: 'Neurological order' is -orient LPI
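
Example (file names hypothetical): write a copy of a dataset in
neurological slice order:

  3daxialize -orient LPI -prefix anat_LPI anat+orig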

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dbuc2fim
This program converts bucket sub-bricks to a fim (fico, fitt, fift, ...)
type dataset.

Usage:                                                              

3dbuc2fim  -prefix pname  d1+orig[index]                              
     This produces a fim dataset.                                   

 -or-                                                               

3dbuc2fim  -prefix pname  d1+orig[index1]  d2+orig[index2]            
     This produces a fico (fitt, fift, ...) dataset,                  
     depending on the statistic type of the 2nd sub-brick,
     with   d1+orig[index1] -> intensity sub-brick of pname           
            d2+orig[index2] -> threshold sub-brick of pname         

 -or-                                                               

3dbuc2fim  -prefix pname  d1+orig[index1,index2]                      
     This produces a fico (fitt, fift, ...) dataset,                  
     depending on the statistic type of the 2nd sub-brick,
     with   d1+orig[index1] -> intensity sub-brick of pname           
            d1+orig[index2] -> threshold sub-brick of pname         

where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='buc2fim']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -verb         = Print out some verbose output as the program
                       proceeds 

Command line arguments after the above are taken as input datasets.  
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
Sub-brick indexes start at 0. 

N.B.: The sub-bricks are output in the order specified, which may
 not be the order in the original datasets.  For example, using
           fred+orig[5,3]
 will cause the sub-brick #5 in fred+orig to be output as the intensity
 sub-brick, and sub-brick #3 to be output as the threshold sub-brick 
 in the new dataset.

N.B.: The '$', '(', ')', '[', and ']' characters are special to
 the shell, so you will have to escape them.  This is most easily
 done by putting the entire dataset plus selection list inside
 single quotes, as in 'fred+orig[5,9]'.


++ Compile date = Mar 13 2009




AFNI program: 3dbucket
Concatenate sub-bricks from input datasets into one big
'bucket' dataset.
Usage: 3dbucket options
where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='buck']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -glueto fname = Append bricks to the end of the 'fname' dataset.
                       This command is an alternative to the -prefix 
                       and -session commands.                        
     -dry          = Execute a 'dry run'; that is, only print out
                       what would be done.  This is useful when
                       combining sub-bricks from multiple inputs.
     -verb         = Print out some verbose output as the program
                       proceeds (-dry implies -verb).
     -fbuc         = Create a functional bucket.
     -abuc         = Create an anatomical bucket.  If neither of
                       these options is given, the output type is
                       determined from the first input type.

Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
You can also add a sub-brick selection list after the end of the
dataset name.  This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output).  A sub-brick selection list looks like
one of the following forms:
  fred+orig[5]                     ==> use only sub-brick #5
  fred+orig[5,9,17]                ==> use #5, #9, and #17
  fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
  fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  fred+orig[0..$(3)]

N.B.: The sub-bricks are output in the order specified, which may
 not be the order in the original datasets.  For example, using
  fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be output into the
 new dataset in an interleaved fashion.  Using
  fred+orig[$..0]
 will reverse the order of the sub-bricks in the output.

N.B.: Bucket datasets have multiple sub-bricks, but do NOT have
 a time dimension.  You can input sub-bricks from a 3D+time dataset
 into a bucket dataset.  You can use the '3dinfo' program to see
 how many sub-bricks a 3D+time or a bucket dataset contains.

N.B.: The '$', '(', ')', '[', and ']' characters are special to
 the shell, so you will have to escape them.  This is most easily
 done by putting the entire dataset plus selection list inside
 single quotes, as in 'fred+orig[5..7,9]'.

N.B.: In non-bucket functional datasets (like the 'fico' datasets
 output by FIM, or the 'fitt' datasets output by 3dttest), sub-brick
 [0] is the 'intensity' and sub-brick [1] is the statistical parameter
 used as a threshold.  Thus, to create a bucket dataset using the
 intensity from dataset A and the threshold from dataset B, and
 calling the output dataset C, you would type
    3dbucket -prefix C -fbuc 'A+orig[0]' -fbuc 'B+orig[1]'

WARNING: using this program, it is possible to create a dataset that
         has different basic datum types for different sub-bricks
         (e.g., shorts for brick 0, floats for brick 1).
         Do NOT do this!  Very few AFNI programs will work correctly
         with such datasets!

++ Compile date = Mar 13 2009




AFNI program: 3dcalc
Program: 3dcalc                                                         
Author:  RW Cox et al                                                   
                                                                        
3dcalc - AFNI's calculator program                                      
                                                                        
     This program does voxel-by-voxel arithmetic on 3D datasets         
     (only limited inter-voxel computations are possible).              
                                                                        
     The program assumes that the voxel-by-voxel computations are being 
     performed on datasets that occupy the same space and have the same 
     orientations.                                                      
                                                                        
------------------------------------------------------------------------
Usage:                                                                  
-----                                                                   
       3dcalc -a dsetA [-b dsetB...] \                                 
              -expr EXPRESSION       \                                 
              [options]                                                 
                                                                        
Examples:                                                               
--------                                                                
1. Average datasets together, on a voxel-by-voxel basis:                
                                                                        
     3dcalc -a fred+tlrc -b ethel+tlrc -c lucy+tlrc \                  
            -expr '(a+b+c)/3' -prefix subjects_mean                     
                                                                        
   Averaging datasets can also be done by programs 3dMean and 3dmerge.  
   Use 3dTstat to average across sub-bricks in a single dataset.
                                                                        
2. Perform arithmetic calculations between the sub-bricks of a single   
   dataset by noting the sub-brick number on the command line:          
                                                                        
     3dcalc -a 'func+orig[2]' -b 'func+orig[4]' -expr 'sqrt(a*b)'       
                                                                        
3. Create a simple mask that consists only of values in sub-brick #0    
   that are greater than 3.14159:                                       
                                                                        
     3dcalc -a 'func+orig[0]' -expr 'ispositive(a-3.14159)' \          
            -prefix mask                                                
                                                                        
4. Normalize subjects' time series datasets to percent change values in 
   preparation for group analysis:                                      
                                                                        
    Voxel-by-voxel, the example below divides each intensity value in
    the time series (epi_run1+orig) by the voxel's mean value
    (mean+orig) to get a percent change value. The 'ispositive'
    expression zeros out voxels whose mean value is 167 or less
    (i.e., they are set to zero in the output 'percent_chng+orig'),
    since such voxels are most likely background/noncortical voxels.
                                                                        
     3dcalc -a epi_run1+orig -b mean+orig     \                        
            -expr '100 * a/b * ispositive(b-167)' -prefix percent_chng  
                                                                        
5. Create a compound mask from a statistical dataset, where 3 stimuli   
   show activation.                                                     
      NOTE: 'step' and 'ispositive' are identical expressions that can  
            be used interchangeably:                                    
                                                                        
     3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \ 
            -expr 'step(a-4.2)*step(b-2.9)*step(c-3.1)'              \ 
            -prefix compound_mask                                       
                                                                        
   In this example, all 3 statistical criteria must be met at once for  
   a voxel to be selected (value of 1) in this mask.                    
                                                                        
6. Same as example #5, but this time create a mask of 8 different values
   showing all combinations of activations (i.e., not only where        
   everything is active, but also each stimulus individually, and all   
   combinations).  The output mask dataset labels voxel values as such: 
                                                                        
        0 = none active    1 = A only active    2 = B only active       
        3 = A and B only   4 = C only active    5 = A and C only        
        6 = B and C only   7 = all A, B, and C active                   
                                                                        
     3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \ 
            -expr 'step(a-4.2)+2*step(b-2.9)+4*step(c-3.1)'          \ 
            -prefix mask_8                                              
                                                                        
   In displaying such a binary-encoded mask in AFNI, you would probably 
   set the color display to have 8 discrete levels (the '#' menu).      
                                                                        
7. Create a region-of-interest mask comprised of a 3-dimensional sphere.
   Values within the ROI sphere will be labeled as '1' while values     
   outside the mask will be labeled as '0'. Statistical analyses can    
   then be done on the voxels within the ROI sphere.                    
                                                                        
   The example below puts a solid ball (sphere) of radius 3=sqrt(9)     
   about the point with coordinates (x,y,z)=(20,30,70):                 
                                                                        
     3dcalc -a anat+tlrc                                              \
            -expr 'step(9-(x-20)*(x-20)-(y-30)*(y-30)-(z-70)*(z-70))' \
            -prefix ball                                                
                                                                        
   The spatial meaning of (x,y,z) is discussed in the 'COORDINATES'     
   section of this help listing (far below).                            
                                                                        
8. Some datasets are 'short' (16 bit) integers with a scale factor
   attached, which allows them to be smaller than float datasets and
   to contain fractional values.
                                                                        
   Dataset 'a' is always used as a template for the output dataset. For 
   the examples below, assume that datasets d1+orig and d2+orig consist 
   of small integers.                                                   
                                                                        
   a) When dividing 'a' by 'b', the result should be scaled, so that a  
      value of 2.4 is not truncated to '2'. To avoid this truncation,   
      force scaling with the -fscale option:                            
                                                                        
        3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -fscale   
                                                                        
   b) If it is preferable that the result is of type 'float', then set  
      the output data type (datum) to float:                            
                                                                        
        3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot \        
                -datum float                                            
                                                                        
    c) Perhaps an integral division is desired, so that 9/4=2, not 2.25.
      Force the results not to be scaled (opposite of example 8a) using 
      the -nscale option:                                               
                                                                        
        3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -nscale   
                                                                        
9. Compare the left and right amygdala between the Talairach atlas
   and the CA_N27_ML atlas.  The result will be 1 if TT only, 2 if CA   
   only, and 3 where they overlap.                                      
                                                                        
     3dcalc -a 'TT_Daemon::amygdala' -b 'CA_N27_ML::amygdala' \        
            -expr 'step(a)+2*step(b)'  -prefix compare.maps             
                                                                        
   (see 'whereami -help' for more information on atlases)               
                                                                        
10. Convert a dataset from AFNI short format storage to NIfTI-1 floating
     point (perhaps for input to a non-AFNI program that requires this):
                                                                        
      3dcalc -a zork+orig -prefix zfloat.nii -datum float -expr 'a'     
                                                                        
    This operation could also be performed with program 3dAFNItoNIFTI.  
                                                                        
11. Compute the edge voxels of a mask dataset.  An edge voxel is one    
    that shares some face with a non-masked voxel.  This computation    
    assumes 'a' is a binary mask (particularly for 'amongst').          
                                                                        
      3dcalc -a mask+orig -prefix edge                     \           
             -b a+i -c a-i -d a+j -e a-j -f a+k -g a-k     \           
             -expr 'a*amongst(0,b,c,d,e,f,g)'                           
                                                                        
     Consider the similar erosion and dilation operations:
        erosion:  -expr 'a*(1-amongst(0,b,c,d,e,f,g))'                  
        dilation: -expr 'amongst(1,a,b,c,d,e,f,g)'                      
                                                                        
------------------------------------------------------------------------
ARGUMENTS for 3dcalc (must be included on command line):                
---------                                                               
                                                                        
 -a dname    = Read dataset 'dname' and call the voxel values 'a' in the
               expression (-expr) that is input below. Up to 26 dnames  
               (-a, -b, -c, ... -z) can be included in a single 3dcalc  
               calculation/expression.                                  
               ** If some letter name is used in the expression, but    
                  not present in one of the dataset options here, then  
                  that variable is set to 0.                            
               ** If the letter is followed by a number, then that      
                  number is used to select the sub-brick of the dataset 
                  which will be used in the calculations.               
                     E.g., '-b3 dname' specifies that the variable 'b'  
                     refers to sub-brick '3' of that dataset            
                     (indexes in AFNI start at 0).                      
               ** However, it is better to use the subscript '[]' method
                  to select sub-bricks of datasets, as in               
                     -b dname+orig'[3]'                                 
                  rather than the older notation                        
                     -b3 dname+orig                                     
                   The subscript notation is more flexible, as it can
                   be used to select a collection of sub-bricks.
                ** Another way to achieve the effect of '-b3' is described
                   below in the dataset 'INPUT' specification section.
                                                                        
 -expr       = Apply the expression - within quotes - to the input
               datasets (dnames), one voxel at a time, to produce the
               output dataset.
                                                                        
 NOTE: If you want to average or sum up a lot of datasets, programs     
       3dTstat and/or 3dMean and/or 3dmerge are better suited for these 
       purposes.  A common request is to increase the number of input   
       datasets beyond 26, but in almost all cases such users simply    
       want to do simple addition!                                      
                                                                        
 NOTE: If you want to include shell variables in the expression (or in  
       the dataset sub-brick selection), then you should use double     
       "quotes" and the '$' notation for the shell variables; this    
       example uses csh notation to set the shell variable 'z':         
                                                                        
         set z = 3.5                                                    
         3dcalc -a moose.nii -prefix goose.nii -expr "a*$z"           
                                                                        
       The shell will not expand variables inside single 'quotes',      
       and 3dcalc's parser will not understand the '$' character.       
                                                                        
 NOTE: You can use the ccalc program to play with the expression        
       evaluator, in order to get a feel for how it works and           
       what it accepts.                                                 
                                                                        
------------------------------------------------------------------------
 OPTIONS for 3dcalc:                                                    
 -------                                                                
                                                                        
  -verbose   = Makes the program print out various information as it    
               progresses.                                              
                                                                        
  -datum type= Coerce the output data to be stored as the given type,   
               which may be byte, short, or float.                      
               [default = datum of first input dataset]                 
  -float }                                                              
  -short }   = Alternative options to specify output data format.       
  -byte  }                                                              
                                                                        
  -fscale    = Force scaling of the output to the maximum integer       
               range. This only has effect if the output datum is byte  
               or short (either forced or defaulted). This option is    
               often necessary to eliminate unpleasant truncation       
               artifacts.                                               
                 [The default is to scale only if the computed values   
                  seem to need it -- are all <= 1.0 or there is at      
                  least one value beyond the integer upper limit.]      
                                                                        
                ** In earlier versions of 3dcalc, scaling (if used) was 
                   applied to all sub-bricks equally -- a common scale  
                   factor was used.  This would cause trouble if the    
                   values in different sub-bricks were in vastly        
                   different scales. In this version, each sub-brick    
                   gets its own scale factor. To override this behavior,
                   use the '-gscale' option.                            
                                                                        
  -gscale    = Same as '-fscale', but also forces each output sub-brick 
               to get the same scaling factor.  This may be desirable   
               for 3D+time datasets, for example.                       
            ** N.B.: -usetemp and -gscale are incompatible!!            
                                                                        
  -nscale    = Don't do any scaling on output to byte or short datasets.
               This may be especially useful when operating on mask
               datasets whose output values are only 0's and 1's.
                                                                        
  -prefix pname = Use 'pname' for the output dataset prefix name.       
                  [default='calc']                                      
                                                                        
  -session dir  = Use 'dir' for the output dataset session directory.   
                  [default='./'=current working directory]              
                  You can also include the output directory in the      
                  'pname' parameter to the -prefix option.              
                                                                        
  -usetemp      = With this option, a temporary file will be created to 
                  hold intermediate results.  This will make the program
                  run slower, but can be useful when creating huge      
                  datasets that won't all fit in memory at once.        
                * The program prints out the name of the temporary      
                  file; if 3dcalc crashes, you might have to delete     
                  this file manually.                                   
               ** N.B.: -usetemp and -gscale are incompatible!!         
                                                                        
  -dt tstep     = Use 'tstep' as the TR for "manufactured" 3D+time    
    *OR*          datasets.                                             
  -TR tstep     = If not given, defaults to 1 second.                   
                                                                        
  -taxis N      = If only 3D datasets are input (no 3D+time or .1D files),
    *OR*          then normally only a 3D dataset is calculated.  With  
  -taxis N:tstep: this option, you can force the creation of a time axis
                  of length 'N', optionally using time step 'tstep'.  In
                  such a case, you will probably want to use the pre-   
                  defined time variables 't' and/or 'k' in your         
                  expression, or each resulting sub-brick will be       
                  identical. For example:                               
                  '-taxis 121:0.1' will produce 121 points in time,     
                  spaced with TR 0.1.                                   
                                                                        
            N.B.: You can also specify the TR using the -dt option.     
            N.B.: You can specify 1D input datasets using the           
                  '1D:n@val,n@val' notation to get a similar effect.    
                  For example:                                          
                     -dt 0.1 -w '1D:121@0'                              
                  will have pretty much the same effect as              
                     -taxis 121:0.1
            N.B.: For both '-dt' and '-taxis', the 'tstep' value is in 
                  seconds.  You can suffix it with 'ms' to specify that
                  the value is in milliseconds instead; e.g., '-dt 2000ms'.
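
             For example, this sketch (the dataset and prefix names are
             hypothetical) manufactures a 121-point time series at each
             voxel of a 3D dataset, one sine-wave cycle per second:

               3dcalc -a anat+orig -taxis 121:0.1 \
                      -expr 'sin(2*PI*t)' -prefix sine_wave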
                                                                        
  -rgbfac A B C = For RGB input datasets, the 3 channels (r,g,b) are    
                  collapsed to one for the purposes of 3dcalc, using the
                  formula value = A*r + B*g + C*b                       
                                                                        
                  The default values are A=0.299 B=0.587 C=0.114, which 
                  gives the grayscale intensity.  To pick out the Green 
                  channel only, use '-rgbfac 0 1 0', for example.  Note 
                  that each channel in an RGB dataset is a byte in the  
                  range 0..255.  Thus, '-rgbfac 0.001173 0.002302 0.000447'
                  will compute the intensity rescaled to the range 0..1.0
                  (i.e., 0.001173=0.299/255, etc.)                      
                                                                        
  -cx2r METHOD  = For complex input datasets, the 2 channels must be    
                  converted to 1 real number for calculation.  The      
                  methods available are:  REAL  IMAG  ABS  PHASE        
                * The default method is ABS = sqrt(REAL^2+IMAG^2)       
                * PHASE = atan2(IMAG,REAL)                              
                * Multiple '-cx2r' options can be given:                
                    when a complex dataset is given on the command line,
                    the most recent previous method will govern.        
                * If a complex dataset is used in a differential        
                    subscript, then the most recent previous -cx2r      
                    method applies to the extraction; for example       
                      -cx2r REAL -a cx+orig -cx2r IMAG -b 'a[0,0,0,0]'  
                    means that variable 'a' refers to the real part     
                    of the input dataset and variable 'b' to the        
                    imaginary part of the input dataset.                
                * 3dcalc cannot be used to CREATE a complex dataset!    
                    [See program 3dTwotoComplex for that purpose.]      
                                                                        
  -sort         = Sort each output brick separately, before output:     
  -SORT           'sort' ==> increasing order, 'SORT' ==> decreasing.   
                  [This is useful only under unusual circumstances!]    
                  [Sorting is done in spatial indexes, not in time.]    
                  [Program 3dTsort will sort voxels along time axis]    
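
                  For example, this sketch (hypothetical dataset and
                  prefix names) writes sub-brick #0 with its voxel
                  values rearranged into increasing order across space:

                    3dcalc -a 'func+orig[0]' -expr 'a' -sort \
                           -prefix sorted_func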
                                                                        
------------------------------------------------------------------------
DATASET TYPES:                                                          
-------------                                                           
                                                                        
 The most common AFNI dataset types are 'byte', 'short', and 'float'.   
                                                                        
 A byte value is an 8-bit unsigned integer (0..255), a short value is a
 16-bit signed integer (-32768..32767), and a float value is a 32-bit
 real number.  A byte value has almost 3 decimals of accuracy, a short  
 has almost 5, and a float has approximately 7 (from a 23+1 bit         
 mantissa).                                                             
                                                                        
 Datasets can also have a scale factor attached to each sub-brick. The
 main use of this is allowing a short type dataset to take on non-integral
 values, while being half the size of a float dataset.
                                                                        
 As an example, consider a short dataset with a scale factor of 0.001.
 This could represent values between -32.768 and +32.767, at a resolution
 of 0.001.  One could represent the difference between 4.916 and 4.917,
 for instance, but not 4.9165. Each number has 15 bits of accuracy, plus
 a sign bit, which gives 4-5 significant decimal digits. If this is not
 enough precision, then it makes sense to use the larger type, float.
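
 As a sketch (the dataset and prefix names are hypothetical), a float
 dataset can be stored as scaled shorts, halving its disk size at the
 cost of some precision:

    3dcalc -a bigfloat+orig -expr 'a' -datum short -fscale \
           -prefix smaller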
                                                                        
------------------------------------------------------------------------
3D+TIME DATASETS:                                                       
----------------                                                        
                                                                        
 This version of 3dcalc can operate on 3D+time datasets.  Each input    
 dataset will be in one of these conditions:                            
                                                                        
    (A) Is a regular 3D (no time) dataset; or                           
    (B) Is a 3D+time dataset with a sub-brick index specified ('-b3'); or
    (C) Is a 3D+time dataset with no sub-brick index specified ('-b').  
                                                                        
 If there is at least one case (C) dataset, then the output dataset will
 also be 3D+time; otherwise it will be a 3D dataset with one sub-brick. 
 When producing a 3D+time dataset, datasets in case (A) or (B) will be  
 treated as if the particular brick being used has the same value at each
 point in time.                                                         
                                                                        
 Multi-brick 'bucket' datasets may also be used.  Note that if multi-brick
 (bucket or 3D+time) datasets are used, the lowest letter dataset will  
 serve as the template for the output; that is, '-b fred+tlrc' takes    
 precedence over '-c wilma+tlrc'.  (The program 3drefit can be used to  
 alter the .HEAD parameters of the output dataset, if desired.)         
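
 For example, in this sketch (hypothetical dataset names), 'a' is a
 case (C) 3D+time dataset and 'b' is a case (A) 3D mask, so the output
 is 3D+time, with the mask applied at every point in time:

    3dcalc -a epi_run1+orig -b mask+orig -expr 'a*step(b)' \
           -prefix epi_masked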
                                                                        
------------------------------------------------------------------------
INPUT DATASET NAMES
-------------------
 An input dataset is specified using one of these forms:
    'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
 You can also add a sub-brick selection list after the end of the
 dataset name.  This allows only a subset of the sub-bricks to be
 read in (by default, all of a dataset's sub-bricks are input).
 A sub-brick selection list looks like one of the following forms:
   fred+orig[5]                     ==> use only sub-brick #5
   fred+orig[5,9,17]                ==> use #5, #9, and #17
   fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
   fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
 Sub-brick indexes start at 0.  You can use the character '$'
 to indicate the last sub-brick in a dataset; for example, you
 can select every third sub-brick by using the selection list
   fred+orig[0..$(3)]

 N.B.: The sub-bricks are read in the order specified, which may
 not be the order in the original dataset.  For example, using
   fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be input into memory
 in an interleaved fashion.  Using
   fred+orig[$..0]
 will reverse the order of the sub-bricks.

 N.B.: You may also use the syntax <a..b> after the name of an input 
 dataset to restrict the range of values read in to the numerical
 values in a..b, inclusive.  For example,
    fred+orig[5..7]<100..200>
 creates a 3 sub-brick dataset in which values less than 100 or
 greater than 200 in the original are set to zero.
 If you use the <> sub-range selection without the [] sub-brick
 selection, it is the same as if you had put [0..$] in front of
 the sub-range selection.
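
 For example, this sketch (hypothetical dataset and prefix names) keeps
 only values between 100 and 200 across all sub-bricks, setting the
 rest to zero:
    3dcalc -a 'fred+orig<100..200>' -expr 'a' -prefix clipped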

 N.B.: Datasets using sub-brick/sub-range selectors are treated as:
  - 3D+time if the dataset is 3D+time and more than 1 brick is chosen
  - otherwise, as bucket datasets (-abuc or -fbuc)
    (in particular, fico, fitt, etc. datasets are converted to fbuc!)

 N.B.: The characters '$ ( ) [ ] < >'  are special to the shell,
 so you will have to escape them.  This is most easily done by
 putting the entire dataset plus selection list inside forward
 single quotes, as in 'fred+orig[5..7,9]', or double quotes "x".
                                                                        
** WARNING: you cannot combine sub-brick selection of the form          
               -b3 bambam+orig       (the old method)                   
            with sub-brick selection of the form                        
               -b  'bambam+orig[3]'  (the new method)                   
            If you try, the Doom of Mandos will fall upon you!          
                                                                        
------------------------------------------------------------------------
1D TIME SERIES:                                                         
--------------                                                          
                                                                        
 You can also input a '*.1D' time series file in place of a dataset.    
 In this case, the value at each spatial voxel at time index n will be  
 the same, and will be the n-th value from the time series file.        
 At least one true dataset must be input.  If all the input datasets    
 are 3D (single sub-brick) or are single sub-bricks from multi-brick    
 datasets, then the output will be a 'manufactured' 3D+time dataset.    
                                                                        
 For example, suppose that 'a3D+orig' is a 3D dataset:                  
                                                                        
   3dcalc -a a3D+orig -b b.1D -expr "a*b"                             
                                                                        
  The output dataset will be 3D+time, with the value at (x,y,z,t) being
  computed by a3D(x,y,z)*b(t).  The TR for this dataset will be set
  to 'tstep' seconds (1 second unless changed with -dt) -- this could
  be altered later with program 3drefit.  Another method to set up the
  correct timing is to input an otherwise unused 3D+time dataset --
  3dcalc will then copy that dataset's time information; simply do not
  use that dataset's letter in -expr.
                                                                        
  If the *.1D file has multiple columns, only the first one read will
  be used in this program.  You can select a different column by
  using a sub-vector selection of the form 'b.1D[3]', which will
  choose the 4th column (since counting starts at 0).
                                                                        
 '{...}' row selectors can also be used - see the output of '1dcat -help'
  for more details on these.  Note that if multiple time series or 3D+time
 or 3D bucket datasets are input, they must all have the same number of 
 points along the 'time' dimension.                                     
                                                                        
 N.B.: To perform calculations ONLY on .1D files, use program 1deval.   
       3dcalc takes .1D files for use in combination with 3D datasets!  
                                                                        
  N.B.: If you auto-transpose a .1D file on the command line (by ending 
       the filename with \'), then 3dcalc will NOT treat it as the     
       special case described above, but instead will treat it as       
       a normal dataset, where each row in the transposed input is a    
       'voxel' time series.  This would allow you to do differential    
       subscripts on 1D time series, which program 1deval does not      
       implement.  For example:                                         
                                                                        
        3dcalc -a '1D: 3 4 5 6'\' -b a+l -expr 'sqrt(a+b)' -prefix -   
                                                                        
       This technique allows expression evaluation on multi-column      
       .1D files, which 1deval also does not implement.  For example:   
                                                                        
        3dcalc -a '1D: 3 4 5 | 1 2 3'\' -expr 'cbrt(a)' -prefix -      
                                                                        
------------------------------------------------------------------------
'1D:' INPUT:                                                            
-----------                                                             
                                                                        
 You can input a 1D time series 'dataset' directly on the command line, 
  without an external file.  The 'filename' for such input takes the
 general format                                                         
                                                                        
   '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'                               
                                                                        
 where each 'n_i' is an integer and each 'val_i' is a float.  For       
 example                                                                
                                                                        
    -a '1D:5@0,10@1,5@0,10@1,5@0'                                       
                                                                        
  specifies that variable 'a' be assigned to a 1D time series of 35
  values, alternating in blocks between value 0 and value 1.
                                                                        
------------------------------------------------------------------------
'I:*.1D' and 'J:*.1D' and 'K:*.1D' INPUT:                               
----------------------------------------                                
                                                                        
 You can input a 1D time series 'dataset' to be defined as spatially    
 dependent instead of time dependent using a syntax like:               
                                                                        
   -c I:fred.1D                                                         
                                                                        
 This indicates that the n-th value from file fred.1D is to be associated
  with the spatial voxel index i=n (respectively j=n and k=n for 'J:' and 
 K: input dataset names).  This technique can be useful if you want to  
 scale each slice by a fixed constant; for example:                     
                                                                        
   -a dset+orig -b K:slicefactor.1D -expr 'a*b'                         
                                                                        
 In this example, the '-b' value only varies in the k-index spatial     
 direction.                                                             
                                                                        
------------------------------------------------------------------------
COORDINATES and PREDEFINED VALUES:                                      
---------------------------------                                       
                                                                        
 If you don't use '-x', '-y', or '-z' for a dataset, then the voxel     
 spatial coordinates will be loaded into those variables.  For example, 
 the expression 'a*step(x*x+y*y+z*z-100)' will zero out all the voxels  
 inside a 10 mm radius of the origin x=y=z=0.                           
                                                                        
 Similarly, the '-t' value, if not otherwise used by a dataset or *.1D  
 input, will be loaded with the voxel time coordinate, as determined    
 from the header file created for the OUTPUT.  Please note that the units
 of this are variable; they might be in milliseconds, seconds, or Hertz.
 In addition, slices of the dataset might be offset in time from one    
 another, and this is allowed for in the computation of 't'.  Use program
 3dinfo to find out the structure of your datasets, if you are not sure.
 If no input datasets are 3D+time, then the effective value of TR is    
 tstep in the output dataset, with t=0 at the first sub-brick.          
                                                                        
 Similarly, the '-i', '-j', and '-k' values, if not otherwise used,     
 will be loaded with the voxel spatial index coordinates.  The '-l'     
 (letter 'ell') value will be loaded with the temporal index coordinate.
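
  For example, this sketch (hypothetical dataset and prefix names) keeps
  only the voxels in the slice with spatial index k=10, zeroing
  everything else:

     3dcalc -a dset+orig -expr 'a*equals(k,10)' -prefix slice10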
                                                                        
 Otherwise undefined letters will be set to zero.  In the future,       
 new default values for other letters may be added.                     
                                                                        
 NOTE WELL: By default, the coordinate order of (x,y,z) is the order in 
 *********  which the data array is stored on disk; this order is output
             by 3dinfo.  The options below can change this order:
                                                                        
 -dicom }= Sets the coordinates to appear in DICOM standard (RAI) order,
 -RAI   }= (the AFNI standard), so that -x=Right, -y=Anterior , -z=Inferior,
                                        +x=Left , +y=Posterior, +z=Superior.
                                                                        
 -SPM   }= Sets the coordinates to appear in SPM (LPI) order,           
 -LPI   }=                      so that -x=Left , -y=Posterior, -z=Inferior,
                                        +x=Right, +y=Anterior , +z=Superior.
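
  For example, this sketch (hypothetical dataset and prefix names)
  builds a left-hemisphere mask by using LPI coordinates, in which
  x<0 corresponds to Left:

     3dcalc -LPI -a anat+tlrc -expr 'step(-x)' -prefix left_mask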
                                                                        
------------------------------------------------------------------------
DIFFERENTIAL SUBSCRIPTS [22 Nov 1999]:                                  
-----------------------                                                 
                                                                        
 Normal calculations with 3dcalc are strictly on a per-voxel basis:
 there is no 'cross-talk' between spatial or temporal locations.
 The differential subscript feature allows you to specify variables
 that refer to different locations, relative to the base voxel.
 For example,
   -a fred+orig -b 'a[1,0,0,0]' -c 'a[0,-1,0,0]' -d 'a[0,0,2,0]'
  means: symbol 'a' refers to a voxel in dataset fred+orig,
         symbol 'b' refers to the following voxel in the x-direction,
         symbol 'c' refers to the previous voxel in the y-direction,
         symbol 'd' refers to the 2nd following voxel in the z-direction.

 To use this feature, you must define the base dataset (e.g., 'a')
 first.  Then the differentially subscripted symbols are defined
 using the base dataset symbol followed by 4 integer subscripts,
 which are the shifts in the x-, y-, z-, and t- (or sub-brick index)
 directions. For example,

   -a fred+orig -b 'a[0,0,0,1]' -c 'a[0,0,0,-1]' -expr 'median(a,b,c)'

 will produce a temporal median smoothing of a 3D+time dataset (this
 can be done more efficiently with program 3dTsmooth).

 Note that the physical directions of the x-, y-, and z-axes depend
 on how the dataset was acquired or constructed.  See the output of
 program 3dinfo to determine what direction corresponds to what axis.

 For convenience, the following abbreviations may be used in place of
 some common subscript combinations:

   [1,0,0,0] == +i    [-1, 0, 0, 0] == -i
   [0,1,0,0] == +j    [ 0,-1, 0, 0] == -j
   [0,0,1,0] == +k    [ 0, 0,-1, 0] == -k
   [0,0,0,1] == +l    [ 0, 0, 0,-1] == -l

 The median smoothing example can thus be abbreviated as

   -a fred+orig -b a+l -c a-l -expr 'median(a,b,c)'

 When a shift calls for a voxel that is outside of the dataset range,
 one of three things can happen:

   STOP => shifting stops at the edge of the dataset
   WRAP => shifting wraps back to the opposite edge of the dataset
   ZERO => the voxel value is returned as zero

 Which one applies depends on the setting of the shifting mode at the
 time the symbol using differential subscripting is defined.  The mode
 is set by one of the switches '-dsSTOP', '-dsWRAP', or '-dsZERO'.  The
 default mode is STOP.  Suppose that a dataset has range 0..99 in the
 x-direction.  Then when voxel 101 is called for, the value returned is

   STOP => value from voxel 99 [didn't shift past edge of dataset]
   WRAP => value from voxel 1  [wrapped back through opposite edge]
   ZERO => the number 0.0 

 You can set the shifting mode more than once - the most recent setting
 on the command line applies when a differential subscript symbol is
 encountered.
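
 For example, this sketch (hypothetical dataset and prefix names)
 computes an absolute first difference along the x-direction, with
 out-of-range neighbors returned as zero:

   3dcalc -dsZERO -a anat+orig -b a+i -expr 'abs(a-b)' -prefix xdiff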

N.B.: You can also use program 3dLocalstat to process data from a
      spatial neighborhood of each voxel; for example, to compute
      the maximum over a sphere of radius 9 mm placed around
      each voxel:
        3dLocalstat -nbhd 'SPHERE(9)' -stat max -prefix Amax9 A+orig

------------------------------------------------------------------------
ISSUES:
------ 

 * Complex-valued datasets cannot be processed, except via '-cx2r'.
 * This program is not very efficient (but is faster than it once was).
 * Differential subscripts slow the program down even more.

------------------------------------------------------------------------
------------------------------------------------------------------------
EXPRESSIONS:
----------- 

 As noted above, datasets are referred to by single letter variable names.
 Arithmetic expressions are allowed, using + - * / ** ^ and parentheses.
 C relational, boolean, and conditional expressions are NOT implemented!
 Built in functions include:

    sin  , cos  , tan  , asin  , acos  , atan  , atan2,       
    sinh , cosh , tanh , asinh , acosh , atanh , exp  ,       
    log  , log10, abs  , int   , sqrt  , max   , min  ,       
    J0   , J1   , Y0   , Y1    , erf   , erfc  , qginv, qg ,  
    rect , step , astep, bool  , and   , or    , mofn ,       
    sind , cosd , tand , median, lmode , hmode , mad  ,       
    gran , uran , iran , eran  , lran  , orstat,              
    mean , stdev, sem  , Pleg  , cbrt  , rhddc2, hrfbk4,hrfbk5

 where:
 * qg(x)    = reversed cdf of a standard normal distribution
 * qginv(x) = inverse function to qg
 * min, max, atan2 each take 2 arguments ONLY
 * J0, J1, Y0, Y1 are Bessel functions (see Watson)
 * Pleg(m,x) is the m'th Legendre polynomial evaluated at x
 * erf, erfc are the error and complementary error functions
 * sind, cosd, tand take arguments in degrees (vs. radians)
 * median(a,b,c,...) computes the median of its arguments
 * mad(a,b,c,...) computes the MAD of its arguments
 * mean(a,b,c,...) computes the mean of its arguments
 * stdev(a,b,c,...) computes the standard deviation of its arguments
 * sem(a,b,c,...) computes standard error of the mean of its arguments,
                  where sem(n arguments) = stdev(same)/sqrt(n)
 * orstat(n,a,b,c,...) computes the n-th order statistic of
    {a,b,c,...} - that is, the n-th value in size, starting
    at the bottom (e.g., orstat(1,a,b,c) is the minimum)
 * lmode(a,b,c,...) and hmode(a,b,c,...) compute the mode
    of their arguments - lmode breaks ties by choosing the
    smallest value with the maximal count, hmode breaks ties by
    choosing the largest value with the maximal count
    [median,lmode,hmode take a variable number of arguments]
 * gran(m,s) returns a Gaussian deviate with mean=m, stdev=s
 * uran(r)   returns a uniform deviate in the range [0,r]
 * iran(t)   returns a random integer in the range [0..t]
 * eran(s)   returns an exponentially distributed deviate
               with parameter s; mean=s
 * lran(t)   returns a logistically distributed deviate
               with parameter t; mean=0, stdev=t*1.814
 * hrfbk4(t,L) and hrfbk5(t,L) are the BLOCK4 and BLOCK5 hemodynamic
    response functions from 3dDeconvolve (L=stimulus duration in sec,
    and t is the time in sec since start of stimulus); for example:
 1deval -del 0.1 -num 400 -expr 'hrfbk5(t-2,20)' | 1dplot -stdin -del 0.1
    These HRF functions are scaled to return values in the range [0..1]
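
 For example, this sketch (the template dataset and prefix names are
 hypothetical) fills a copy of the template dataset's grid with
 unit-variance Gaussian noise via gran():

   3dcalc -a anat+orig -expr 'gran(0,1)' -prefix noise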

 You may use the symbol 'PI' to refer to the constant of that name.
 This is the only 2 letter symbol defined; all variables are
 referred to by 1 letter symbols.  The case of the expression is
 ignored (in fact, it is converted to uppercase as the first step
 in the parsing algorithm).

 The following functions are designed to help implement logical
 functions, such as masking of 3D volumes against some criterion:
       step(x)    = {1 if x>0        , 0 if x<=0},
       astep(x,y) = {1 if abs(x) > y , 0 otherwise} = step(abs(x)-y)
       rect(x)    = {1 if abs(x)<=0.5, 0 if abs(x)>0.5},
       bool(x)    = {1 if x != 0.0   , 0 if x == 0.0},
    notzero(x)    = bool(x),
     iszero(x)    = 1-bool(x) = { 0 if x != 0.0, 1 if x == 0.0 },
     equals(x,y)  = 1-bool(x-y) = { 1 if x == y , 0 if x != y },
   ispositive(x)  = { 1 if x > 0; 0 if x <= 0 },
   isnegative(x)  = { 1 if x < 0; 0 if x >= 0 },
   and(a,b,...,c) = {1 if all arguments are nonzero, 0 if any are zero}
    or(a,b,...,c) = {1 if any arguments are nonzero, 0 if all are zero}
  mofn(m,a,...,c) = {1 if at least 'm' arguments are nonzero, else 0 }
  argmax(a,b,...) = index of largest argument; = 0 if all args are 0
  argnum(a,b,...) = number of nonzero arguments
  pairmax(a,b,...)= finds the 'paired' argument that corresponds to the
                    maximum of the first half of the input arguments;
                    for example, pairmax(a,b,c,p,q,r) determines which
                    of {a,b,c} is the max, then returns corresponding
                    value from {p,q,r}; requires even number of args.
  pairmin(a,b,...)= Similar to pairmax, but for minimum; for example,
                     pairmin(a,b,c,p,q,r) finds the minimum of {a,b,c}
                    and returns the corresponding value from {p,q,r};
                      pairmin(3,2,7,5,-1,-2,-3,-4) = -2
                    (The 'pair' functions are Lukas Pezawas specials!)
  amongst(a,b,...)= Return value is 1 if any of the b,c,... values
                    equals the a value; otherwise, return value is 0.

  [These last 8 functions take a variable number of arguments.]
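
 For example, this sketch (the sub-brick indexes and thresholds are
 hypothetical) uses mofn() to mask voxels where at least 2 of 3
 statistics pass their thresholds:

   3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \
          -expr 'mofn(2,step(a-4.2),step(b-2.9),step(c-3.1))'      \
          -prefix mask_2of3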

 The following 27 new [Mar 1999] functions are used for statistical
 conversions, as in the program 'cdf':
   fico_t2p(t,a,b,c), fico_p2t(p,a,b,c), fico_t2z(t,a,b,c),
   fitt_t2p(t,a)    , fitt_p2t(p,a)    , fitt_t2z(t,a)    ,
   fift_t2p(t,a,b)  , fift_p2t(p,a,b)  , fift_t2z(t,a,b)  ,
   fizt_t2p(t)      , fizt_p2t(p)      , fizt_t2z(t)      ,
   fict_t2p(t,a)    , fict_p2t(p,a)    , fict_t2z(t,a)    ,
   fibt_t2p(t,a,b)  , fibt_p2t(p,a,b)  , fibt_t2z(t,a,b)  ,
   fibn_t2p(t,a,b)  , fibn_p2t(p,a,b)  , fibn_t2z(t,a,b)  ,
   figt_t2p(t,a,b)  , figt_p2t(p,a,b)  , figt_t2z(t,a,b)  ,
   fipt_t2p(t,a)    , fipt_p2t(p,a)    , fipt_t2z(t,a)    .

 See the output of 'cdf -help' for documentation on the meanings of
 and arguments to these functions.  The two functions below use the
 NIfTI-1 statistical codes to map between statistical values and
 cumulative distribution values:
    cdf2stat(val,code,p1,p2,p3)
    stat2cdf(val,code,p1,p2,p3)
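
 For example, this sketch (the sub-brick index and the degrees-of-freedom
 value are hypothetical) converts a t-statistic sub-brick to an
 equivalent z-score via fitt_t2z():

   3dcalc -a 'stats+orig[2]' -expr 'fitt_t2z(a,28)' -prefix zstat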

** If you modify a statistical sub-brick, you may want to use program
  '3drefit' to modify the dataset statistical auxiliary parameters.

** Computations are carried out in double precision before being
   truncated to the final output 'datum'.

** Note that the quotes around the expression are needed so the shell
   doesn't try to expand * characters, or interpret parentheses.

** Try the 'ccalc' program to see how the expression evaluator works.
   The arithmetic parser and evaluator is written in Fortran-77 and
   is derived from a program written long ago by RW Cox to facilitate
   compiling on an array processor hooked up to a VAX. (It's a mess, but
   it works - somewhat slowly - but hey, computers are fast these days.)

++ Compile date = Mar 13 2009




AFNI program: 3dclust


Program: 3dclust 
Author:  RW Cox et al 
Date:    21 Jul 2005 

3dclust - performs simple-minded cluster detection in 3D datasets       
                                                                        
     This program can be used to find clusters of 'active' voxels and   
     print out a report about them.                                     
      * 'Active' refers to nonzero voxels that survive the threshold    
         that you (the user) have specified                             
      * Clusters are defined by a connectivity radius parameter 'rmm'   
                                                                        
      Note: by default, this program clusters on the absolute values    
            of the voxels                                               
----------------------------------------------------------------------- 
Usage: 3dclust [editing options] [other options] rmm vmul dset ...      
-----                                                                   
                                                                        
Examples:                                                               
--------                                                                
                                                                        
    3dclust         -1clip   0.3  5 2000 func+orig'[1]'                 
    3dclust -1noneg -1thresh 0.3  5 2000 func+orig'[1]'                 
     3dclust -1noneg -1thresh 0.3  5 2000 func+orig'[1]' func+orig'[3]'
                                                                        
    3dclust -noabs  -1clip 0.5   -dxyz=1  1  10 func+orig'[1]'          
    3dclust -noabs  -1clip 0.5            5 700 func+orig'[1]'          
                                                                        
    3dclust -noabs  -2clip 0 999 -dxyz=1 1  10 func+orig'[1]'           
                                                                        
    3dclust                   -1clip 0.3  5 3000 func+orig'[1]'         
    3dclust -quiet            -1clip 0.3  5 3000 func+orig'[1]'         
    3dclust -summarize -quiet -1clip 0.3  5 3000 func+orig'[1]'         
    3dclust -1Dformat         -1clip 0.3  5 3000 func+orig'[1]' > out.1D
----------------------------------------------------------------------- 
                                                                        
Arguments (must be included on command line):                           
---------                                                               
                                                                        
   rmm            : cluster connection radius (in millimeters).         
                    All nonzero voxels closer than rmm millimeters      
                    (center-to-center distance) to the given voxel are  
                    included in the cluster.                            
                     * If rmm = 0, then clusters are defined by nearest-
                       neighbor connectivity                            
                                                                        
   vmul           : minimum cluster volume (micro-liters)               
                    i.e., determines the size of the volume cluster.    
                     * If vmul = 0, then all clusters are kept.         
                     * If vmul < 0, then the absolute vmul is the minimum
                          number of voxels allowed in a cluster.        
                                                                        
   dset           : input dataset (more than one allowed, but only the  
                    first sub-brick of the dataset)                     
                                                                        
 The results are sent to standard output (i.e., the screen)             
                                                                        
----------------------------------------------------------------------- 
                                                                        
Options:                                                                
-------                                                                 
                                                                        
* Editing options are as in 3dmerge (see 3dmerge -help)                 
  (including -1thresh, -1dindex, -1tindex, -dxyz=1 options)             
                                                                        
* -noabs      => Use the signed voxel intensities (not the absolute     
                 value) for calculation of the mean and Standard        
                 Error of the Mean (SEM)                                
                                                                        
* -summarize  => Write out only the total nonzero voxel                 
                 count and volume for each dataset                      
                                                                        
* -nosum      => Suppress printout of the totals                        
                                                                        
* -verb       => Print out a progress report (to stderr)                
                 as the computations proceed                            
                                                                        
* -1Dformat   => Write output in 1D format (now default). You can       
                 redirect the output to a .1D file and use the file     
                 as input to whereami for obtaining Atlas-based         
                 information on cluster locations.                      
                 See whereami -help for more info.                      
* -no_1Dformat=> Do not write output in 1D format.                      
                                                                        
* -quiet      => Suppress all non-essential output                      
                                                                        
* -mni        => If the input dataset is in +tlrc coordinates, this     
                 option will stretch the output xyz-coordinates to the  
                 MNI template brain.                                    
                                                                        
           N.B.1: The MNI template brain is about 5 mm higher (in S),   
                  10 mm lower (in I), 5 mm longer (in PA), and tilted   
                  about 3 degrees backwards, relative to the Talairach- 
                  Tournoux Atlas brain.  For more details, see          
                    http://www.mrc-cbu.cam.ac.uk/Imaging/mnispace.html  
           N.B.2: If the input dataset is not in +tlrc coordinates,     
                  then the only effect is to flip the output coordinates
                  to the 'LPI' (neuroscience) orientation, as if you    
                   gave the '-orient LPI' option.
                                                                        
* -isovalue   => Clusters will be formed only from contiguous (in the   
                 rmm sense) voxels that also have the same value.       
                                                                        
           N.B.:  The normal method is to cluster all contiguous        
                  nonzero voxels together.                              
                                                                        
* -isomerge   => Clusters will be formed from each distinct value       
                 in the dataset; spatial contiguity will not be         
                 used (but you still have to supply rmm and vmul        
                 on the command line).                                  
                                                                        
           N.B.:  'Clusters' formed this way may well have components   
                   that are widely separated!                           
                                                                        
* -prefix ppp => Write a new dataset that is a copy of the              
                 input, but with all voxels not in a cluster            
                 set to zero; the new dataset's prefix is 'ppp'         
                                                                        
           N.B.:  Use of the -prefix option only affects the            
                  first input dataset                                   
----------------------------------------------------------------------- 
                                                                        
E.g., 3dclust -1clip 0.3  5  3000 func+orig'[1]'                        
                                                                        
  The above command tells 3dclust to find potential cluster volumes for 
  dataset func+orig, sub-brick #1, where the threshold has been set     
   to 0.3 (i.e., voxels with activation values between -0.3 and 0.3 are
   ignored).  Voxels must be no more than 5 mm apart, and the cluster
   volume must be at least 3000 micro-liters in size.
                                                                        
Explanation of 3dclust Output:                                          
-----------------------------                                           
                                                                        
   Volume       : Volume that makes up the cluster, in microliters (mm^3)
                  (or the number of voxels, if -dxyz=1 is given)        
                                                                        
   CM RL        : Center of mass (CM) for the cluster in the Right-Left 
                  direction (i.e., the coordinates for the CM)          
                                                                        
   CM AP        : Center of mass for the cluster in the                 
                  Anterior-Posterior direction                          
                                                                        
   CM IS        : Center of mass for the cluster in the                 
                  Inferior-Superior direction                           
                                                                        
   minRL, maxRL : Bounding box for the cluster, min and max             
                  coordinates in the Right-Left direction               
                                                                        
   minAP, maxAP : Min and max coordinates in the Anterior-Posterior     
                  direction of the volume cluster                       
                                                                        
    minIS, maxIS : Min and max coordinates in the Inferior-Superior      
                  direction of the volume cluster                       
                                                                        
   Mean         : Mean value for the volume cluster                     
                                                                        
   SEM          : Standard Error of the Mean for the volume cluster     
                                                                        
   Max Int      : Maximum Intensity value for the volume cluster        
                                                                        
   MI RL        : Coordinate of the Maximum Intensity value in the      
                  Right-Left direction of the volume cluster            
                                                                        
   MI AP        : Coordinate of the Maximum Intensity value in the      
                  Anterior-Posterior direction of the volume cluster    
                                                                        
   MI IS        : Coordinate of the Maximum Intensity value in the      
                  Inferior-Superior direction of the volume cluster     
----------------------------------------------------------------------- 
                                                                        
Nota Bene:                                                              
                                                                        
   * The program does not work on complex- or rgb-valued datasets!      
                                                                        
   * Using the -1noneg option is strongly recommended!                  
                                                                        
   * 3D+time datasets are allowed, but only if you use the              
     -1tindex and -1dindex options.                                     
                                                                        
   * Bucket datasets are allowed, but you will almost certainly         
     want to use the -1tindex and -1dindex options with these.          
                                                                        
   * SEM values are not realistic for interpolated data sets!           
     A ROUGH correction is to multiply the SEM of the interpolated      
     data set by the square root of the number of interpolated          
     voxels per original voxel.                                         
                                                                        
   * If you use -dxyz=1, then rmm should be given in terms of           
     voxel edges (not mm) and vmul should be given in terms of          
     voxel counts (not microliters).  Thus, to connect to only          
     3D nearest neighbors and keep clusters of 10 voxels or more,       
     use something like '3dclust -dxyz=1 1.01 10 dset+orig'.            
     In the report, 'Volume' will be voxel count, but the rest of       
     the coordinate dependent information will be in actual xyz         
     millimeters.                                                       
                                                                        
  * The default coordinate output order is DICOM.  If you prefer        
    the SPM coordinate order, use the option '-orient LPI' or           
    set the environment variable AFNI_ORIENT to 'LPI'.  For more        
    information, see file README.environment.                           
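
   For example, a minimal sketch (the dataset name is hypothetical) that
   requests SPM-ordered coordinate output directly on the command line:

       3dclust -orient LPI -1noneg 5 3000 func+orig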

++ Compile date = Mar 13 2009




AFNI program: 3dcopy
Usage 1: 3dcopy [-verb] [-denote] old_prefix new_prefix
  Will copy all datasets using the old_prefix to use the new_prefix;
    3dcopy fred ethel
  will copy   fred+orig.HEAD    to ethel+orig.HEAD
              fred+orig.BRIK    to ethel+orig.BRIK
              fred+tlrc.HEAD    to ethel+tlrc.HEAD
              fred+tlrc.BRIK.gz to ethel+tlrc.BRIK.gz

Usage 2: 3dcopy old_prefix+view new_prefix
  Will copy only the dataset with the given view (orig, acpc, tlrc).

Usage 3: 3dcopy old_dataset new_prefix
  Will copy the non-AFNI formatted dataset (e.g., MINC, ANALYZE, CTF)
  to the AFNI formatted dataset with the given new prefix.
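
  For instance, a minimal sketch (file names hypothetical) converting a
  MINC volume to AFNI format:

      3dcopy anat.mnc anat_afni

  which should produce anat_afni+orig.HEAD and anat_afni+orig.BRIK.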

Notes:
* The new datasets have new ID codes.  If you are renaming
   multiple datasets (as in Usage 1), then if the old +orig
   dataset is the warp parent of the old +acpc and/or +tlrc
   datasets, then the new +orig dataset will be the warp
   parent of the new +acpc and +tlrc datasets.  If any other
   datasets point to the old datasets as anat or warp parents,
   they will still point to the old datasets, not these new ones.
* The BRIK files are copied if they exist, keeping the compression
   suffix unchanged (if any).
* The old_prefix may have a directory name attached in front,
   as in 'gerard/manley/hopkins'.
* If the new_prefix does not have a directory name attached
   (i.e., does NOT look like 'homer/simpson'), then the new
   datasets will be written in the current directory ('./').
* The new_prefix cannot JUST be a directory (unlike the Unix
   utility 'cp'); you must supply a filename prefix, even if it
   is identical to the filename prefix in old_prefix.
* The '-verb' option will print progress reports; otherwise, the
   program operates silently (unless an error is detected).
* The '-denote' option will remove any Notes from the file.

++ Compile date = Mar 13 2009




AFNI program: 3ddelay
++ 3ddelay: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: Ziad Saad (with help from B Douglas Ward)
The program estimates the time delay between each voxel time series    
in a 3D+time dataset and a reference time series[1][2].                
The estimated delays are relative to the reference time series.
For example, a delay of 4 seconds means that the voxel time series 
is delayed by 4 seconds with respect to the reference time series.

                                                                       
Usage:                                                                 
3ddelay                                                                 
-input fname       fname = filename of input 3d+time dataset           
                   DO NOT USE CATENATED timeseries! Time axis is assumed
                   to be continuous and not evil.
-ideal_file rname  rname = input ideal time series file name           
   The length of the reference time series should be equal to           
     that of the 3d+time data set. 
     The reference time series vector is stored in an ascii file.        
      The program assumes that there is one value per line and that all
     values in the file are part of the reference vector.                
      PS: Unlike with 3dfim and FIM in AFNI, values over 33333 are treated
     as part of the time series.                                          
-fs fs             Sampling frequency of the data time series, in Hz (i.e., 1/TR).
-T  Tstim          Stimulus period in seconds. 
                   If the stimulus is not periodic, you can set Tstim to 0.
[-prefix bucket]   The prefix for the results Brick.
                   The first subbrick is for Delay.
                   The second subbrick is for Covariance, which is an 
                   estimate of the power in voxel time series at the
                   frequencies present in the reference time series.
                   The third subbrick is for the Cross Correlation 
                   Coefficients between FMRI time series and reference time
                   series. The fourth subbrick contains estimates of the
                   Variance of voxel time series. 
                   The default prefix is the prefix of the input dset 
                   with a '.DEL' extension appended to it.

[-polort order]    Detrend input time series with polynomial of order
                   'order'. If you use -1 for order then the program will
                   suggest an order for you (about 1 for each 150 seconds)
                   The minimum recommended is 1. The default is -1 for auto
                   selection. This is the same as option Nort in the plugin
                   version.
[-nodtrnd]         Equivalent to polort 0, whereby only the mean is removed.
           NOTE:   Regardless of these detrending options, no detrending is 
                   done to the reference time series.

[-uS/-uD/-uR]      Units for delay estimates. (Seconds/Degrees/Radians)
                   You can't use Degrees or Radians as units unless 
                   you specify a value for Tstim > 0.
[-phzwrp]          Delay (or phase) wrap.
                   This switch maps delays from: 
                   (Seconds) 0->T/2 to 0->T/2 and T/2->T to -T/2->0
                   (Degrees) 0->180 to 0->180 and 180->360 to -180->0
                   (Radians) 0->pi to 0->pi and pi->2pi to -pi->0
                   You can't use this option unless you specify a 
                   value for Tstim > 0.
[-nophzwrp]        Do not wrap phase (default).

[-bias]            Do not correct for the bias in the estimates [1][2]
[-nobias | -correct_bias] Do correct for the bias in the estimates
                          (default).

[-dsamp]           Correct for slice timing differences        (default).
[-nodsamp]         Do not correct for slice timing differences.

[-mask mname]      mname = filename of 3d mask dataset                 
                   only voxels with non-zero values in the mask would be 
                   considered.                                           

[-nfirst fnum]     fnum = number of first dataset image to use in      
                     the delay estimate. (default = 0)                 
[-nlast  lnum]     lnum = number of last dataset image to use in       
                     the delay estimate. (default = last)              

[-co CCT]          Cross Correlation Coefficient threshold value.
                   This is only used to limit the ascii output (see below).

[-asc [out]]       Write the results to an ascii file for voxels with 
[-ascts [out]]     cross correlation coefficients larger than CCT.
                   If 'out' is not specified, a default name similar 
                   to the default output prefix is used.
                   -asc, only files 'out' and 'out.log' are written to disk
                   (see ahead)
                   -ascts, an additional file, 'out.ts', is written to disk
                   (see ahead)
                   There are 9 columns in 'out', which hold the following
                   values:
                    1- Voxel Index (VI) : Each voxel in an AFNI brick has a
                          unique index.
                          Indices map directly to XYZ coordinates.
                           See the AFNI plugin documentation for more info.
                    2..4- Voxel coordinates (X Y Z): Those are the voxel 
                          slice coordinates. You can see these coordinates
                          in the upper left side of the AFNI window.
                          To do so, you must first switch the voxel 
                          coordinate units from mm to slice coordinates. 
                          Define Datamode -> Misc -> Voxel Coords ?
                          PS: The coords that show up in the graph window
                              may be different from those in the upper left
                              side of AFNI's main window.
                    5- Duff : A value of no interest to you. It is preserved
                              for backward compatibility.
                    6- Delay (Del) : The estimated voxel delay.
                    7- Covariance (Cov) : Covariance estimate.
                    8- Cross Correlation Coefficient (xCorCoef) : 
                          Cross Correlation Coefficient.
                    9- Variance (VTS) : Variance of voxel's time series.

                   The file 'out' can be used as an input to two plugins:
                     '4Ddump' and '3D+t Extract'

                   The log file 'out.log' contains all parameter settings 
                   used for generating the output brick. 
                   It also holds any warnings generated by the plugin.
                   Some warnings, such as 'null time series ...' or
                   'Could not find zero crossing ...', are harmless.
                   I might remove them in future versions.

                   A line (L) in the file 'out.ts' contains the time series 
                   of the voxel whose results are written on line (L) in the
                   file 'out'.
                   The time series written to 'out.ts' do not contain the
                   ignored samples; they are detrended and have zero mean.
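
   An illustrative sketch (all file names hypothetical): for a dataset
   with TR=2s (so -fs 0.5) and a 30-second periodic stimulus, reporting
   delays in seconds:

       3ddelay -input epi+orig -ideal_file ref.1D -fs 0.5 -T 30 \
               -uS -prefix epi_del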

                                                                      
Random Comments/Advice:
   The longer your time series, the better. It is generally recommended that
   the largest delay be less than N/10, N being the time series' length.
   The algorithm does go all the way to N/2.

   If you have/find questions/comments/bugs about the plugin, 
   send me an E-mail: saadz@mail.nih.gov

                          Ziad Saad Dec 8 00.

   [1] : Bendat, J. S. (1985). The Hilbert transform and applications 
         to correlation measurements, Bruel and Kjaer Instruments Inc.
          
   [2] : Bendat, J. S. and G. A. Piersol (1986). Random Data analysis and
         measurement procedures, John Wiley & Sons.
   Author's publications on delay estimation using the Hilbert Transform:
   [3] : Saad, Z.S., et al., Analysis and use of FMRI response delays. 
         Hum Brain Mapp, 2001. 13(2): p. 74-93.
   [4] : Saad, Z.S., E.A. DeYoe, and K.M. Ropella, Estimation of FMRI 
         Response Delays.  Neuroimage, 2003. 18(2): p. 494-504.


++ Compile date = Mar 13 2009




AFNI program: 3ddot
Usage: 3ddot [options] dset1 dset2
Output = correlation coefficient between 2 dataset bricks
         - you can use sub-brick selectors on the dsets
         - the result is a number printed to stdout
Options:
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be averaged from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
  -mrange a b  Means to further restrict the voxels from
                 'mset' so that only those mask values
                 between 'a' and 'b' (inclusive) will
                 be used.  If this option is not given,
                 all nonzero values from 'mset' are used.
                 Note that if a voxel is zero in 'mset', then
                 it won't be included, even if a < 0 < b.
  -demean      Means to remove the mean from each volume
                 prior to computing the correlation.
  -dodot       Return the dot product (unscaled).
  -docoef      Return the least square fit coefficients
                 {a,b} so that dset2 is approximately a + b*dset1
  -dosums      Return the 5 numbers xbar=<x> ybar=<y>
                 <(x-xbar)^2> <(y-ybar)^2> <(x-xbar)(y-ybar)>
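
A minimal sketch (dataset names hypothetical) correlating two sub-bricks
within a mask, removing each volume's mean first:

    3ddot -demean -mask mask+orig 'func+orig[0]' 'func+orig[1]'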

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3ddup
Usage: 3ddup [options] dataset
 'Duplicates' a 3D dataset by making a warp-on-demand copy.
 Applications:
   - allows AFNI to resample a dataset to a new grid without
       destroying an existing data .BRIK
   - change a functional dataset to anatomical, or vice-versa

OPTIONS:
  -'type'           = Convert to the given 'type', which must be
                       chosen from the same list as in to3d
  -session dirname  = Write output into given directory (default=./)
  -prefix  pname    = Use 'pname' for the output dataset prefix
                       (default=dup)
N.B.: Even if the new dataset is anatomical, it will not contain
      any markers, duplicated from the original or otherwise.
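
An illustrative sketch (names hypothetical; 'fim' is assumed to be one of
the to3d type codes) that re-types an anatomical dataset as functional:

    3ddup -fim -prefix func_copy anat+orig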

++ Compile date = Mar 13 2009




AFNI program: 3dedge3
Usage: 3dedge3 [options] dset dset ...
Does 3D Edge detection using the library 3DEdge
by Gregoire Malandain (gregoire.malandain@sophia.inria.fr)

Options :
  -input iii  = Input dataset
  -verbose    = Print out some information along the way.
  -prefix ppp = Sets the prefix of the output dataset.
  -datum ddd  = Sets the datum of the output dataset.
  -fscale     = Force scaling of the output to the maximum integer range.
  -gscale     = Same as '-fscale', but also forces each output sub-brick
                  to get the same scaling factor.
  -nscale     = Don't do any scaling on output to byte or short datasets.
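
A minimal sketch (names hypothetical):

    3dedge3 -input anat+orig -prefix anat_edges -datum float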


References for the algorithms:
 -  Optimal edge detection using recursive filtering
    R. Deriche, International Journal of Computer Vision,
    pp 167-187, 1987.
 -  Recursive filtering and edge tracking: two primary tools
    for 3-D edge detection, O. Monga, R. Deriche, G. Malandain
    and J.-P. Cocquerez, Image and Vision Computing 4:9, 
    pp 203-214, August 1991.


++ Compile date = Mar 13 2009




AFNI program: 3dfim
++ 3dfim: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: R. W. Cox and B. D. Ward
 Program:   3dfim 

Purpose:   Calculate functional image from 3d+time data file. 
Usage:     3dfim  [-im1 num]  -input fname  -prefix name 
              -ideal fname  [-ideal fname] [-ort fname] 
 
 options are:
 -im1 num        num   = index of first image to be used in time series 
                         correlation; default is 1  
  
 -input fname    fname = filename of 3d + time data file for input
  
 -prefix name    name  = prefix of filename for saving functional data
  
 -ideal fname    fname = filename of a time series to which the image data
                         is to be correlated. 
  
 -percent p      Calculate percentage change due to the ideal time series 
                 p     = maximum allowed percentage change from baseline 
                         Note: values greater than p are set equal to p. 
  
 -ort fname      fname = filename of a time series to which the image data
                         is to be orthogonalized 
  
             N.B.: It is possible to specify more than
             one ideal time series file. Each one is separately correlated
             with the image time series and the one most highly correlated
             is selected for each pixel.  Multiple ideals are specified
             using more than one '-ideal fname' option, or by using the
             form '-ideal [ fname1 fname2 ... ]' -- this latter method
             allows the use of wildcarded ideal filenames.
             The '[' character that indicates the start of a group of
             ideals can actually be any ONE of these: [{/%
             and the ']' that ends the group can be:  ]}/%
  
             [Format of ideal time series files:
             ASCII; one number per line;
             Same number of lines as images in the time series;
             Value over 33333 --> don't use this image in the analysis]
  
             N.B.: It is also possible to specify more than
             one ort time series file.  The image time series is  
             orthogonalized to each ort time series.  Multiple orts are 
             specified by using more than one '-ort fname' option, 
             or by using the form '-ort [ fname1 fname2 ... ]'.  This 
             latter method allows the use of wildcarded ort filenames.
             The '[' character that indicates the start of a group of
              orts can actually be any ONE of these: [{/%
             and the ']' that ends the group can be:  ]}/%
  
             [Format of ort time series files:
             ASCII; one number per line;
             At least same number of lines as images in the time series]
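
 An illustrative sketch (file names hypothetical) that starts at image 3,
 orthogonalizes the data to a drift time series, and correlates against
 one ideal:

     3dfim -im1 3 -input epi+orig -prefix fim_out \
           -ideal ref.1D -ort drift.1D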
  
  

++ Compile date = Mar 13 2009




AFNI program: 3dfim+
++ 3dfim+: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
Program to calculate the cross-correlation of an ideal reference waveform  
with the measured FMRI time series for each voxel.                         
                                                                       
Usage:                                                                 
3dfim+                                                                 
-input fname       fname = filename of input 3d+time dataset           
[-input1D dname]   dname = filename of single (fMRI) .1D time series   
[-mask mname]      mname = filename of 3d mask dataset                 
[-nfirst fnum]     fnum = number of first dataset image to use in      
                     the cross-correlation procedure. (default = 0)    
[-nlast  lnum]     lnum = number of last dataset image to use in       
                     the cross-correlation procedure. (default = last) 
[-polort pnum]     pnum = degree of polynomial corresponding to the    
                     baseline model  (pnum = 0, 1, etc.)               
                     (default: pnum = 1). Use -1 for no baseline model.
[-fim_thr p]       p = fim internal mask threshold value (0 <= p <= 1) 
                     (default: p = 0.0999)                             
[-cdisp cval]      Write (to screen) results for those voxels          
                     whose correlation stat. > cval  (0 <= cval <= 1)  
                     (default: disabled)                               
[-ort_file sname]  sname = input ort time series file name             
-ideal_file rname  rname = input ideal time series file name           
                                                                       
            Note:  The -ort_file and -ideal_file commands may be used  
                   more than once.                                     
            Note:  If files sname or rname contain multiple columns,   
                   then ALL columns will be used as ort or ideal       
                   time series.  However, individual columns or        
                   a subset of columns may be selected using a file    
                   name specification like 'fred.1D[0,3,5]', which     
                   indicates that only columns #0, #3, and #5 will     
                   be used for input.                                  

[-out param]       Flag to output the specified parameter, where       
                   the string 'param' may be any one of the following: 
                                                                       
    Fit Coef       L.S. fit coefficient for Best Ideal                
  Best Index       Index number for Best Ideal                        
    % Change       P-P amplitude of signal response / Baseline        
    Baseline       Average of baseline model response                 
 Correlation       Best Ideal product-moment correlation coefficient  
  % From Ave       P-P amplitude of signal response / Average         
     Average       Baseline + average of signal response              
  % From Top       P-P amplitude of signal response / Topline         
     Topline       Baseline + P-P amplitude of signal response        
 Sigma Resid       Std. Dev. of residuals from best fit               
         All       This specifies all of the above parameters       
 Spearman CC       Spearman correlation coefficient                   
 Quadrant CC       Quadrant correlation coefficient                   
                                                                       
            Note:  Multiple '-out' commands may be used.               
            Note:  If a parameter name contains embedded spaces, the
                   entire parameter name must be enclosed by quotes,   
                   e.g.,  -out 'Fit Coef'                                   
                                                                       
[-bucket bprefix]  Create one AFNI 'bucket' dataset containing the     
                   parameters of interest, as specified by the above   
                   '-out' commands.                                    
                   The output 'bucket' dataset is written to a file    
                   with the prefix name bprefix.                       
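
An illustrative sketch (file names hypothetical) writing the fit
coefficient and correlation for the best ideal into one bucket dataset:

    3dfim+ -input epi+orig -polort 1 -ideal_file ideal.1D \
           -out 'Fit Coef' -out Correlation -bucket stats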

++ Compile date = Mar 13 2009




AFNI program: 3dfractionize
Usage: 3dfractionize [options]

* For each voxel in the output dataset, computes the fraction
    of it that is occupied by nonzero voxels from the input.
* The fraction is stored as a short in the range 0..10000,
    indicating fractions running from 0..1.
* The template dataset is used only to define the output grid;
    its brick(s) will not be read into memory.  (The same is
    true of the warp dataset, if it is used.)
* The actual values stored in the input dataset are irrelevant,
    except in that they are zero or nonzero (UNLESS the -preserve
    option is used).

The purpose of this program is to allow the resampling of a mask
dataset (the input) from a fine grid to a coarse grid (defined by
the template).  When you are using the output, you will probably
want to threshold the mask so that voxels with a tiny occupancy
fraction aren't used.  This can be done in 3dmaskave, by using
3dcalc, or with the '-clip' option below.
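
For example, a hedged 3dcalc sketch (prefix names hypothetical): since the
output stores fractions as shorts in the range 0..10000, keeping only
voxels that are at least 20% occupied can be done with

    3dcalc -a fractionize+orig -expr 'step(a-1999)' -prefix frac_mask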

Options are [the first 2 are 'mandatory options']:
  -template tset  = Use dataset 'tset' as a template for the output.
                      The output dataset will be on the same grid as
                      this dataset.

  -input iset     = Use dataset 'iset' for the input.
                      Only the sub-brick #0 of the input is used.
                      You can use the sub-brick selection technique
                      described in '3dcalc -help' to choose the
                      desired sub-brick from a multi-brick dataset.

  -prefix ppp     = Use 'ppp' for the prefix of the output.
                      [default prefix = 'fractionize']

  -clip fff       = Clip off voxels that are less than 'fff' occupied.
                      'fff' can be a number between 0.0 and 1.0, meaning
                      the fraction occupied, can be a number between 1.0
                      and 100.0, meaning the percent occupied, or can be
                      a number between 100.0 and 10000.0, meaning the
                      direct output value to use as a clip level.
                   ** Some sort of clipping is desirable; otherwise,
                        an output voxel that is barely overlapped by a
                        single nonzero input voxel will enter the mask.
                      [default clip = 0.0]

  -warp wset      = If this option is used, 'wset' is a dataset that
                      provides a transformation (warp) from +orig
                      coordinates to the coordinates of 'iset'.
                      In this case, the output dataset will be in
                      +orig coordinates rather than the coordinates
                      of 'iset'.  With this option:
                   ** 'tset' must be in +orig coordinates
                   ** 'iset' must be in +acpc or +tlrc coordinates
                   ** 'wset' must be in the same coordinates as 'iset'

  -preserve       = When this option is used, the program will copy
     or               the nonzero values of input voxels to the output
  -vote               dataset, rather than create a fractional mask.
                      Since each output voxel might be overlapped
                      by more than one input voxel, the program 'votes'
                      for which input value to preserve.  For example,
                      if input voxels with value=1 occupy 10% of an
                      output voxel, and inputs with value=2 occupy 20%
                      of the same voxel, then the output value in that
                      voxel will be set to 2 (provided that 20% is >=
                      to the clip fraction).
                   ** Voting can only be done on short-valued datasets,
                        or on byte-valued datasets.
                   ** Voting is a relatively time-consuming option,
                        since a separate loop is made through the
                        input dataset for each distinct value found.
                   ** Combining this with the -warp option does NOT
                        make a general +tlrc to +orig transformer!
                        This is because for any value to survive the
                        vote, its fraction in the output voxel must be
                        >= clip fraction, regardless of other values
                        present in the output voxel.

Sample usage:

  1. Compute the fraction of each voxel occupied by the warped input.

          3dfractionize -template grid+orig -input data+tlrc  \
                        -warp anat+tlrc -clip 0.2

  2. Apply the (inverse) -warp transformation to transform the -input
     from +tlrc space to +orig space, storing it according to the grid
     of the -template.
      A voxel in the output dataset gets the value that occupies most of
      its volume, provided that value occupies at least 20% of the voxel.

     Note that the essential difference from above is '-preserve'.

          3dfractionize -template grid+orig -input data+tlrc  \
                        -warp anat+tlrc -preserve -clip 0.2   \
                        -prefix new_data

This program will also work in going from a coarse grid to a fine grid,
but it isn't clear that this capability has any purpose.
-- RWCox - February 1999
         - October 1999: added -warp and -preserve options

++ Compile date = Mar 13 2009




AFNI program: 3dhistog
Compute histogram of 3D Dataset
Usage: 3dhistog [editing options] [histogram options] dataset

The editing options are the same as in 3dmerge
 (i.e., the options starting with '-1').

The histogram options are:
  -nbin #   Means to use '#' bins [default=100]
            Special Case: for short or byte dataset bricks,
                          set '#' to zero to have the number
                          of bins set by the brick range.
  -dind i   Means to take data from sub-brick #i, rather than #0
  -omit x   Means to omit the value 'x' from the count;
              -omit can be used more than once to skip multiple values.
  -mask m   Means to use dataset 'm' to determine which voxels to use
  -doall    Means to include all sub-bricks in the calculation;
              otherwise, only sub-brick #0 (or that from -dind) is used.
  -notit    Means to leave the title line off the output.
  -log10    Output log10() of the counts, instead of the count values.
  -min x    Means specify minimum of histogram.
  -max x    Means specify maximum of histogram.
  -unq U.1D Writes out the sorted unique values to file U.1D.
            This option is not allowed for float data.
            If you have a problem with this, write
            Ziad S. Saad (saadz@mail.nih.gov)

The histogram is written to stdout.  Use redirection '>' if you
want to save it to a file.  The format is a title line, then
three numbers printed per line:
  bottom-of-interval  count-in-interval  cumulative-count
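
For example (names hypothetical), a 50-bin histogram over a mask,
omitting zero values and saved via redirection:

    3dhistog -nbin 50 -mask mask+orig -omit 0 anat+orig > anat_hist.1D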

-- by RW Cox (V Roopchansingh added the -mask option)

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dinfo

Prints out sort-of-useful information from a 3D dataset's header
Usage: 3dinfo [-verb OR -short] dataset [dataset ...]
  -verb means to print out lots of stuff
  -VERB means even more stuff
  -short means to print out less stuff [now the default]

Alternative Usage (without either of the above options):
  3dinfo -label2index label dataset
  * Prints to stdout the index corresponding to the sub-brick with
    the name label, or a blank line if label not found.
  * If this option is used, then the ONLY output is this sub-brick index.
    This is intended to be used in a script, as in this tcsh fragment:
      set face = `3dinfo -label2index Face#0 AA_Decon+orig`
      set hous = `3dinfo -label2index House#0 AA_Decon+orig`
      3dcalc -a AA_Decon+orig"[$face]" -b AA_Decon+orig"[$hous]" ...
  * Added per the request and efforts of Colm Connolly.

++ Compile date = Mar 13 2009




AFNI program: 3dmaskave
Usage: 3dmaskave [options] dataset

Computes average of all voxels in the input dataset
which satisfy the criterion in the options list.
If no options are given, then all voxels are included.

------------------------------------------------------------
Examples:

1. compute the average timeseries in epi_r1+orig, over voxels
   that are set (any non-zero value) in the dataset, ROI+orig:

    3dmaskave -mask ROI+orig epi_r1+orig

2. restrict the ROI to values of 3 or 4, and save (redirect)
   the output to the text file run1_roi_34.txt:

    3dmaskave -mask ROI+orig -quiet -mrange 3 4   \
              epi_r1+orig > run1_roi_34.txt
------------------------------------------------------------

Options:
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be averaged from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
               SPECIAL CASE: If 'mset' is the string 'SELF',
                             then the input dataset will be
                             used to mask itself.  That is,
                             only nonzero voxels from the
                             #miv sub-brick will be used.
  -mindex miv  Means to use sub-brick #'miv' from the mask
                 dataset.  If not given, miv=0.
  -mrange a b  Means to further restrict the voxels from
                 'mset' so that only those mask values
                 between 'a' and 'b' (inclusive) will
                 be used.  If this option is not given,
                 all nonzero values from 'mset' are used.
                 Note that if a voxel is zero in 'mset', then
                 it won't be included, even if a < 0 < b.

  -dindex div  Means to use sub-brick #'div' from the dataset.
                 If not given, all sub-bricks will be processed.
  -drange a b  Means to only include voxels from the dataset whose
                 values fall in the range 'a' to 'b' (inclusive).
                 Otherwise, all voxel values are included.

  -slices p q  Means to only include voxels from the dataset
                 whose slice numbers are in the range 'p' to 'q'
                 (inclusive).  Slice numbers range from 0 to
                 NZ-1, where NZ can be determined from the output
                 of program 3dinfo.  The default is to include
                 data from all slices.
                 [There is no provision for geometrical voxel]
                 [selection except in the slice (z) direction]

  -sigma       Means to compute the standard deviation as well
                 as the mean.
  -median      Means to compute the median instead of the mean.
  -max         Means to compute the max instead of the mean.
  -min         Means to compute the min instead of the mean.
                 (-sigma is ignored with -median, -max, or -min)
  -dump        Means to print out all the voxel values that
                 go into the average.
  -udump       Means to print out all the voxel values that
                 go into the average, UNSCALED by any internal
                 factors.
                 N.B.: the scale factors for a sub-brick
                       can be found using program 3dinfo.
  -indump      Means to print out the voxel indexes (i,j,k) for
                 each dumped voxel.  Has no effect if -dump
                 or -udump is not also used.
                 N.B.: if nx,ny,nz are the number of voxels in
                       each direction, then the array offset
                       in the brick corresponding to (i,j,k)
                       is i+j*nx+k*nx*ny.
 -q     or
 -quiet        Means to print only the minimal results.
               This is useful if you want to create a *.1D file.

The output is printed to stdout (the terminal), and can be
saved to a file using the usual redirection operation '>'.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dmaskdump
Usage: 3dmaskdump [options] dataset dataset ...
Writes to an ASCII file values from the input datasets
which satisfy the mask criteria given in the options.
If no options are given, then all voxels are included.
This might result in a GIGANTIC output file.
Options:
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be printed from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
  -mrange a b  Means to further restrict the voxels from
                 'mset' so that only those mask values
                 between 'a' and 'b' (inclusive) will
                 be used.  If this option is not given,
                 all nonzero values from 'mset' are used.
                 Note that if a voxel is zero in 'mset', then
                 it won't be included, even if a < 0 < b.
  -index       Means to write out the dataset index values.
  -noijk       Means not to write out the i,j,k values.
  -xyz         Means to write the x,y,z coordinates from
                 the 1st input dataset at the start of each
                 output line.  These coordinates are in
                 the 'RAI' (DICOM) order.
  -o fname     Means to write output to file 'fname'.
                 [default = stdout, which you won't like]

  -cmask 'opts' Means to execute the options enclosed in single
                  quotes as a 3dcalc-like program, and produce
                   a mask from the resulting 3D brick.
       Examples:
        -cmask '-a fred+orig[7] -b zork+orig[3] -expr step(a-b)'
                  produces a mask that is nonzero only where
                  the 7th sub-brick of fred+orig is larger than
                  the 3rd sub-brick of zork+orig.
        -cmask '-a fred+orig -expr 1-bool(k-7)'
                  produces a mask that is nonzero only in the
                  7th slice (k=7); combined with -mask, you
                  could use this to extract just selected voxels
                  from particular slice(s).
       Notes: * You can use both -mask and -cmask in the same
                  run - in this case, only voxels present in
                  both masks will be dumped.
              * Only single sub-brick calculations can be
                  used in the 3dcalc-like calculations -
                  if you input a multi-brick dataset here,
                  without using a sub-brick index, then only
                  its 0th sub-brick will be used.
              * Do not use quotes inside the 'opts' string!

  -xbox x y z   Means to put a 'mask' down at the dataset (not DICOM)
                  coordinates of 'x y z' mm.  By default, this box is
                  1 voxel wide in each direction.  You can specify
                  instead a range of coordinates using a colon ':'
                  after the coordinates; for example:
                    -xbox 22:27 31:33 44
                  means a box from (x,y,z)=(22,31,44) to (27,33,44).
           NOTE: dataset coordinates are NOT the coordinates you
                 typically see in AFNI's main controller top left corner.
                 Those coordinates are typically in either RAI/DICOM order
                 or in LPI/SPM order and should be used with -dbox and
                 -nbox, respectively.

  -dbox x y z   Means the same as -xbox, but the coordinates are in
                  RAI/DICOM order (+x=Left, +y=Posterior, +z=Superior).
                  If your AFNI environment variable AFNI_ORIENT is set to
                  RAI, these coordinates correspond to those you'd enter
                  into the 'Jump to (xyz)' control in AFNI, and to
                  those output by 3dclust.
            NOTE: It is possible to make AFNI and/or 3dclust output 
                  coordinates in an order different from the one specified 
                  by AFNI_ORIENT, but you'd have to work hard on that. 
                  In any case, the order is almost always specified along 
                  with the coordinates. If you see RAI/DICOM, then use 
                  -dbox. If you see LPI/SPM then use -nbox. 

  -nbox x y z   Means the same as -xbox, but the coordinates are in
                  LPI/SPM or 'neuroscience' order where the signs of the
                  x and y coordinates are reversed relative to RAI/DICOM.
                  (+x=Right, +y=Anterior, +z=Superior)

  -ibox i j k   Means to put a 'mask' down at the voxel indexes
                  given by 'i j k'.  By default, this picks out
                  just 1 voxel.  Again, you can use a ':' to specify
                  a range (now in voxels) of locations.
       Notes: * Boxes are cumulative; that is, if you specify more
                  than 1 box, you'll get more than one region.
              * If a -mask and/or -cmask option is used, then
                  the intersection of the boxes with these masks
                  determines which voxels are output; that is,
                  a voxel must be inside some box AND inside the
                  mask in order to be selected for output.
              * If boxes select more than 1 voxel, the output lines
                  are NOT necessarily in the order of the options on
                  the command line.
              * Coordinates (for -xbox, -dbox, and -nbox) are relative
                  to the first dataset on the command line.

  -nozero       Means to skip output of any voxel where all the
                  data values are zero.

  -n_rand N_RAND Means to keep only N_RAND randomly selected
                 voxels from what would have been the output.

  -n_randseed SEED  Seed the random number generator with SEED,
                    instead of the default seed of 1234

  -niml name    Means to output data in the XML/NIML format that
                  is compatible with input back to AFNI via
                  the READ_NIML_FILE command.
              * 'name' is the 'target_name' for the NIML header
                  field, which is the name that will be assigned
                  to the dataset when it is sent into AFNI.
              * Also implies '-noijk' and '-xyz' and '-nozero'.

  -quiet        Means not to print progress messages to stderr.

Inputs after the last option are datasets whose values you
want to be dumped out.  These datasets (and the mask) can
use the sub-brick selection mechanism (described in the
output of '3dcalc -help') to choose which values you get.

Each selected voxel gets one line of output:
  i j k val val val ....
where (i,j,k) = 3D index of voxel in the dataset arrays,
and val = the actual voxel value.  Note that if you want
the mask value to be output, you have to include that
dataset in the dataset input list again, after you use
it in the '-mask' option.

* To eliminate the 'i j k' columns, use the '-noijk' option.
* To add spatial coordinate columns, use the '-xyz' option.
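
For example, a minimal sketch (names hypothetical) that writes the voxel
values plus DICOM-ordered coordinates, without the i,j,k columns, for
mask values 3 or 4 only:

    3dmaskdump -mask ROI+orig -mrange 3 4 -noijk -xyz \
               -o roi_vals.txt func+orig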

N.B.: This program doesn't work with complex-valued datasets!

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dmatcalc
Usage: 3dmatcalc [options]
Apply a matrix to a dataset, voxel-by-voxel, to produce a new
dataset.

* If the input dataset has 'N' sub-bricks, and the input matrix
   is 'MxN', then the output dataset will have 'M' sub-bricks; the
   results in each voxel will be the result of extracting the N
   values from the input at that voxel, multiplying the resulting
    N-vector by the matrix, and outputting the resulting M-vector.

* If the input matrix has 'N+1' columns, then it will be applied
   to an (N+1)-vector whose first N elements are from the dataset
   and the last value is 1.  This convention allows the addition
   of a constant vector (the last row of the matrix) to each voxel.
* The output dataset is always stored in float format.
* Useful applications are left to your imagination.  The example
   below is pretty fracking hopeless.  Something more useful might
   be to project a 3D+time dataset onto some subspace, then run
   3dpc on the results.


OPTIONS:
-------
 -input ddd  = read in dataset 'ddd'  [required option]
 -matrix eee = specify matrix, which can be done as a .1D file
                or as an expression in the syntax of 1dmatcalc
                [required option]
 -prefix ppp = write to dataset with prefix 'ppp'
 -mask mmm   = only apply to voxels in the mask; other voxels
                will be set to all zeroes

EXAMPLE:
-------
Assume dataset 'v+orig' has 50 sub-bricks:
 3dmatcalc -input v+orig -matrix '&read(1D:50@1,\,50@0.02) &transp' -prefix w
The -matrix option computes a 2x50 matrix, whose first row is all 1's
and whose second row is all 0.02's.  Thus, the output dataset w+orig has
2 sub-bricks, the first of which is the voxel-wise sum of all 50 inputs,
and the second is the voxel-wise average (since 0.02=1/50).

-- Zhark, Emperor -- April 2006

++ Compile date = Mar 13 2009




AFNI program: 3dmatmult
-------------------------------------------------------------------------
Multiply AFNI datasets slice-by-slice as matrices.

If dataset A has Ra rows and Ca columns (per slice), and dataset B has
Rb rows and Cb columns (per slice), multiply each slice pair as matrices
to obtain a dataset with Ra rows and Cb columns.  Here Ca must equal Rb
and the number of slices must be equal.

In practice the first dataset will probably be a transformation matrix
(or a sequence of them) while the second dataset might just be an image.
For this reason, the output dataset will be based on inputB.

----------------------------------------
examples:

    3dmatmult -inputA matrix+orig -inputB image+orig -prefix transformed

    3dmatmult -inputA matrix+orig -inputB image+orig  \
              -prefix transformed -datum float -verb 2

----------------------------------------
informational command arguments (execute option and quit):

    -help                   : show this help
    -hist                   : show program history
    -ver                    : show program version

----------------------------------------
required command arguments:

    -inputA DSET_A          : specify first (matrix) dataset

        The slices of this dataset might be transformation matrices.

    -inputB DSET_B          : specify second (matrix) dataset

        This dataset might be any image.

    -prefix PREFIX          : specify output dataset prefix

        This will be the name of the product (output) dataset.

----------------------------------------
optional command arguments:

    -datum TYPE             : specify output data type

        Valid TYPEs are 'byte', 'short' and 'float'.  The default is
        that of the inputB dataset.

    -verb LEVEL             : specify verbosity level

        The default level is 1, while 0 is considered 'quiet'.

----------------------------------------
* If you need to re-orient a 3D dataset so that the rows, columns
  and slices are correct for 3dmatmult, you can use one of the
  programs 3daxialize or 3dresample for this purpose.

* To multiply a constant matrix into a vector at each voxel, the
  program 3dmatcalc is the proper tool.

----------------------------------------------------------------------
R. Reynolds    (requested by W. Gaggl)

3dmatmult version 0.0, 29 September 2008
compiled: Mar 13 2009




AFNI program: 3dmaxima
3dmaxima - used to locate extrema in a functional dataset.

   This program reads a functional dataset and locates any relative extrema
   (maxima or minima, depending on the user option).  A _relative_
   maximum is a point that is greater than all neighbors (not necessarily
   greater than all other values in the sub-brick).  The output from this
   process can be text based (sent to the terminal window) and it can be a
   mask (integral) dataset, where the locations of the extrema are set.

   When writing a dataset, it is often useful to set a sphere around each
   extremum, rather than just setting individual voxels.  This makes viewing those
   locations much more reasonable.  Also, if the 'Sphere Values' option is
   set to 'N to 1', the sphere around the most extreme voxel will get the
   value N, giving it the 'top' color in afni (and so on, down to 1).

   Notes : The only required option is the input dataset.
           Input datasets must be of type short.
           All distances are in voxel units.
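
   As an illustrative sketch (names and threshold hypothetical), combining
   the options described below:

       3dmaxima -input func+orig'[7]' -thresh 9.0 -min_dist 4 \
                -out_rad 2 -spheres_Nto1 -prefix maxima_mask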

----------------------------------------------------------------------
                        ***  Options  ***

-----  Input Dset:  -----

   -input DSET           : specify input dataset

         e.g. -input func+orig'[7]'

       Only one sub-brick may be specified.  So if a dataset has multiple
       sub-bricks, the [] selector must be used.

-----  Output Dset:  -----

   -prefix PREFIX        : prefix for an output mask dataset

         e.g. -prefix maskNto1

       This dataset may be viewed as a mask.  It will have a value set at
       the location of any selected extrema.  The -out_rad option can be
       used to change those points to 'spheres'.

   -spheres_1            : [flag] set all output values to 1

       This is the default, which sets all values in the output dataset
       to 1.  This is for the extreme points, and for the spheres centered
       around them.

   -spheres_1toN         : [flag] output values will range from 1 to N

       In this case, the most extreme voxel will be set with a value of 1.
       The next most extreme voxel will get 2, and so on.

   -spheres_Nto1         : [flag] output values will range from N to 1

       With this option, the highest extrema will be set to a value of N,
       where N equals the number of reported extrema.  The advantage of
       this is that the most extreme point will get the highest color in
       afni.

-----  Threshold:  -----

   -thresh CUTOFF        : provides a cutoff value for extrema

         e.g. -thresh 17.4

       Extrema not meeting this cutoff will not be considered.
       Note that if the '-neg_ext' option is applied, the user
       will generally want a negative threshold.

-----  Separation:  -----

   -min_dist VOXELS      : minimum acceptable distance between extrema

         e.g. -min_dist 4

       Less significant extrema which are close to more significant extrema
       will be discounted in some way, depending on the 'neighbor style'
       options.

       See '-n_style_sort' and '-n_style_weight_ave' for more information.

       Note that the distance is in voxels, not mm.

-----  Output Size:  -----

   -out_rad SIZE         : set the output radius around extrema voxels

         e.g. -out_rad 9

       If the user wants the output BRIK to consist of 'spheres' centered
       at extrema points, this option can be used to set the radius for
       those spheres.  Note again that this is in voxel units.

-----  Neighbor:  -----

   If extrema are not as far apart as is specified by the '-min_dist'
   option, the neighbor style options specify how to handle the points.

   -n_style_sort         : [flag] use 'Sort-n-Remove' style (default)

       The extrema are sorted by magnitude.  For each extremum (which has
       not previously been removed), all less significant extrema neighbors
       within the separation radius (-min_dist) are removed.

       See '-min_dist' for more information.

   -n_style_weight_ave   : [flag] use 'Weighted-Average' style

       Again, traverse the sorted list of extrema.  Replace the current
       extremum with the center of mass of all extrema within the Separation
       radius of the current point, removing all others within this radius.

       This should not change the number of extrema; it should only shift
       the locations.

-----  Params:  -----

   -neg_ext              : [flag] search for negative extrema (minima)

       This will search for the minima of the dataset.
       Note that a negative threshold may be desired.

   -true_max             : [flag] extrema may not have equal neighbors

       By default, points may be considered extrema even if they have a
       neighbor with the same value.  This flag option requires extrema
       to be strictly greater than any of their neighbors.

       With this option, extrema locations that have neighbors at the same
       value are ignored.

-----  Output Text:  -----

   -debug LEVEL          : output extra information to the terminal

       e.g. -debug 2

   -no_text              : [flag] do not display the extrema points as text

   -coords_only          : [flag] only output coordinates (no text or vals)

-----  Output Coords:  -----

   -dset_coords          : [flag] display output in the dataset orientation

       By default, the xyz-coordinates are displayed in DICOM orientation
       (RAI), i.e. right, anterior and inferior coordinates are negative,
       and they are printed in that order (RL, then AP, then IS).

       If this flag is set, the dataset orientation is used, whichever of
       the 48 it happens to be.

       Note that in either case, the output orientation is printed above
       the results in the terminal window, to remind the user.

-----  Other :  -----

   -help                 : display this help

   -hist                 : display module history

   -ver                  : display version number

Author: R Reynolds




AFNI program: 3dmerge
Program 3dmerge 
This program has 2 different functions:
 (1) To edit 3D datasets in various ways (threshold, blur, cluster, ...);
 (2) To merge multiple datasets in various ways (average, max, ...).
Either or both of these can be applied.

The 'editing' operations are controlled by options that start with '-1',
which indicates that they apply to individual datasets
(e.g., '-1blur_fwhm').

The 'merging' operations are controlled by options that start with '-g',
which indicate that they apply to the entire group of input datasets
(e.g., '-gmax').

----------------------------------------------------------------------
Usage: 3dmerge [options] datasets ...

Examples:

  1. Apply a 4.0mm FWHM Gaussian blur to EPI run 7.

       3dmerge -1blur_fwhm 4.0 -doall -prefix e1.run7_blur run7+orig

* These examples are based on a data grid of 3.75 x 3.75 x 3.5, in mm.
  So a single voxel has a volume of ~49.22 mm^3 (mvul), and a 40 voxel
  cluster has a volume of ~1969 mm^3 (as used in some examples).

  2. F-stat only:

     Cluster based on a threshold of F=10 (F-stats are in sub-brick #0),
     and require a volume of 40 voxels (1969 mm^3).  The output will be
     the same F-stats as in the input, but subject to the threshold and
     clustering.

       3dmerge -1clust 3.76 1969 -1thresh 10.0    \
               -prefix e2.f10 stats+orig'[0]'

  3. F-stat only:

     Perform the same clustering (as in #2), but apply the radius and
     cluster size in terms of cubic millimeter voxels (as if the voxels
     were 1x1x1).  So add '-dxyz=1', and adjust rmm and mvul.

       3dmerge -dxyz=1 -1clust 1 40 -1thresh 10.0    \
               -prefix e3.f10 stats+orig'[0]'

  4. t-stat and beta weight:

     For some condition, our beta weight is in sub-brick #4, with the
     corresponding t-stat in sub-brick #5.  Cluster based on 40 voxels
     and a t-stat threshold of 3.25.  Output the data from the beta
     weights, not the t-stats.

       3dmerge -dxyz=1 -1clust 1 40 -1thresh 3.25    \
               -1tindex 5 -1dindex 4                 \
               -prefix e4.t3.25 stats+orig

  5. t-stat mask:

     Apply the same threshold and cluster as in #4, but output a mask.
     Since there are 5 clusters found in this example, the values in
     the mask will be from 1 to 5, representing the largest cluster to
     the smallest.  Use -1clust_order on sub-brick 5.

       3dmerge -dxyz=1 -1clust_order 1 40 -1thresh 3.25    \
               -prefix e5.mask5 stats+orig'[5]'

     Note: this should match the 3dclust output from:

       3dclust -1thresh 3.25 -dxyz=1 1 40 stats+orig'[5]'

----------------------------------------------------------------------
EDITING OPTIONS APPLIED TO EACH INPUT DATASET:
  -1thtoin         = Copy threshold data over intensity data.
                       This is only valid for datasets with some
                       thresholding statistic attached.  All
                       subsequent operations apply to this
                       substituted data.
  -2thtoin         = The same as -1thtoin, but do NOT scale the
                       threshold values from shorts to floats when
                       processing.  This option is only provided
                       for compatibility with the earlier versions
                       of the AFNI package '3d*' programs.
  -1noneg          = Zero out voxels with negative intensities
  -1abs            = Take absolute values of intensities
  -1clip val       = Clip intensities in range (-val,val) to zero
  -2clip v1 v2     = Clip intensities in range (v1,v2) to zero
  -1uclip val      = These options are like the above, but do not apply
  -2uclip v1 v2        any automatic scaling factor that may be attached
                       to the data.  These are for use only in special
                       circumstances.  (The 'u' means 'unscaled'.  Program
                       '3dinfo' can be used to find the scaling factors.)
               N.B.: Only one of these 'clip' options can be used; you cannot
                        combine them to perform multiple clipping operations.
  -1thresh thr     = Use the threshold data to censor the intensities
                       (only valid for 'fith', 'fico', or 'fitt' datasets)
                       (or if the threshold sub-brick is set via -1tindex)
               N.B.: The value 'thr' is floating point, in the range
                           0.0 < thr < 1.0  for 'fith' and 'fico' datasets,
                       and 0.0 < thr < 32.7 for 'fitt' datasets.
  -2thresh t1 t2   = Zero out voxels where the threshold sub-brick value
                       lies between 't1' and 't2' (exclusive).  If t1=-t2,
                        this is the same as '-1thresh t2'.
  -1blur_sigma bmm = Gaussian blur with sigma = bmm (in mm)
  -1blur_rms bmm   = Gaussian blur with rms deviation = bmm
  -1blur_fwhm bmm  = Gaussian blur with FWHM = bmm
  -t1blur_sigma bmm= Gaussian blur of threshold with sigma = bmm(in mm)
  -t1blur_rms bmm  = Gaussian blur of threshold with rms deviation = bmm
  -t1blur_fwhm bmm = Gaussian blur of threshold with FWHM = bmm
  -1zvol x1 x2 y1 y2 z1 z2
                   = Zero out entries inside the 3D volume defined
                       by x1 <= x <= x2, y1 <= y <= y2, z1 <= z <= z2 ;
               N.B.: The ranges of x,y,z in a dataset can be found
                       using the '3dinfo' program. Dimensions are in mm.
               N.B.: This option may not work correctly at this time, but
                       I've not figured out why!

 CLUSTERING
  -dxyz=1  = In the cluster editing options, the spatial clusters
             are defined by connectivity in true 3D distance, using
             the voxel dimensions recorded in the dataset header.
             This option forces the cluster editing to behave as if
             all 3 voxel dimensions were set to 1 mm.  In this case,
              'rmm' is then the maximum number of grid cells two voxels
              may be apart and still be considered directly connected,
              and 'vmul' is the minimum number of voxels a cluster must
              contain to be kept.
       N.B.: The '=1' is part of the option string, and can't be
             replaced by some other value.  If you MUST have some
             other value for voxel dimensions, use program 3drefit.
 
  The following cluster options are mutually exclusive: 
  -1clust rmm vmul = Form clusters with connection distance rmm
                       and clip off data not in clusters of
                       volume at least vmul microliters
  -1clust_mean rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the average
                            intensity of the cluster. 
  -1clust_max rmm vmul  = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the maximum
                            intensity of the cluster. 
  -1clust_amax rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the maximum
                            absolute intensity of the cluster. 
  -1clust_smax rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the maximum
                            signed intensity of the cluster. 
  -1clust_size rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the size 
                            of the cluster (in multiples of vmul).   
  -1clust_order rmm vmul= Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the cluster
                            size index (largest cluster=1, next=2, ...).
 * If rmm is given as 0, this means to use the 6 nearest neighbors to
     form clusters of nonzero voxels.
 * If vmul is given as zero, then all cluster sizes will be accepted
     (probably not very useful!).
 * If vmul is given as negative, then abs(vmul) is the minimum number
     of voxels to keep.
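
  For instance (a hypothetical sketch, following the pattern of example 2
  above), keep only clusters of at least 20 voxels, using the 6 nearest
  neighbors to define connectivity:

       3dmerge -1thresh 3.0 -1clust 0 -20 -prefix e.clust20 stats+orig'[0]'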
 
  The following commands produce erosion and dilation of 3D clusters.  
  These commands assume that one of the -1clust commands has been used.
  The purpose is to avoid forming strange clusters with 2 (or more)    
  main bodies connected by thin 'necks'.  Erosion can cut off the neck.
  Dilation will minimize erosion of the main bodies.                   
  Note:  Manipulation of values inside a cluster (-1clust commands)    
         occurs AFTER the following two commands have been executed.   
  -1erode pv    For each voxel, set the intensity to zero unless pv %  
                of the voxels within radius rmm are nonzero.           
  -1dilate      Restore voxels that were removed by the previous       
                command if there remains a nonzero voxel within rmm.   
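
  For instance (a hypothetical sketch): erode voxels whose neighborhoods
  are less than 50% nonzero, then restore those that still touch a
  surviving cluster:

       3dmerge -1thresh 3.0 -1clust 2 200 -1erode 50 -1dilate \
               -prefix e.erode stats+orig'[0]'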
 
  The following filter options are mutually exclusive: 
  -1filter_mean rmm   = Set each voxel to the average intensity of the 
                          voxels within a radius of rmm. 
  -1filter_nzmean rmm = Set each voxel to the average intensity of the 
                          non-zero voxels within a radius of rmm. 
  -1filter_max rmm    = Set each voxel to the maximum intensity of the 
                          voxels within a radius of rmm. 
  -1filter_amax rmm   = Set each voxel to the maximum absolute intensity
                          of the voxels within a radius of rmm. 
  -1filter_smax rmm   = Set each voxel to the maximum signed intensity 
                          of the voxels within a radius of rmm. 
  -1filter_aver rmm   = Same idea as '_mean', but implemented using
                           newer code that should be faster.
 
  The following threshold filter options are mutually exclusive: 
  -t1filter_mean rmm   = Set each correlation or threshold voxel to the 
                          average of the voxels within a radius of rmm. 
  -t1filter_nzmean rmm = Set each correlation or threshold voxel to the 
                          average of the non-zero voxels within 
                          a radius of rmm. 
  -t1filter_max rmm    = Set each correlation or threshold voxel to the 
                          maximum of the voxels within a radius of rmm. 
  -t1filter_amax rmm   = Set each correlation or threshold voxel to the 
                          maximum absolute intensity of the voxels 
                          within a radius of rmm. 
  -t1filter_smax rmm   = Set each correlation or threshold voxel to the 
                          maximum signed intensity of the voxels 
                          within a radius of rmm. 
  -t1filter_aver rmm   = Same idea as '_mean', but implemented using
                           newer code that should be faster.
 
  -1mult factor    = Multiply intensities by the given factor
  -1zscore         = If the sub-brick is labeled as a statistic from
                     a known distribution, it will be converted to
                     an equivalent N(0,1) deviate (or 'z score').
                     If the sub-brick is not so labeled, nothing will
                     be done.
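
  For instance (a hypothetical sketch), convert a t-statistic sub-brick to
  equivalent z scores:

       3dmerge -1zscore -prefix e.zstat stats+orig'[5]'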

The above '-1' options are carried out in the order given above,
regardless of the order in which they are entered on the command line.

N.B.: The 3 '-1blur' options just provide different ways of
      specifying the radius used for the blurring function.
      The relationships among these specifications are
         sigma = 0.57735027 * rms = 0.42466090 * fwhm
      The requisite convolutions are done using FFTs; this is by
      far the slowest operation among the editing options.

OTHER OPTIONS:
  -datum type = Coerce the output data to be stored as the given type,
                  which may be byte, short, or float.
          N.B.: Byte data cannot be negative.  If this datum type is chosen,
                  any negative values in the edited and/or merged dataset
                  will be set to zero.
  -keepthr    = When using 3dmerge to edit exactly one dataset of a
                  functional type with a threshold statistic attached,
                  normally the resulting dataset is of the 'fim'
                  (intensity only) type.  This option tells 3dmerge to
                  copy the threshold data (unedited in any way) into
                  the output dataset.
          N.B.: This option is ignored if 3dmerge is being used to
                  combine 2 or more datasets.
          N.B.: The -datum option has no effect on the storage of the
                  threshold data.  Instead use '-thdatum type'.

  -doall      = Apply editing and merging options to ALL sub-bricks 
                  uniformly in a dataset.
          N.B.: All input datasets must have the same number of sub-bricks
                  when using the -doall option. 
          N.B.: The threshold specific options (such as -1thresh, 
                  -keepthr, -tgfisher, etc.) are not compatible with 
                  the -doall command.  Neither are the -1dindex or
                  the -1tindex options.
          N.B.: All labels and statistical parameters for individual 
                  sub-bricks are copied from the first dataset.  It is 
                  the responsibility of the user to verify that these 
                  are appropriate.  Note that sub-brick auxiliary data 
                  can be modified using program 3drefit. 

  -1dindex j  = Uses sub-brick #j as the data source, and uses sub-brick
  -1tindex k  = #k as the threshold source.  With these, you can operate
                  on any given sub-brick of the input dataset(s) to produce
                  as output a 1 brick dataset.  If desired, a collection
                  of 1 brick datasets can later be assembled into a
                  multi-brick bucket dataset using program '3dbucket'
                  or into a 3D+time dataset using program '3dTcat'.
          N.B.: If these options aren't used, j=0 and k=1 are the defaults.

  The following option allows you to specify a mask dataset that
  limits the action of the 'filter' options to voxels that are
  nonzero in the mask:

  -1fmask mset = Read dataset 'mset' (which can include a
                  sub-brick specifier) and use the nonzero
                  voxels as a mask for the filter options.
                  Filtering calculations will not use voxels
                  that are outside the mask.  If an output
                  voxel does not have ANY masked voxels inside
                  the rmm radius, then that output voxel will
                  be set to 0.
         N.B.: * Only the -1filter_* and -t1filter_* options are
                 affected by -1fmask.
               * Voxels NOT in the fmask will be set to zero in the
                 output when the filtering occurs.  THIS IS NEW BEHAVIOR,
                 as of 11 Oct 2007.  Previously, voxels not in the fmask,
                 but within 'rmm' of a voxel in the mask, would get a
                 nonzero output value, as those nearby voxels would be
                 combined (via whatever '-1f...' option was given).
               * If you wish to restore this old behavior, where non-fmask
                 voxels can get nonzero output, then use the new option
                 '-1fm_noclip' in addition to '-1fmask'. The two comments
                 below apply to the case where '-1fm_noclip' is given!
                 * In the linear averaging filters (_mean, _nzmean,
                   and _expr), voxels not in the mask will not be used
                   or counted in either the numerator or denominator.
                   This can give unexpected results if you use '-1fm_noclip'.
                   For example, if the mask is designed to exclude the volume
                   outside the brain, then voxels exterior to the brain,
                   but within 'rmm', will have a few voxels inside the brain
                   included in the filtering.  Since the sum of weights (the
                   denominator) is only over those few intra-brain
                   voxels, the effect will be to extend the significant
                   part of the result outward by rmm from the surface
                   of the brain.  In contrast, without the mask, the
                   many small-valued voxels outside the brain would
                   be included in the numerator and denominator sums,
                   which would barely change the numerator (since the
                   voxel values are small outside the brain), but would
                   increase the denominator greatly (by including many
                   more weights).  The effect in this case (no -1fmask)
                   is to make the filtering taper off gradually in the
                   rmm-thickness shell around the brain.
                 * Thus, if the -1fmask is intended to clip off non-brain
                   data from the filtering, its use should be followed by
                    a masking operation using 3dcalc:
   3dmerge -1filter_aver 12 -1fm_noclip -1fmask mask+orig -prefix x input+orig
   3dcalc  -a x -b mask+orig -prefix y -expr 'a*step(b)'
   rm -f x+orig.*
                 The desired result is y+orig - filtered using only
                 brain voxels (as defined by mask+orig), and with
                 the output confined to the brain voxels as well.

  The following option allows you to specify an almost arbitrary
  weighting function for 3D linear filtering:

  -1filter_expr rmm expr
     Defines a linear filter about each voxel of radius 'rmm' mm.
     The filter weights are proportional to the expression evaluated
     at each voxel offset in the rmm neighborhood.  You can use only
     these symbols in the expression:
         r = radius from center
         x = dataset x-axis offset from center
         y = dataset y-axis offset from center
         z = dataset z-axis offset from center
         i = x-axis index offset from center
         j = y-axis index offset from center
         k = z-axis index offset from center
     Example:
       -1filter_expr 12.0 'exp(-r*r/36.067)'
     This does a Gaussian filter over a radius of 12 mm.  In this
     example, the FWHM of the filter is 10 mm. [in general, the
     denominator in the exponent would be 0.36067 * FWHM * FWHM.
     This is one way to get a Gaussian blur combined with the
     -1fmask option.  The radius rmm=12 is chosen where the weights
     get smallish.]  Another example:
       -1filter_expr 20.0 'exp(-(x*x+16*y*y+z*z)/36.067)'
     which is a non-spherical Gaussian filter.

  ** For shorthand, you can also use the new option (11 Oct 2007)
  -1filter_blur fwhm
        which is equivalent to
   -1filter_expr 1.3*fwhm 'exp(-r*r/(.36067*fwhm*fwhm))'
        and will implement a Gaussian blur.  The only reason to do
        Gaussian blurring this way is if you also want to use -1fmask!
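
   For instance (a hypothetical sketch), an 8 mm FWHM blur restricted to a
   brain mask:

       3dmerge -1filter_blur 8.0 -1fmask mask+orig -prefix e.blur8 epi+orig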

  The following option lets you apply a 'Winsor' filter to the data:

  -1filter_winsor rmm nw
     The data values within the radius rmm of each voxel are sorted.
     Suppose there are 'N' voxels in this group.  We index the
     sorted voxels as s[0] <= s[1] <= ... <= s[N-1], and we call the
     value of the central voxel 'v' (which is also in array s[]).
                 If v < s[nw]    , then v is replaced by s[nw]
        otherwise if v > s[N-1-nw], then v is replaced by s[N-1-nw]
       otherwise v is unchanged
     The effect is to increase 'too small' values up to some
     middling range, and to decrease 'too large' values.
     If N is odd, and nw=(N-1)/2, this would be a median filter.
     In practice, I recommend that nw be about N/4; for example,
       -dxyz=1 -1filter_winsor 2.5 19
     is a filter with N=81 that gives nice results.
   N.B.: This option is NOT affected by -1fmask
   N.B.: This option is slow! and experimental.
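
    A complete command might look like (a hypothetical sketch, using the
    parameters suggested above):

       3dmerge -dxyz=1 -1filter_winsor 2.5 19 -prefix e.wins anat+orig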

  The following option returns a rank value at each voxel in 
  the input dataset.
  -1rank 
     If the input voxels were, say, 12  45  9  0  9  12  0
     the output would be             2   3  1  0  1   2  0
      This option is handy for turning FreeSurfer's segmentation
      volumes into ROI volumes that can be easily colorized with AFNI.
     For example:
     3dmerge -1rank -prefix aparc+aseg_rank aparc+aseg.nii 
     To view aparc+aseg_rank+orig, use the ROI_128 colormap
     and set the colorbar range to 128.
     The -1rank option also outputs a 1D file that contains 
     the mapping from the input dataset to the ranked output.

MERGING OPTIONS APPLIED TO FORM THE OUTPUT DATASET:
 [That is, different ways to combine results. The]
 [following '-g' options are mutually exclusive! ]
  -gmean     = Combine datasets by averaging intensities
                 (including zeros) -- this is the default
  -gnzmean   = Combine datasets by averaging intensities
                 (not counting zeros)
  -gmax      = Combine datasets by taking max intensity
                 (e.g., -7 and 2 combine to 2)
  -gamax     = Combine datasets by taking max absolute intensity
                 (e.g., -7 and 2 combine to 7)
  -gsmax     = Combine datasets by taking max signed intensity
                 (e.g., -7 and 2 combine to -7)
  -gcount    = Combine datasets by counting number of 'hits' in
                   each voxel (see below for definition of 'hit')
  -gorder    = Combine datasets in order of input:
                * If a voxel is nonzero in dataset #1, then
                    that value goes into the voxel.
                * If a voxel is zero in dataset #1 but nonzero
                    in dataset #2, then the value from #2 is used.
                * And so forth: the first dataset with a nonzero
                    entry in a given voxel 'wins'
  -gfisher   = Takes the arctanh of each input, averages these,
                  and outputs the tanh of the average.  If the input
                  datum is 'short', then input values are scaled by
                  0.0001 and output values by 10000.  This option
                  is for merging bricks of correlation coefficients.

  -nscale    = If the output datum is shorts, don't do the scaling
                  to the max range [similar to 3dcalc's -nscale option]
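
  For instance (a hypothetical sketch), take the voxelwise maximum across
  three single-subject datasets:

       3dmerge -gmax -prefix grp.max subj1+tlrc subj2+tlrc subj3+tlrc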

MERGING OPERATIONS APPLIED TO THE THRESHOLD DATA:
 [That is, different ways to combine the thresholds.  If none of these ]
 [are given, the thresholds will not be merged and the output dataset  ]
 [will not have threshold data attached.  Note that the following '-tg']
 [command line options are mutually exclusive, but are independent of  ]
 [the '-g' options given above for merging the intensity data values.  ]
  -tgfisher  = This option is only applicable if each input dataset
                  is of the 'fico' or 'fith' types -- functional
                  intensity plus correlation or plus threshold.
                  (In the latter case, the threshold values are
                  interpreted as correlation coefficients.)
                  The correlation coefficients are averaged as
                  described by -gfisher above, and the output
                  dataset will be of the fico type if all inputs
                   are fico type; otherwise, the output dataset
                  will be of the fith type.
         N.B.: The difference between the -tgfisher and -gfisher
                  methods is that -tgfisher applies to the threshold
                  data stored with a dataset, while -gfisher
                  applies to the intensity data.  Thus, -gfisher
                  would normally be applied to a dataset created
                  from correlation coefficients directly, or from
                  the application of the -1thtoin option to a fico
                  or fith dataset.

OPTIONAL WAYS TO POSTPROCESS THE COMBINED RESULTS:
 [May be combined with the above methods.]
 [Any combination of these options may be used.]
  -ghits count     = Delete voxels that aren't nonzero (a 'hit') in
                        at least 'count' of the input datasets
  -gclust rmm vmul = Form clusters with connection distance rmm
                       and clip off data not in clusters of
                       volume at least vmul microliters
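
  For instance (a hypothetical sketch), keep only voxels that are nonzero
  in at least 2 of the 3 input datasets, then clip off small clusters:

       3dmerge -ghits 2 -gclust 2 200 -prefix grp.overlap \
               mask1+tlrc mask2+tlrc mask3+tlrc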

The '-g' and '-tg' options apply to the entire group of input datasets.

OPTIONS THAT CONTROL THE NAMES OF THE OUTPUT DATASET:
  -session dirname  = write output into given directory (default=./)
  -prefix  pname    = use 'pname' for the output dataset prefix
                       (default=mrg)

NOTES:
 **  If only one dataset is read into this program, then the '-g'
       options do not apply, and the output dataset is simply the
       '-1' options applied to the input dataset (i.e., edited).
 **  A merged output dataset is ALWAYS of the intensity-only variety.
 **  You can combine the outputs of 3dmerge with other sub-bricks
       using the program 3dbucket.
 **  Complex-valued datasets cannot be merged.
 **  This program cannot handle time-dependent datasets without -doall.
 **  Note that the input datasets are specified by their .HEAD files,
       but that their .BRIK files must exist also!

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

 ** Input datasets using sub-brick selectors are treated as follows:
      - 3D+time if the dataset is 3D+time and more than 1 brick is chosen
      - otherwise, as bucket datasets (-abuc or -fbuc)
       (in particular, fico, fitt, etc. datasets are converted to fbuc)
 ** If you are NOT using -doall, and choose more than one sub-brick
     with the selector, then you may need to use -1dindex to further
     pick out the sub-brick on which to operate (why you would do this
     I cannot fathom).  If you are also using a thresholding operation
     (e.g., -1thresh), then you also MUST use -1tindex to choose which
     sub-brick counts as the 'threshold' value.  When used with sub-brick
      selection, 'index' refers to the dataset AFTER it has been read in:
          -1dindex 1 -1tindex 3 'dset+orig[4..7]'
     means to use the #5 sub-brick of dset+orig as the data for merging
     and the #7 sub-brick of dset+orig as the threshold values.
 ** The above example would better be done with
          -1tindex 1 'dset+orig[5,7]'
     since the default data index is 0. (You would only use -1tindex if
     you are actually using a thresholding operation.)
 ** -1dindex and -1tindex apply to all input datasets.

++ Compile date = Mar 13 2009




AFNI program: 3dnewid
Assigns a new ID code to a dataset; this is useful when making
a copy of a dataset, so that the internal ID codes remain unique.

Usage: 3dnewid dataset [dataset ...]
 or
       3dnewid -fun [n]
       to see what n randomly generated ID codes look like.
       (If the integer n is not present, 1 ID code is printed.)
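
For example (with a hypothetical dataset name), after copying a dataset's
.HEAD/.BRIK files you might run:

       3dnewid copy_of_anat+orig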

How ID codes are created (here and in other AFNI programs):
----------------------------------------------------------
The AFNI ID code generator attempts to create a globally unique
string identifier, using the following steps.
1) A long string is created from the system identifier
   information ('uname -a'), the current epoch time in seconds
   and microseconds, the process ID, and the number of times
   the current process has called the ID code function.
2) This string is then hashed into a 128 bit code using the
   MD5 algorithm. (cf. file thd_md5.c)
3) This 128 bit code is then converted to a 22 character string
   using Base64 encoding, replacing '/' with '-' and '+' with '_'.
   With these changes, the ID code can be used as a Unix filename
   or an XML name string. (cf. file thd_base64.c)
4) A 4 character prefix is attached at the beginning to produce
   the final ID code.  If you set the environment variable
   IDCODE_PREFIX to something, then its first 3 characters and an
   underscore will be used for the prefix of the new ID code,
   provided that the first character is alphabetic and the other
   2 alphanumeric; otherwise, the default prefix 'NIH_' will be
   used.
The source code is function UNIQ_idcode() in file niml.c.

++ Compile date = Mar 13 2009




AFNI program: 3dnoise
Usage: 3dnoise [-blast] [-snr fac] [-nl x ] datasets ...
Estimates the noise level in 3D datasets, and optionally
sets voxels below the noise threshold to zero.
This only works on datasets that are stored as shorts,
and whose elements are all nonnegative.
  -blast   = Set values at or below the cutoff to zero.
               In 3D+time datasets, a spatial location
               is set to zero only if a majority of time
               points fall below the cutoff; in that case
               all the values at that location are zeroed.
  -snr fac = Set cutoff to 'fac' times the estimated
               noise level.  Default fac = 2.5.  What to
               use for this depends strongly on your MRI
               system -- I often use 5, but our true SNR
               is about 100 for EPI.
  -nl x    = Set the noise level to 'x', skipping the
               estimation procedure.  Also sets fac=1.0.
               You can use program 3dClipLevel to get an
               estimate of a value for 'x'.
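
For example (a hypothetical sketch), estimate the noise level and zero out
voxels below 5 times that estimate:

   3dnoise -blast -snr 5 run1+orig
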
Author -- RW Cox

++ Compile date = Mar 13 2009




AFNI program: 3dnvals
Usage: 3dnvals [-all] [-verbose] dataset
Prints out the number of sub-bricks in a 3D dataset.
If -all is specified, prints out all 4 dimensions:
Nx, Ny, Nz, Nvals
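
For example (with a hypothetical dataset name):

   3dnvals -all epi_run1+orig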

++ Compile date = Mar 13 2009




AFNI program: 3dpc
Principal Component Analysis of 3D Datasets
Usage: 3dpc [options] dataset dataset ...

Each input dataset may have a sub-brick selector list.
Otherwise, all sub-bricks from a dataset will be used.

OPTIONS:
  -dmean        = remove the mean from each input brick (across space)
  -vmean        = remove the mean from each input voxel (across bricks)
                    [N.B.: -dmean and -vmean are mutually exclusive]
                    [default: don't remove either mean]
  -vnorm        = L2 normalize each input voxel time series
                    [occurs after the de-mean operations above,]
                    [and before the brick normalization below. ]
  -normalize    = L2 normalize each input brick (after mean subtraction)
                    [default: don't normalize]
  -pcsave sss   = 'sss' is the number of components to save in the output;
                    it can't be more than the number of input bricks
                    [default = none of them]
                  * To get all components, set 'sss' to a very large
                    number (more than the time series length), like 99999
  -reduce r pp  = Compute a 'dimensionally reduced' dataset with the top
                    'r' eigenvalues and write to disk in dataset 'pp'
                    [default = don't compute this at all]
                  * If '-vmean' is given, then each voxel's mean will
                    be added back into the reduced time series.  If you
                    don't want this behaviour, you could remove the mean
                    with 3dDetrend before running 3dpc.
                  * On the other hand, the effects of '-vnorm' and '-dmean'
                    and '-normalize' are not reversed in this output
                    (at least at present -- send some cookies and we'll talk).
  -prefix pname = Name for output dataset (will be a bucket type);
                  * Also, the eigen-timeseries will be in 'pname'.1D
                    (all of them) and in 'pnameNN.1D' for eigenvalue
                    #NN individually (NN=00 .. 'sss'-1, corresponding
                    to the brick index in the output dataset)
                  * The eigenvalues will be printed to file 'pname'_eig.1D
                    All eigenvalues are printed, regardless of '-pcsave'.
                    [default value of pname = 'pc']
  -1ddum ddd    = Add 'ddd' dummy lines to the top of each *.1D file.
                    These lines will have the value 999999, and can
                    be used to align the files appropriately.
                    [default value of ddd = 0]
  -verbose      = Print progress reports during the computations
  -quiet        = Don't print progress reports [the default]
  -eigonly      = Only compute eigenvalues, then
                    write them to 'pname'_eig.1D, and stop.
  -float        = Save eigen-bricks as floats
                    [default = shorts, scaled so that |max|=10000]
  -mask mset    = Use the 0 sub-brick of dataset 'mset' as a mask
                    to indicate which voxels to analyze (a sub-brick
                    selector is allowed) [default = use all voxels]
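
For example (a hypothetical sketch), save the top 5 components of a masked,
de-meaned dataset:

   3dpc -dmean -pcsave 5 -mask mask+orig -prefix pc.run1 epi_run1+orig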

++ Compile date = Mar 13 2009




AFNI program: 3dproject
Projection along cardinal axes from a 3D dataset
Usage: 3dproject [editing options]
        [-sum|-max|-amax|-smax] [-output root] [-nsize] [-mirror]
        [-RL {all | x1 x2}] [-AP {all | y1 y2}] [-IS {all | z1 z2}]
        [-ALL] dataset

Program to produce orthogonal projections from a 3D dataset.
  -sum     ==> Add the dataset voxels along the projection direction
  -max     ==> Take the maximum of the voxels [the default is -sum]
  -amax    ==> Take the absolute maximum of the voxels
  -smax    ==> Take the signed maximum of the voxels; for example,
                -max  ==> -7 and 2 go to  2 as the projected value
                -amax ==> -7 and 2 go to  7 as the projected value
                -smax ==> -7 and 2 go to -7 as the projected value
  -first x ==> Take the first value greater than x
  -nsize   ==> Scale the output images up to 'normal' sizes
               (e.g., 64x64, 128x128, or 256x256)
               This option only applies to byte or short datasets.
  -mirror  ==> The radiologists' and AFNI convention is to display
               axial and coronal images with the subject's left on
               the right of the image; the use of this option will
               mirror the axial and coronal projections so that
               left is left and right is right.

  -output root ==> Output projections will be named
                   root.sag, root.cor, and root.axi
                   [the default root is 'proj']

  -RL all      ==> Project in the Right-to-Left direction along
                   all the data (produces root.sag)
  -RL x1 x2    ==> Project in the Right-to-Left direction from
                   x-coordinate x1 to x2 (mm)
                   [negative x is Right, positive x is Left]
                   [OR, you may use something like -RL 10R 20L
                        to project from x=-10 mm to x=+20 mm  ]

  -AP all      ==> Project in the Anterior-to-Posterior direction along
                   all the data (produces root.cor)
  -AP y1 y2    ==> Project in the Anterior-to-Posterior direction from
                   y-coordinate y1 to y2 (mm)
                   [negative y is Anterior, positive y is Posterior]
                   [OR, you may use something like -AP 10A 20P
                        to project from y=-10 mm to y=+20 mm  ]

  -IS all      ==> Project in the Inferior-to-Superior direction along
                   all the data (produces root.axi)
  -IS z1 z2    ==> Project in the Inferior-to-Superior direction from
                   z-coordinate z1 to z2 (mm)
                   [negative z is Inferior, positive z is Superior]
                   [OR, you may use something like -IS 10I 20S
                        to project from z=-10 mm to z=+20 mm  ]

  -ALL         ==> Equivalent to '-RL all -AP all -IS all'
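
For example (a hypothetical sketch), a maximum-intensity sagittal
projection over the slab from x=10R to x=20L:

   3dproject -max -RL 10R 20L -output projmax anat+orig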

* NOTE that a projection direction will not be used if the bounds aren't
   given for that direction; thus, at least one of -RL, -AP, or -IS must
   be used, or nothing will be computed!
* NOTE that in the directions transverse to the projection direction,
   all the data is used; that is, '-RL -5 5' will produce a full sagittal
   image summed over a 10 mm slice, irrespective of the -IS or -AP extents.
* NOTE that the [editing options] are the same as in 3dmerge.
   In particular, the '-1thtoin' option can be used to project the
   threshold data (if available).

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3drefit
Changes some of the information inside a 3D dataset's header.
Note that this program does NOT change the .BRIK file at all;
the main purpose of 3drefit is to fix up errors made when
using to3d.
To see the current values stored in a .HEAD file, use the command
'3dinfo dataset'.  Using 3dinfo both before and after 3drefit is
a good idea to make sure the changes have been made correctly!

20 Jun 2006: 3drefit will now work on NIfTI datasets (but it will
             write out the entire dataset)

Usage: 3drefit [options] dataset ...
where the options are
  -orient code    Sets the orientation of the 3D volume(s) in the .BRIK.
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                     R = right-to-left         L = left-to-right
                     A = anterior-to-posterior P = posterior-to-anterior
                     I = inferior-to-superior  S = superior-to-inferior
               ** WARNING: when changing the orientation, you must be sure
                  to check the origins as well, to make sure that the volume
                  is positioned correctly in space.

  -xorigin distx  Puts the center of the edge voxel at the given
  -yorigin disty  distance, for the given axis (x,y,z); distances in mm.
  -zorigin distz  (x=first axis, y=second axis, z=third axis).
                  Usually, only -zorigin makes sense.  Note that this
                  distance is in the direction given by the corresponding
                  letter in the -orient code.  For example, '-orient RAI'
                  would mean that '-zorigin 30' sets the center of the
                  first slice at 30 mm Inferior.  See the to3d manual
                  for more explanations of axes origins.
               ** SPECIAL CASE: you can use the string 'cen' in place of
                  a distance to force that axis to be re-centered.
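
      Example (a hypothetical sketch: re-center all three axes):
      3drefit -xorigin cen -yorigin cen -zorigin cen dset+orig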

  -xorigin_raw xx Puts the center of the edge voxel at the given COORDINATE
  -yorigin_raw yy rather than the given DISTANCE.  That is, these values
  -zorigin_raw zz directly replace the offsets in the dataset header,
                  without any possible sign changes.

  -duporigin cset Copies the xorigin, yorigin, and zorigin values from
                  the header of dataset 'cset'.

  -dxorigin dx    Adds distance 'dx' (or 'dy', or 'dz') to the center
  -dyorigin dy    coordinate of the edge voxel.  Can be used with the
  -dzorigin dz    values input to the 'Nudge xyz' plugin.
               ** WARNING: you can't use these options at the same
                  time you use -orient.
               ** WARNING: consider -shift_tags if dataset has tags

  -xdel dimx      Makes the size of the voxel the given dimension,
  -ydel dimy      for the given axis (x,y,z); dimensions in mm.
  -zdel dimz   ** WARNING: if you change a voxel dimension, you will
                  probably have to change the origin as well.
  -keepcen        When changing a voxel dimension with -xdel (etc.),
                  also change the corresponding origin to keep the
                  center of the dataset at the same coordinate location.
  -xyzscale fac   Scale the size of the dataset voxels by the factor 'fac'.
                  This is equivalent to using -xdel, -ydel, -zdel together.
                  -keepcen is used on the first input dataset, and then
                  any others will be shifted the same amount, to maintain
                  their alignment with the first one.
               ** WARNING: -xyzscale can't be used with any of the other
                  options that change the dataset grid coordinates!
               ** N.B.: 'fac' must be positive, and using fac=1.0 is stupid.

  -TR time        Changes the TR time to a new value (see 'to3d -help').
  -notoff         Removes the slice-dependent time-offsets.
  -Torg ttt       Set the time origin of the dataset to value 'ttt'.
                  (Time origins are set to 0 in to3d.)
               ** WARNING: These 3 options apply only to 3D+time datasets.
                   **N.B.: Using '-TR' on a dataset without a time axis
                           will add a time axis to the dataset.
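
      Example (a hypothetical sketch: set the TR of an EPI time series to
      2.0 seconds):
      3drefit -TR 2.0 epi_run1+orig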

  -newid          Changes the ID code of this dataset as well.

  -nowarp         Removes all warping information from dataset.

  -apar aset      Set the dataset's anatomy parent dataset to 'aset'
               ** N.B.: The anatomy parent is the dataset from which the
                  transformation from +orig to +acpc and +tlrc coordinates
                  is taken.  It is appropriate to use -apar when there is
                  more than 1 anatomical dataset in a directory that has
                  been transformed.  In this way, you can be sure that
                  AFNI will choose the correct transformation.  You would
                   use this option on all the +orig datasets that are
                  aligned with 'aset' (i.e., that were acquired in the
                  same scanning session).
               ** N.B.: Special cases of 'aset'
                   aset = NULL --> remove the anat parent info from the dataset
                   aset = SELF --> set the anat parent to be the dataset itself

  -wpar wset      Set the warp parent (the +orig version of a +tlrc dset).
                  This option is used by @auto_tlrc. Do not use it unless
                  you know what you're doing. 

  -clear_bstat    Clears the statistics (min and max) stored for each sub-brick
                  in the dataset.  This is useful if you have done something to
                  modify the contents of the .BRIK file associated with this
                  dataset.
  -redo_bstat     Re-computes the statistics for each sub-brick.  Requires
                  reading the .BRIK file, of course.  Also does -clear_bstat
                  before recomputing statistics, so that if the .BRIK read
                  fails for some reason, then you'll be left without stats.

  -statpar v ...  Changes the statistical parameters stored in this
                  dataset.  See 'to3d -help' for more details.

  -markers        Adds an empty set of AC-PC markers to the dataset,
                  if it can handle them (is anatomical, is in the +orig
                  view, and isn't 3D+time).
               ** WARNING: this will erase any markers that already exist!

  -shift_tags     Apply -dxorigin (and y and z) changes to tags.

  -dxtag dx       Add dx to the coordinates of all tags.
  -dytag dy       Add dy to the coordinates of all tags.
  -dztag dz       Add dz to the coordinates of all tags.

  -view code      Changes the 'view' to be 'code', where the string 'code'
                  is one of 'orig', 'acpc', or 'tlrc'.
               ** WARNING: The program will also change the .HEAD and .BRIK
                  filenames to match.  If the dataset filenames already
                  exist in the '+code' view, then this option will fail.
                  You will have to rename the dataset files before trying
                  to use '-view'.  If you COPY the files and then use
                  '-view', don't forget to use '-newid' as well!
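
      Example (a hypothetical sketch: duplicate the dataset with 3dcopy,
      then change the view and assign a new ID code):
      3dcopy anat+orig anat_copy
      3drefit -view tlrc -newid anat_copy+orig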

  -label2 llll    Set the 'label2' field in a dataset .HEAD file to the
                  string 'llll'.  (Can be used as in AFNI window titlebars.)

  -denote         Means to remove all possibly-identifying notes from
                  the header.  This includes the History Note, other text
                  Notes, keywords, and labels.

  -deoblique      Replace transformation matrix in header with cardinal matrix.
                  This option DOES NOT deoblique the volume. To do so
                  you should use 3dWarp -deoblique. This option is not 
                  to be used unless you really know what you're doing.

  -oblique_origin
                   Assume origin and orientation from the oblique
                   transformation matrix rather than from the traditional
                   cardinal information.

  -byteorder bbb  Sets the byte order string in the header.
                  Allowable values for 'bbb' are:
                     LSB_FIRST   MSB_FIRST   NATIVE_ORDER
                   Note that this does not change the .BRIK file!
                   Swapping the .BRIK data itself is done by the
                   programs 2swap and 4swap.

  -appkey ll      Appends the string 'll' to the keyword list for the
                  whole dataset.
  -repkey ll      Replaces the keyword list for the dataset with the
                  string 'll'.
  -empkey         Destroys the keyword list for the dataset.

  -atrcopy dd nn  Copy AFNI header attribute named 'nn' from dataset 'dd'
                  into the header of the dataset(s) being modified.
                  For more information on AFNI header attributes, see
                  documentation file README.attributes. More than one
                  '-atrcopy' option can be used.
          **N.B.: This option is for those who know what they are doing!
                  Without the -saveatr option, this option is
                  meant to be used to alter attributes that are NOT
                  directly mapped into dataset internal structures, since
                  those structures are mapped back into attribute values
                  as the dataset is being written to disk.  If you want
                  to change such an attribute, you have to use the
                  corresponding 3drefit option directly or use the 
                  -saveatr option.

                  If you are confused, try to understand this: 
                  Option -atrcopy was never intended to modify AFNI-
                  specific attributes. Rather, it was meant to copy
                  user-specific attributes that had been added to some
                   dataset using the -atrstring option. A cursed day came when
                  it was convenient to use -atrcopy to copy an AFNI-specific
                  attribute (BRICK_LABS to be exact) and for that to
                  take effect in the output, the option -saveatr was added.
                  Contact Daniel Glen and/or Rick Reynolds for further 
                  clarification and any other needs you may have.

                  Do NOT use -atrcopy or -atrstring with other modification
                  options.

  -atrstring n 'x' Copy the string 'x' into the dataset(s) being
                   modified, giving it the attribute name 'n'.
                   To be safe, the 'x' string should be in quotes.
          **N.B.: You can store attributes with almost any name in
                  the .HEAD file.  AFNI will ignore those it doesn't
                  know anything about.  This technique can be a way of
                  communicating information between programs.  However,
                  when most AFNI programs write a new dataset, they will
                  not preserve any such non-standard attributes.
  -atrfloat name 'values'
  -atrint name 'values'
                  Create or modify floating point or integer attributes.
                  The input values may be specified as a single string
                  in quotes or as a 1D filename or string. For example,
     3drefit -atrfloat IJK_TO_DICOM_REAL '1 0 0 0 0 1 0 0 0 0 0 1' dset+orig
     3drefit -atrfloat IJK_TO_DICOM_REAL flipZ.1D dset+orig
     3drefit -atrfloat IJK_TO_DICOM_REAL '1D:1,3@0,0,1,2@0,2@0,1,0' dset+orig
                   Almost all AFNI attributes can be modified in this way.
  -saveatr        (default) Copy the attributes that are known to AFNI into 
                  the dset->dblk structure thereby forcing changes to known
                  attributes to be present in the output.
                  This option only makes sense with -atrcopy
          **N.B.: Don't do something like copy labels of a dataset with 
                  30 sub-bricks to one that has only 10, or vice versa.
                  This option is for those who would deservedly earn a
                  hunting license.
  -nosaveatr      Opposite of -saveatr
     Example: 
     3drefit -saveatr -atrcopy WithLabels+tlrc BRICK_LABS NeedsLabels+tlrc

  -'type'         Changes the type of data that is declared for this
                  dataset, where 'type' is chosen from the following:
       ANATOMICAL TYPES
         spgr == Spoiled GRASS             fse == Fast Spin Echo  
         epan == Echo Planar              anat == MRI Anatomy     
           ct == CT Scan                  spct == SPECT Anatomy   
          pet == PET Anatomy               mra == MR Angiography  
         bmap == B-field Map              diff == Diffusion Map   
         omri == Other MRI                abuc == Anat Bucket     
       FUNCTIONAL TYPES
          fim == Intensity                fith == Inten+Thr       
         fico == Inten+Cor                fitt == Inten+Ttest     
         fift == Inten+Ftest              fizt == Inten+Ztest     
         fict == Inten+ChiSq              fibt == Inten+Beta      
         fibn == Inten+Binom              figt == Inten+Gamma     
         fipt == Inten+Poisson            fbuc == Func-Bucket     
  -copyaux auxset Copies the 'auxiliary' data from dataset 'auxset'
                  over the auxiliary data for the dataset being
                  modified.  Auxiliary data comprises sub-brick labels,
                  keywords, and statistics codes.
                  '-copyaux' occurs BEFORE the '-sub' operations below,
                  so you can use those to alter the auxiliary data
                  that is copied from auxset.

The options below allow you to attach auxiliary data to sub-bricks
in the dataset.  Each option may be used more than once so that
multiple sub-bricks can be modified in a single run of 3drefit.

  -sublabel  n ll  Attach to sub-brick #n the label string 'll'.
  -subappkey n ll  Add to sub-brick #n the keyword string 'll'.
  -subrepkey n ll  Replace sub-brick #n's keyword string with 'll'.
  -subempkey n     Empty out sub-brick #n's keyword string

  -substatpar n type v ...
                  Attach to sub-brick #n the statistical type and
                  the auxiliary parameters given by values 'v ...',
                  where 'type' is one of the following:
         type  Description  PARAMETERS
         ----  -----------  ----------------------------------------
         fico  Cor          SAMPLES  FIT-PARAMETERS  ORT-PARAMETERS
         fitt  Ttest        DEGREES-of-FREEDOM
         fift  Ftest        NUMERATOR and DENOMINATOR DEGREES-of-FREEDOM
         fizt  Ztest        N/A
         fict  ChiSq        DEGREES-of-FREEDOM
         fibt  Beta         A (numerator) and B (denominator)
         fibn  Binom        NUMBER-of-TRIALS and PROBABILITY-per-TRIAL
         figt  Gamma        SHAPE and SCALE
         fipt  Poisson      MEAN
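
      Example (a hypothetical sketch: mark sub-brick #1 as a t-statistic
      with 120 degrees of freedom):
      3drefit -substatpar 1 fitt 120 stats+orig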

You can also use option '-unSTAT' to remove all statistical encodings
from sub-bricks in the dataset.  This operation would be desirable if
you modified the values in the dataset (e.g., via 3dcalc).
 ['-unSTAT' is done BEFORE the '-substatpar' operations, so you can  ]
 [combine these options to completely redo the sub-bricks, if needed.]
 [Option '-unSTAT' also implies that '-unFDR' will be carried out.   ]

The following options allow you to modify VOLREG fields:
  -vr_mat val1 ... val12  Use these twelve values for VOLREG_MATVEC_index.
  -vr_mat_ind index       Index of VOLREG_MATVEC_index field to be modified.
                          Optional, default index is 0.
NB: You can only modify one VOLREG_MATVEC_index at a time
  -vr_center_old x y z    Use these 3 values for VOLREG_CENTER_OLD.
  -vr_center_base x y z   Use these 3 values for VOLREG_CENTER_BASE.


The following options let you modify the FDR curves stored in the header:
 -addFDR = For each sub-brick marked with a statistical code, (re)compute
           the FDR curve of z(q) vs. statistic, and store in the dataset header
           * Since 3drefit doesn't have a '-mask' option, you will have to mask
             statistical sub-bricks yourself via 3dcalc (if desired):
              3dcalc -a stat+orig -b mask+orig -expr 'a*step(b)' -prefix statmm
           * '-addFDR' runs as if '-new -pmask' were given to 3dFDR, so that
              stat values == 0 will be ignored in the FDR algorithm.

 -unFDR  = Remove all FDR curves from the header
           [you will want to do this if you have done something to ]
           [modify the values in the dataset statistical sub-bricks]

++ Last program update: 23 Jan 2008

++ Compile date = Mar 13 2009




AFNI program: 3drename
Usage 1: 3drename old_prefix new_prefix
  Will rename all datasets using the old_prefix to use the new_prefix;
    3drename fred ethel
  will change fred+orig.HEAD    to ethel+orig.HEAD
              fred+orig.BRIK    to ethel+orig.BRIK
              fred+tlrc.HEAD    to ethel+tlrc.HEAD
              fred+tlrc.BRIK.gz to ethel+tlrc.BRIK.gz

Usage 2: 3drename old_prefix+view new_prefix
  Will rename only the dataset with the given view (orig, acpc, tlrc).

++ Compile date = Mar 13 2009




AFNI program: 3dresample

3dresample - reorient and/or resample a dataset

    This program can be used to change the orientation of a
    dataset (via the -orient option), or the dx,dy,dz
    grid spacing (via the -dxyz option), or change them
    both to match that of a master dataset (via the -master
    option).

    Note: if both -master and -dxyz are used, the dxyz values
          will override those from the master dataset.

 ** It is important to note that once a dataset of a certain
    grid is created (i.e. orientation, dxyz, field of view),
    then if other datasets are going to be resampled to match
    it, -master should be used instead of -dxyz.  That will
    guarantee that all grids match.

    Otherwise, even using both -orient and -dxyz, one may not
    be sure that the fields of view will be identical, for example.

 ** Warning: this program is not meant to transform datasets
             between view types (such as '+orig' and '+tlrc').

             For that purpose, please see '3dfractionize -help'.

------------------------------------------------------------

  usage: 3dresample [options] -prefix OUT_DSET -inset IN_DSET

  examples:

    3dresample -orient asl -rmode NN -prefix asl.dset -inset in+orig
    3dresample -dxyz 1.0 1.0 0.9 -prefix 119.dset -inset in+tlrc
    3dresample -master master+orig -prefix new.dset -inset old+orig

  note:

    Information about a dataset's voxel size and orientation
    can be found in the output of program 3dinfo.

------------------------------------------------------------

  options: 

    -help            : show this help information

    -hist            : output the history of program changes

    -debug LEVEL     : print debug info along the way
          e.g.  -debug 1
          default level is 0, max is 2

    -version         : show version information

    -dxyz DX DY DZ   : resample to new dx, dy and dz
          e.g.  -dxyz 1.0 1.0 0.9
          default is to leave unchanged

          Each of DX,DY,DZ must be a positive real number,
          and will be used for a voxel delta in the new
          dataset (according to any new orientation).

    -orient OR_CODE  : reorient to new axis order.
          e.g.  -orient asl
          default is to leave unchanged

          The orientation code is a 3 character string,
          where the characters come from the respective
          sets {A,P}, {I,S}, {L,R}.

          For example OR_CODE = LPI is the standard
          'neuroscience' orientation, where the x-axis is
          Left-to-Right, the y-axis is Posterior-to-Anterior,
          and the z-axis is Inferior-to-Superior.

    -rmode RESAM     : use this resampling method
          e.g.  -rmode Linear
          default is NN (nearest neighbor)

          The resampling method string RESAM should come
          from the set {'NN', 'Li', 'Cu', 'Bk'}.  These
          are for 'Nearest Neighbor', 'Linear', 'Cubic'
          and 'Blocky' interpolation, respectively.
          See 'Anat resam mode' under the 'Define Markers'
          window in afni.

    -master MAST_DSET: align dataset grid to that of MAST_DSET
          e.g.  -master master.dset+orig

          Get dxyz and orient from a master dataset.  The
          resulting grid will match that of the master.  This
          option can be used with -dxyz, but not with -orient.

    -prefix OUT_DSET : required prefix for output dataset
          e.g.  -prefix reori.asl.pickle

    -inset IN_DSET   : required input dataset to reorient
          e.g.  -inset old.dset+orig

------------------------------------------------------------

  Author: R. Reynolds - Version 1.8 




AFNI program: 3dretroicor
Usage: 3dretroicor [options] dataset

Performs Retrospective Image Correction for physiological
motion effects, using a slightly modified version of the
RETROICOR algorithm described in:

  Glover, G. H., Li, T., & Ress, D. (2000). Image-based method
for retrospective correction of physiological motion effects in
fMRI: RETROICOR. Magnetic Resonance in Medicine, 44, 162-167.

Options (defaults in []'s):

 -ignore    = The number of initial timepoints to ignore in the
              input (These points will be passed through
              uncorrected) [0]
 -prefix    = Prefix for new, corrected dataset [retroicor]

 -card      = 1D cardiac data file for cardiac correction
 -cardphase = Filename for 1D cardiac phase output
 -threshold = Threshold for detection of R-wave peaks in input
              (Make sure it's above the background noise level;
               try the minimum plus 3/4 or 4/5 of the range) [1]

 -resp      = 1D respiratory waveform data for correction
 -respphase = Filename for 1D resp phase output

 -order     = The order of the correction (2 is typical;
              higher-order terms yield little improvement
              according to Glover et al.) [2]

 -help      = Display this message and stop (must be first arg)

Dataset: 3D+time dataset to process

** The input dataset and at least one of -card and -resp are
    required.
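
For example (a hypothetical sketch, with both cardiac and respiratory
recordings available):

   3dretroicor -card card.1D -resp resp.1D -order 2 \
               -prefix epi.ricor epi_run1+orig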

NOTES
-----

The durations of the physiological inputs are assumed to equal
the duration of the dataset. Any constant sampling rate may be
used, but 40 Hz seems to be acceptable. This program's cardiac
peak detection algorithm is rather simplistic, so you might try
using the scanner's cardiac gating output (transform it to a
spike wave if necessary).

This program uses slice timing information embedded in the
dataset to estimate the proper cardiac/respiratory phase for
each slice. It makes sense to run this program before any
program that may destroy the slice timings (e.g. 3dvolreg for
motion correction).

Author -- Fred Tam, August 2002

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.



AFNI program: 3drotate
Usage: 3drotate [options] dataset
Rotates and/or translates all bricks from an AFNI dataset.
'dataset' may contain a sub-brick selector list.

GENERIC OPTIONS:
  -prefix fname    = Sets the output dataset prefix name to be 'fname'
  -verbose         = Prints out progress reports (to stderr)

OPTIONS TO SPECIFY THE ROTATION/TRANSLATION:
-------------------------------------------
*** METHOD 1 = direct specification:
At most one of these shift options can be used:
  -ashift dx dy dz = Shifts the dataset 'dx' mm in the x-direction, etc.,
                       AFTER rotation.
  -bshift dx dy dz = Shifts the dataset 'dx' mm in the x-direction, etc.,
                       BEFORE rotation.
    The shift distances by default are along the (x,y,z) axes of the dataset
    storage directions (see the output of '3dinfo dataset').  To specify them
    anatomically, you can suffix a distance with one of the symbols
    'R', 'L', 'A', 'P', 'I', and 'S', meaning 'Right', 'Left', 'Anterior',
    'Posterior', 'Inferior', and 'Superior', respectively.

  -rotate th1 th2 th3
    Specifies the 3D rotation to be composed of 3 planar rotations:
       1) 'th1' degrees about the 1st axis,           followed by
       2) 'th2' degrees about the (rotated) 2nd axis, followed by
       3) 'th3' degrees about the (doubly rotated) 3rd axis.
    Which axes are used for these rotations is specified by placing
    one of the symbols 'R', 'L', 'A', 'P', 'I', and 'S' at the end
    of each angle (e.g., '10.7A').  These symbols denote rotation
    about the 'Right-to-Left', 'Left-to-Right', 'Anterior-to-Posterior',
    'Posterior-to-Anterior', 'Inferior-to-Superior', and
    'Superior-to-Inferior' axes, respectively.  A positive rotation is
    defined by the right-hand rule.

*** METHOD 2 = copy from output of 3dvolreg:
  -rotparent rset
    Specifies that the rotation and translation should be taken from the
    first 3dvolreg transformation found in the header of dataset 'rset'.
  -gridparent gset
    Specifies that the output dataset of 3drotate should be shifted to
    match the grid of dataset 'gset'.  Can only be used with -rotparent.
    This dataset should be one that is properly aligned with 'rset' when
    overlaid in AFNI.
  * If -rotparent is used, then don't use -matvec, -rotate, or -[ab]shift.
  * If 'gset' has a different number of slices than the input dataset,
    then the output dataset will be zero-padded in the slice direction
    to match 'gset'.
  * These options are intended to be used to align datasets between sessions:
     S1 = SPGR from session 1    E1 = EPI from session 1
     S2 = SPGR from session 2    E2 = EPI from session 2
 3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
 3drotate -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg E2+orig
     The result will have E2reg rotated from E2 in the same way that S2reg
     was from S2, and also shifted/padded (as needed) to overlap with E1.

*** METHOD 3 = give the transformation matrix/vector directly:
  -matvec_dicom mfile
  -matvec_order mfile
    Specifies that the rotation and translation should be read from file
    'mfile', which should be in the format
           u11 u12 u13 v1
           u21 u22 u23 v2
           u31 u32 u33 v3
    where each 'uij' and 'vi' is a number.  The 3x3 matrix [uij] is the
    orthogonal matrix of the rotation, and the 3-vector [vi] is the -ashift
    vector of the translation.

*** METHOD 4 = copy the transformation from 3dTagalign:
  -matvec_dset mset
    Specifies that the rotation and translation should be read from
    the .HEAD file of dataset 'mset', which was created by program
    3dTagalign.
  * If -matvec_dicom is used, the matrix and vector are given in Dicom
     coordinate order (+x=L, +y=P, +z=S).  This is the option to use
     if mfile is generated using 3dTagalign -matvec mfile.
  * If -matvec_order is used, the matrix and vector are given in the
     coordinate order of the dataset axes, whatever they may be.
  * You can't mix -matvec_* options with -rotate and -*shift.

*** METHOD 5 = input rotation+shift parameters from an ASCII file:
  -dfile dname  *OR*  -1Dfile dname
    With these methods, the movement parameters for each sub-brick
    of the input dataset are read from the file 'dname'.  This file
    should consist of columns of numbers in ASCII format.  Six (6)
    numbers are read from each line of the input file.  If the
    '-dfile' option is used, each line of the input should be at
    least 7 numbers, and be of the form
      ignored roll pitch yaw dS dL dP
    If the '-1Dfile' option is used, then each line of the input
    should be at least 6 numbers, and be of the form
      roll pitch yaw dS dL dP
          (These are the forms output by the '-dfile' and
           '-1Dfile' options of program 3dvolreg; see that
           program's -help output for the hideous details.)
    The n-th sub-brick of the input dataset will be transformed
    using the parameters from the n-th line of the dname file.
    If the dname file doesn't contain as many lines as the
    input dataset has sub-bricks, then the last dname line will
    be used for all subsequent sub-bricks.  Excess columns or
    rows will be ignored.
  N.B.: Rotation is always about the center of the volume.
          If the parameters are derived from a 3dvolreg run
          on a dataset with a different center in xyz-space,
          the results may not be what you want!
  N.B.: You can't use -dfile/-1Dfile with -points (infra).
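
  For example, one way to re-apply motion parameters saved by
  3dvolreg (file and prefix names hypothetical):
    3dvolreg -1Dfile mot.1D -prefix epi_reg epi+orig
    3drotate -1Dfile mot.1D -prefix epi_rot epi+orig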

POINTS OPTIONS (instead of datasets):
------------------------------------
 -points
 -origin xo yo zo
   These options specify that instead of rotating a dataset, you will
   be rotating a set of (x,y,z) points.  The points are read from stdin.
   * If -origin is given, the point (xo,yo,zo) is used as the center for
     the rotation.
   * If -origin is NOT given, and a dataset is given at the end of the
     command line, then the center of the dataset brick is used as
     (xo,yo,zo).  The dataset will NOT be rotated if -points is given.
   * If -origin is NOT given, and NO dataset is given at the end of the
     command line, then xo=yo=zo=0 is assumed.  You probably don't
     want this.
   * (x,y,z) points are read from stdin as 3 ASCII-formatted numbers per
     line, as in 3dUndump.  Any succeeding numbers on input lines will
     be copied to the output, which will be written to stdout.
   * The input (x,y,z) coordinates are taken in the same order as the
     axes of the input dataset.  If there is no input dataset, then
       negative x = R  positive x = L  }
       negative y = A  positive y = P  } e.g., the DICOM order
       negative z = I  positive z = S  }
     One way to dump some (x,y,z) coordinates from a dataset is:

      3dmaskdump -mask something+tlrc -o xyzfilename -noijk
                 '3dcalc( -a dset+tlrc -expr x -datum float )'
                 '3dcalc( -a dset+tlrc -expr y -datum float )'
                 '3dcalc( -a dset+tlrc -expr z -datum float )'

     (All of this should be on one command line.)
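
     A small sketch of feeding points through 3drotate on the
     command line (coordinates hypothetical):

      echo "10 20 30" | 3drotate -points -origin 0 0 0 -rotate 30R 0 0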
============================================================================

Example: 3drotate -prefix Elvis -bshift 10S 0 0 -rotate 30R 0 0 Sinatra+orig

This will shift the input 10 mm in the superior direction, followed by a 30
degree rotation about the Right-to-Left axis (i.e., nod the head forward).

============================================================================
Algorithm: The rotation+shift is decomposed into 4 1D shearing operations
           (a 3D generalization of Paeth's algorithm).  The interpolation
           (i.e., resampling) method used for these shears can be controlled
           by the following options:

 -Fourier = Use a Fourier method (the default: most accurate; slowest).
 -NN      = Use the nearest neighbor method.
 -linear  = Use linear (1st order polynomial) interpolation (least accurate).
 -cubic   = Use the cubic (3rd order) Lagrange polynomial method.
 -quintic = Use the quintic (5th order) Lagrange polynomial method.
 -heptic  = Use the heptic (7th order) Lagrange polynomial method.

 -Fourier_nopad = Use the Fourier method WITHOUT padding
                * If you don't mind - or even want - the wraparound effect
                * Works best if dataset grid size is a power of 2, possibly
                  times powers of 3 and 5, in all directions being altered.
                * The main use would seem to be to un-wraparound poorly
                  reconstructed images, by using a shift; for example:
                   3drotate -ashift 30A 0 0 -Fourier_nopad -prefix Anew A+orig
                * This option is also available in the Nudge Dataset plugin.

 -clipit  = Clip results to input brick range [now the default].
 -noclip  = Don't clip results to input brick range.

 -zpad n  = Zeropad around the edges by 'n' voxels during rotations
              (these edge values will be stripped off in the output)
        N.B.: Unlike to3d, in this program '-zpad' adds zeros in
               all directions.
        N.B.: The environment variable AFNI_ROTA_ZPAD can be used
               to set a nonzero default value for this parameter.
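               For example, in csh syntax (value hypothetical):
                 setenv AFNI_ROTA_ZPAD 4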

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dsvm

Program: 3dsvm
Authors: Jeffery Prescott and Stephen LaConte

+++++++++++++++ 3dsvm: support vector machine analysis of brain data  +++++++++++++++

3dsvm - temporally predictive modeling with the support vector machine

   This program provides the ability to perform support vector machine
   (SVM) learning on AFNI datasets using the SVM-light package (version 5)
   developed by Thorsten Joachims (http://svmlight.joachims.org/).

-----------------------------------------------------------------------------
Usage:
------
	 3dsvm [options] 

Examples:
---------
1. Training: basic options require a training run, category (class) labels 
   for each timepoint, and an output model. In general, it makes
   sense to include a mask file to exclude at least non-brain voxels.

	 3dsvm -trainvol run1+orig \ 
	       -trainlabels run1_categories.1D \ 
	       -mask mask+orig \ 
	       -model model_run1

2. Training: obtain model alphas (a_run1.1D) and 
   model weights (fim: run1_fim+orig)

	 3dsvm -alpha a_run1 \
	       -trainvol run1+orig \ 
	       -trainlabels run1_categories.1D \ 
	       -mask mask+orig \ 
	       -model model_run1 \
	       -bucket run1_fim

3. Training: exclude some time points using a censor file 

	 3dsvm -alpha a_run1 \
	       -trainvol run1+orig \ 
	       -trainlabels run1_categories.1D \ 
	       -censor censor.1D \ 
	       -mask mask+orig \ 
	       -model model_run1 \
	       -bucket run1_fim

4. Training: control svm model complexity (C value)

	 3dsvm -c 100.0 \
	       -alpha a_run1 \
	       -trainvol run1+orig \ 
	       -trainlabels run1_categories.1D \ 
	       -censor censor.1D \ 
	       -mask mask+orig \ 
	       -model model_run1 \
	       -bucket run1_fim

5. Testing: basic options require a testing run, a model, and an output
   predictions file

	 3dsvm -testvol run2+orig \
	       -model model_run1+orig \
	       -predictions pred2_model1

6. Testing: compare predictions with 'truth' 

	 3dsvm -testvol run2+orig \
	       -model model_run1+orig \
	       -testlabels run2_categories.1D \
	       -predictions pred2_model1

7. Testing: use -classout to output integer thresholded class predictions
   (rather than continuous valued output)

	 3dsvm -classout \
	       -testvol run2+orig \
	       -model model_run1+orig \
	       -testlabels run2_categories.1D \
	       -predictions pred2_model1


options:
--------

------------------- TRAINING OPTIONS -------------------------------------------
-trainvol trnname      A 3D+t AFNI brik dataset to be used for training. 

-trainlabels lname     lname = filename of class category .1D labels 
                       corresponding to the stimulus paradigm for the 
                       training data set. The number of labels in the 
                       selected file must be equal to the number of 
                       time points in the training dataset. The labels
                       must be arranged in a single column, and they can
                       be any of the following values: 

                              0    - class 0
                              1    - class 1
                              n    - class n (where n is a positive integer)
                              9999 - censor this point 

                       It is recommended to use a continuous set of class
                       labels, starting at 0. See also -censor.

-censor cname          Specify a .1D censor file that allows the user
                       to ignore certain samples in the training data.
                       To ignore a specific sample, put a 0 in the
                       row corresponding to the time sample - i.e., to
                       ignore sample t, place a 0 in row t of the file.
                       All samples that are to be included for training
                       must have a 1 in the corresponding row. If no
                       censor file is specified, all samples will be used 
                       for training. Note the lname file specified by
                       trainlabels can also be used to censor time points
                       (see -trainlabels).

-a aname               Write the alpha file generated by SVM-Light to
                       aname.1D 
-alpha aname           Same as -a option above. 

-wout                  Flag to output sum of weighted linear support 
                       vectors to the bucket file. This is one means of
                       generating an "activation map" from linear kernel
                       SVMs (see LaConte et al., 2005). NOTE: this is 
                       currently not required since it is the only output
                       option.

-bucket bprefix        Currently only outputs the sum of weighted linear 
                       support vectors written out to a functional (fim) 
                       brik file. This is one means of generating an 
                       "activation map" from linear kernel SVMS 
                       (see LaConte et al, 2005). 

-mask mname            mname must be a byte-format brik file used to
                       mask voxels in the analysis. For example, a mask
                       of the whole brain can be generated by using 
                       3dAutomask, or more specific ROIs could be generated
                       with the Draw Dataset plugin or converted from a 
                       thresholded functional dataset. The mask is specified
                       during training but is also considered part of the 
                       model output and is automatically applied to test 
                       data. 

-nomodelmask           Flag to enable the omission of a mask file. If this
                       option is used for training, it must also be used 
                       for testing. 

------------------- TRAINING AND TESTING MUST SPECIFY MODNAME ------------------
-model modname         modname = basename for the output model brik and any
                       auxiliary files during training. For testing, modname
                       is used to specify the model brik. As in the
                       examples above: 

                           3dsvm -trainvol run1+orig \ 
                                 -trainlabels run1_categories.1D \ 
                                 -mask mask+orig \ 
                                 -model model_run1

                           3dsvm -testvol run2+orig \ 
                                 -model model_run1+orig  \ 
                                 -predictions pred2_model1

------------------- TESTING OPTIONS --------------------------------------------
-testvol tstname       A 3D or 3D+t AFNI brik dataset to be used for testing. 
                       A major assumption is that the training and testing
                       volumes are aligned, and that the voxels are the same
                       in number, size, etc.

-predictions pname     pname = basename for .1D files output for a test
                       dataset. These files consist of single columns of
                       value results for each test data timepoint. A
                       separate file is generated for each possible pair of
                       training classes. If more than two class categories
                       were specified, an "overall" file is also generated.
                       By default, the prediction values take on a continuous
                       range; to output integer-valued class decision values,
                       use the -classout flag.

-classout              Flag to specify that pname files should be integer-
                       valued, corresponding to class category decisions.

-nodetrend             Flag to specify that pname files should not be 
                       linearly de-trended (detrend is the current default).

-testlabels tlname     tlname = filename of 'true' class category .1D labels 
                       for the test dataset. It is used to calculate the 
                       prediction accuracy performance of SVM classification. 
                       If this option is not specified, then performance 
                       calculations are not made. Format is the same as 
                       lname specified for -trainlabels. 

-multiclass mctype     mctype specifies the multiclass algorithm for
                       classification. Current implementations use 1-vs-1
                       two-class SVM models.

                       mctype must be one of the following: 

                             DAG     [Default]:  Directed Acyclic Graph
                             vote             :  Max Wins from votes of all 1-vs-1 models

                       see http://cpu.bcm.edu/laconte/3dsvm for details and references.
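
                       For example (file names hypothetical):

                         3dsvm -multiclass vote \
                               -testvol run2+orig \
                               -model model_run1+orig \
                               -predictions pred2_model1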

------------------- INFORMATION OPTIONS --------------------------------------------
-help                  this help

-change_summary        describes changes of note and rough dates of their implementation





-------------------- SVM-light learn help -----------------------------

SVM-light V5.00: Support Vector Machine, learning module     30.06.02

Copyright: Thorsten Joachims, thorsten@ls8.cs.uni-dortmund.de

This software is available for non-commercial use only. It must not
be modified and distributed without prior permission of the author.
The author is not responsible for implications from the use of this
software.

   usage: svm_learn [options] example_file model_file

Arguments:
         example_file-> file with training data
         model_file  -> file to store learned decision rule in
General options:
         -?          -> this help
         -v [0..3]   -> verbosity level (default 1)
Learning options:
         -z {c,r,p}  -> select between classification (c), regression (r),
                        and preference ranking (p) (default classification)
         -c float    -> C: trade-off between training error
                        and margin (default [avg. x*x]^-1)
         -w [0..]    -> epsilon width of tube for regression
                        (default 0.1)
         -j float    -> Cost: cost-factor, by which training errors on
                        positive examples outweigh errors on negative
                        examples (default 1) (see [4])
         -b [0,1]    -> use biased hyperplane (i.e. x*w+b>0) instead
                        of unbiased hyperplane (i.e. x*w>0) (default 1)
         -i [0,1]    -> remove inconsistent training examples
                        and retrain (default 0)
Performance estimation options:
         -x [0,1]    -> compute leave-one-out estimates (default 0)
                        (see [5])
         -o ]0..2]   -> value of rho for XiAlpha-estimator and for pruning
                        leave-one-out computation (default 1.0) (see [2])
         -k [0..100] -> search depth for extended XiAlpha-estimator 
                        (default 0)
Transduction options (see [3]):
         -p [0..1]   -> fraction of unlabeled examples to be classified
                        into the positive class (default is the ratio of
                        positive and negative examples in the training data)
Kernel options:
         -t int      -> type of kernel function:
                        0: linear (default)
                        1: polynomial (s a*b+c)^d
                        2: radial basis function exp(-gamma ||a-b||^2)
                        3: sigmoid tanh(s a*b + c)
                        4: user defined kernel from kernel.h
         -d int      -> parameter d in polynomial kernel
         -g float    -> parameter gamma in rbf kernel
         -s float    -> parameter s in sigmoid/poly kernel
         -r float    -> parameter c in sigmoid/poly kernel
         -u string   -> parameter of user defined kernel
Optimization options (see [1]):
         -q [2..]    -> maximum size of QP-subproblems (default 10)
         -n [2..q]   -> number of new variables entering the working set
                        in each iteration (default n = q). Set n < q to
                        prevent zig-zagging.
         -m [5..]    -> size of cache for kernel evaluations in MB (default 40)
                        The larger the faster...
         -e float    -> eps: Allow that error for termination criterion
                        [y [w*x+b] - 1] >= eps (default 0.001)
         -h [5..]    -> number of iterations a variable needs to be
                        optimal before considered for shrinking (default 100)
         -f [0,1]    -> do final optimality check for variables removed
                        by shrinking. Although this test is usually 
                        positive, there is no guarantee that the optimum
                        was found if the test is omitted. (default 1)
Output options:
         -l string   -> file to write predicted labels of unlabeled
                        examples into after transductive learning
         -a string   -> write all alphas to this file after learning
                        (in the same order as in the training set)
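
   A minimal hypothetical invocation (classification, file names made up):
      svm_learn -z c -c 100.0 train.dat model.dat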

More details in:
[1] T. Joachims, Making Large-Scale SVM Learning Practical. Advances in
    Kernel Methods - Support Vector Learning, B. Schölkopf and C. Burges and
    A. Smola (ed.), MIT Press, 1999.
[2] T. Joachims, Estimating the Generalization performance of an SVM
    Efficiently. International Conference on Machine Learning (ICML), 2000.
[3] T. Joachims, Transductive Inference for Text Classification using Support
    Vector Machines. International Conference on Machine Learning (ICML),
    1999.
[4] K. Morik, P. Brockhausen, and T. Joachims, Combining statistical learning
    with a knowledge-based approach - A case study in intensive care  
    monitoring. International Conference on Machine Learning (ICML), 1999.
[5] T. Joachims, Learning to Classify Text Using Support Vector
    Machines: Methods, Theory, and Algorithms. Dissertation, Kluwer,
    2002.



-------------------- SVM-light classify help -----------------------------

SVM-light V5.00: Support Vector Machine, classification module     30.06.02

Copyright: Thorsten Joachims, thorsten@ls8.cs.uni-dortmund.de

This software is available for non-commercial use only. It must not
be modified and distributed without prior permission of the author.
The author is not responsible for implications from the use of this
software.

   usage: svm_classify [options] example_file model_file output_file

options: -h         -> this help
         -v [0..3]  -> verbosity level (default 2)
         -f [0,1]   -> 0: old output format of V1.0
                    -> 1: output the value of decision function (default)
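
   A minimal hypothetical invocation (file names made up):
      svm_classify test.dat model.dat predictions.dat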



--------------------------------------------------------------------------
Jeff W. Prescott and Stephen M. LaConte 

Original version written by JP and SL, August 2006 
Released to general public, July 2007 

Questions/Comments/Bugs - email slaconte@cpu.bcm.edu 

Reference:
LaConte, S., Strother, S., Cherkassky, V. and Hu, X. 2005. Support vector
    machines for temporal classification of block design fMRI data. 
    NeuroImage, 26, 317-329.



AFNI program: 3dttest
Gosset (Student) t-test sets of 3D datasets
Usage 1: 3dttest [options] -set1 datasets ... -set2 datasets ...
   for comparing the means of 2 sets of datasets (voxel by voxel).

Usage 2: 3dttest [options] -base1 bval -set2 datasets ...
   for comparing the mean of 1 set of datasets against a constant.

   ** or use -base1_dset

OUTPUTS:
 A single dataset is created that is the voxel-by-voxel difference
 of the mean of set2 minus the mean of set1 (or minus 'bval').
 The output dataset will be of the intensity+Ttest ('fitt') type.
 The t-statistic at each voxel can be used as an interactive
 thresholding tool in AFNI.

t-TESTING OPTIONS:
  -set1 datasets ... = Specifies the collection of datasets to put into
                         the first set. The mean of set1 will be tested
                         with a 2-sample t-test against the mean of set2.
                   N.B.: -set1 and -base1 are mutually exclusive!
  -base1 bval        = 'bval' is a numerical value that the mean of set2
                         will be tested against with a 1-sample t-test.
  -base1_dset DSET   = Similar to -base1, but input a dataset where bval
                         can vary over voxels.
  -sdn1  sd n1       = If this option is given along with '-base1', then
                         'bval' is taken to have standard deviation 'sd'
                         computed from 'n1' samples.  In this case, each
                         voxel in set2 is compared to bval using a
                         pooled-variance unpaired 2-sample t-test.
                         [This is for Tom Johnstone; hope we meet someday.]
  -set2 datasets ... = Specifies the collection of datasets to put into
                         the second set.  There must be at least 2 datasets
                         in each of set1 (if used) and set2.
  -paired            = Specifies the use of a paired-sample t-test to
                         compare set1 and set2.  If this option is used,
                         set1 and set2 must have the same cardinality.
                   N.B.: A paired test is intended for use when the set1 and set2
                         dataset function values may be pairwise correlated.
                         If they are in fact uncorrelated, this test has less
                         statistical 'power' than the unpaired (default) t-test.
                         This loss of power is the price that is paid for
                         insurance against pairwise correlations.
  -unpooled          = Specifies that the variance estimates for set1 and
                         set2 be computed separately (not pooled together).
                         This only makes sense if -paired is NOT given.
                   N.B.: If this option is used, the number of degrees
                         of freedom per voxel is a variable, rather
                         than a constant.
  -dof_prefix ddd    = If '-unpooled' is also used, then a dataset with
                         prefix 'ddd' will be created that contains the
                         degrees of freedom (DOF) in each voxel.
                         You can convert the t-value in the -prefix
                         dataset to a z-score using the -dof_prefix dataset
                         using commands like so:
           3dcalc -a 'pname+orig[1]' -b ddd+orig \
                  -datum float -prefix ddd_zz -expr 'fitt_t2z(a,b)'
           3drefit -substatpar 0 fizt ddd_zz+orig
                         At present, AFNI is incapable of directly dealing
                         with datasets whose DOF parameter varies between
                         voxels.  Converting to a z-score (with no parameters)
                         is one way of getting around this difficulty.

  -voxel voxel       = like 3dANOVA, get screen output for a given voxel.
                         This is 1-based, as with 3dANOVA.

The -base1 or -set1 command line switches must follow all other options
(including those described below) except for the -set2 switch.
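
For example, a paired test (dataset names hypothetical; note the
option order required above):

  3dttest -paired -prefix AvsB \
          -set1 s1_condA+tlrc s2_condA+tlrc \
          -set2 s1_condB+tlrc s2_condB+tlrc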

INPUT EDITING OPTIONS: The same as are available in 3dmerge.

OUTPUT OPTIONS: these options control the output files.
  -session  dirname  = Write output into given directory (default=./)
  -prefix   pname    = Use 'pname' for the output dataset prefix
                       (default=tdif)
  -datum    type     = Use 'type' to store the output difference
                       in the means; 'type' may be short or float.
                       How the default is determined is described
                       in the notes below.

NOTES:
 ** The input datasets are specified by their .HEAD files,
      but their .BRIK files must exist also! This program cannot
      'warp-on-demand' from other datasets.
 ** This program cannot deal with time-dependent or complex-valued datasets!
 ** By default, the output dataset function values will be shorts if the
      first input dataset is byte- or short-valued; otherwise they will be
      floats.  This behavior may be overridden using the -datum option.
 ** In the -set1/-set2 input list, you can specify a collection of
      sub-bricks from a single dataset using a notation like
        datasetname+orig'[5-9]'
      (the single quotes are necessary).  If you want to use ALL the
      sub-bricks from a multi-volume dataset, you can't just give the
      dataset filename -- you have to use
        datasetname+orig'[0-$]'
      Otherwise, the program will reject the dataset as being too
      complicated for it to understand.  [New in July 2007]

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

++ Compile date = Mar 13 2009




AFNI program: 3dvolreg
Usage: 3dvolreg [options] dataset
Registers each 3D sub-brick from the input dataset to the base brick.
'dataset' may contain a sub-brick selector list.

OPTIONS:
  -verbose        Print progress reports.  Use twice for LOTS of output.
  -Fourier        Perform the alignments using Fourier interpolation.
  -heptic         Use heptic polynomial interpolation.
  -quintic        Use quintic polynomial interpolation.
  -cubic          Use cubic polynomial interpolation.
                    Default = Fourier [slowest and most accurate interpolator]
  -clipit         Clips the values in each output sub-brick to be in the same
                    range as the corresponding input volume.
                    The interpolation schemes can produce values outside
                    the input range, which is sometimes annoying.
                    [16 Apr 2002: -clipit is now the default]
  -noclip         Turns off -clipit
  -zpad n         Zeropad around the edges by 'n' voxels during rotations
                    (these edge values will be stripped off in the output)
              N.B.: Unlike to3d, in this program '-zpad' adds zeros in
                     all directions.
              N.B.: The environment variable AFNI_ROTA_ZPAD can be used
                     to set a nonzero default value for this parameter.
  -prefix fname   Use 'fname' for the output dataset prefix.
                    The program tries not to overwrite an existing dataset.
                    Default = 'volreg'.
              N.B.: If the prefix is 'NULL', no output dataset will be written.

  -float          Force output dataset to be written in floating point format.
              N.B.: If the input dataset has scale factors attached to ANY
                    sub-bricks, then the output will always be written in
                    float format!

  -base n         Sets the base brick to be the 'n'th sub-brick
                    from the input dataset (indexing starts at 0).
                    Default = 0 (first sub-brick).
  -base 'bset[n]' Sets the base brick to be the 'n'th sub-brick
                    from the dataset specified by 'bset', as in
                       -base 'elvis+orig[4]'
                    The quotes are needed because the '[]' characters
                    are special to the shell.

  -dfile dname    Save the motion parameters in file 'dname'.
                    The output is in 9 ASCII formatted columns:

                    n  roll  pitch  yaw  dS  dL  dP  rmsold rmsnew

           where:   n     = sub-brick index
                    roll  = rotation about the I-S axis }
                    pitch = rotation about the R-L axis } degrees CCW
                    yaw   = rotation about the A-P axis }
                      dS  = displacement in the Superior direction  }
                      dL  = displacement in the Left direction      } mm
                      dP  = displacement in the Posterior direction }
                   rmsold = RMS difference between input brick and base brick
                   rmsnew = RMS difference between output brick and base brick
       N.B.: If the '-dfile' option is not given, the parameters aren't saved.
       N.B.: The motion parameters are those needed to bring the sub-brick
             back into alignment with the base.  In 3drotate, it is as if
             the following options were applied to each input sub-brick:
              -rotate 'roll'I 'pitch'R 'yaw'A  -ashift 'dS'S 'dL'L 'dP'P

  -1Dfile ename   Save the motion parameters ONLY in file 'ename'.
                    The output is in 6 ASCII formatted columns:

                    roll pitch yaw dS  dL  dP

                  This file can be used in FIM as an 'ort', to detrend
                  the data against correlation with the movements.
                  This type of analysis can be useful in removing
                  errors made in the interpolation.
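
                  For example (file and prefix names hypothetical):
                    3dvolreg -base 4 -zpad 4 -1Dfile mot.1D \
                             -prefix epi_vr epi_run1+orig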

  -1Dmatrix_save ff = Save the matrix transformation from base to input
                      coordinates in file 'ff' (1 row per sub-brick in
                      the input dataset).  If 'ff' does NOT end in '.1D',
                      then the program will append '.aff12.1D' to 'ff' to
                      make the output filename.
               *N.B.: This matrix is the coordinate transformation from base
                      to input DICOM coordinates.  To get the inverse matrix
                      (input to base), use the cat_matvec program, as in
                        cat_matvec fred.aff12.1D -I
               *N.B.: This matrix is the inverse of the matrix stored in
                      the output dataset VOLREG_MATVEC_* attributes.
                      The base-to-input convention followed with this
                      option corresponds to the convention in 3dAllineate.
               *N.B.: 3dvolreg does not have a '-1Dmatrix_apply' option.
                      See 3dAllineate for this function.  Also confer with
                      program cat_matvec.

  -rotcom         Write the fragmentary 3drotate commands needed to
                  perform the realignments to stdout; for example:
                    3drotate -rotate 7.2I 3.2R -5.7A -ashift 2.7S -3.8L 4.9P
                  The purpose of this is to make it easier to shift other
                  datasets using exactly the same parameters.

  -maxdisp      = Print the maximum displacement (in mm) for brain voxels.
                    ('Brain' here is defined by the same algorithm as used
                    in the command '3dAutomask -clfrac 0.33'; the displacement
                    for each non-interior point in this mask is calculated.)
                    If '-verbose' is given, the max displacement will be
                    printed to the screen for each sub-brick; otherwise,
                    just the overall maximum displacement will get output.
                    [-maxdisp is turned on by default]
  -nomaxdisp    = Do NOT calculate and print the maximum displacement.
                    [maybe it offends you in some theological sense?]
                    [maybe you have some real 'need for speed'?]
  -maxdisp1D mm = Do '-maxdisp' and also write the max displacement for each
                    sub-brick into file 'mm' in 1D (columnar) format.
                    You may find that graphing this file (cf. 1dplot)
                    is a useful diagnostic tool for your FMRI datasets.

  -tshift ii      If the input dataset is 3D+time and has slice-dependent
                  time-offsets (cf. the output of 3dinfo -v), then this
                  option tells 3dvolreg to time shift it to the average
                  slice time-offset prior to doing the spatial registration.
                  The integer 'ii' is the number of time points at the
                  beginning to ignore in the time shifting.  The results
                  should be like running program 3dTshift first, then running
                  3dvolreg -- this is primarily a convenience option.
            N.B.: If the base brick is taken from this dataset, as in
                  '-base 4', then it will be the time shifted brick.
                  If for some bizarre reason this is undesirable, you
                  could use '-base this+orig[4]' instead.

  -rotparent rset
    Specifies that AFTER the registration algorithm finds the best
    transformation for each sub-brick of the input, an additional
    rotation+translation should be performed before computing the
    final output dataset; this extra transformation is taken from
    the first 3dvolreg transformation found in dataset 'rset'.
  -gridparent gset
    Specifies that the output dataset of 3dvolreg should be shifted to
    match the grid of dataset 'gset'.  Can only be used with -rotparent.
    This dataset should be one that is properly aligned with 'rset' when
    overlaid in AFNI.
  * If 'gset' has a different number of slices than the input dataset,
    then the output dataset will be zero-padded in the slice direction
    to match 'gset'.
  * These options are intended to be used to align datasets between sessions:
     S1 = SPGR from session 1    E1 = EPI from session 1
     S2 = SPGR from session 2    E2 = EPI from session 2
 3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
 3dvolreg -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg \
          -base 4 E2+orig
     Each sub-brick in E2 is registered to sub-brick E2+orig[4], then the
      rotation from S2 to S2reg is also applied, with shifting+padding
      applied as needed to properly overlap with E1.
  * A similar effect could be done by using commands
 3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
 3dvolreg -prefix E2tmp -base 4 E2+orig
 3drotate -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg E2tmp+orig
    The principal difference is that the latter method results in E2
    being interpolated twice to make E2reg: once in the 3dvolreg run to
    produce E2tmp, then again when E2tmp is rotated to make E2reg.  Using
    3dvolreg with the -rotparent and -gridparent options simply skips the
    intermediate interpolation.

          *** Please read file README.registration for more   ***
          *** information on the use of 3dvolreg and 3drotate ***

 Algorithm: Iterated linearized weighted least squares to make each
              sub-brick as like as possible to the base brick.
              This method is useful for finding SMALL MOTIONS ONLY.
              See program 3drotate for the volume shift/rotate algorithm.
              The following options can be used to control the iterations:
                -maxite     m = Allow up to 'm' iterations for convergence
                                  [default = 19].
                -x_thresh   x = Iterations converge when maximum movement
                                  is less than 'x' voxels [default=0.020000],
                -rot_thresh r = And when maximum rotation is less than
                                  'r' degrees [default=0.030000].
                -delta      d = Distance, in voxel size, used to compute
                                  image derivatives using finite differences
                                  [default=0.700000].
                -final   mode = Do the final interpolation using the method
                                  defined by 'mode', which is one of the
                                  strings 'NN', 'cubic', 'quintic', 'heptic',
                                  or 'Fourier'
                                  [default=mode used to estimate parameters].
            -weight 'wset[n]' = Set the weighting applied to each voxel
                                  proportional to the brick specified here
                                  [default=smoothed base brick].
                                N.B.: if no weight is given, and -twopass is
                                  engaged, then the first pass weight is the
                                  blurred sum of the base brick and the first
                                  data brick to be registered.
                   -edging ee = Set the size of the region around the edges of
                                  the base volume where the default weight will
                                  be set to zero.  If 'ee' is a plain number,
                                  then it is a voxel count, giving the thickness
                                  along each face of the 3D brick.  If 'ee' is
                                   of the form '5%', then it is a fraction
                                   of each brick size.  For example, '5%' of
                                  a 256x256x124 volume means that 13 voxels
                                  on each side of the xy-axes will get zero
                                  weight, and 6 along the z-axis.  If this
                                  option is not used, then 'ee' is read from
                                  the environment variable AFNI_VOLREG_EDGING.
                                  If that variable is not set, then 5% is used.
                                N.B.: This option has NO effect if the -weight
                                  option is used.
                                N.B.: The largest % value allowed is 25%.
                     -twopass = Do two passes of the registration algorithm:
                                 (1) with smoothed base and data bricks, with
                                     linear interpolation, to get a crude
                                     alignment, then
                                 (2) with the input base and data bricks, to
                                     get a fine alignment.
                                  This method is useful when aligning high-
                                  resolution datasets that may need to be
                                  moved more than a few voxels to be aligned.
                  -twoblur bb = 'bb' is the blurring factor for pass 1 of
                                  the -twopass registration.  This should be
                                  a number >= 2.0 (which is the default).
                                  Larger values would be reasonable if pass 1
                                  has to move the input dataset a long ways.
                                  Use '-verbose -verbose' to check on the
                                  iterative progress of the passes.
                                N.B.: when using -twopass, and you expect the
                                  data bricks to move a long ways, you might
                                  want to use '-heptic' rather than
                                  the default '-Fourier', since you can get
                                  wraparound from Fourier interpolation.
                      -twodup = If this option is set, along with -twopass,
                                  then the output dataset will have its
                                  xyz-axes origins reset to those of the
                                  base dataset.  This is equivalent to using
                                  '3drefit -duporigin' on the output dataset.
                       -sinit = When using -twopass registration on volumes
                                  whose magnitude differs significantly, the
                                  least squares fitting procedure is started
                                  by doing a zero-th pass estimate of the
                                  scale difference between the bricks.
                                  Use this option to turn this feature OFF.
              -coarse del num = When doing the first pass, the first step is
                                  to do a number of coarse shifts in order to
                                  find a starting point for the iterations.
                                  'del' is the size of these steps, in voxels;
                                  'num' is the number of these steps along
                                  each direction (+x,-x,+y,-y,+z,-z).  The
                                  default values are del=10 and num=2.  If
                                  you don't want this step performed, set
                                  num=0.  Note that the amount of computation
                                  grows as num**3, so don't increase num
                                  past 4, or the program will run forever!
                             N.B.: The 'del' parameter cannot be larger than
                                   10% of the smallest dimension of the input
                                   dataset.
              -coarserot        Also do a coarse search in angle for the
                                  starting point of the first pass.
              -nocoarserot      Don't search angles coarsely.
                                  [-coarserot is now the default - RWCox]
              -wtinp          = Use sub-brick[0] of the input dataset as the
                                  weight brick in the final registration pass.

 N.B.: * This program can consume VERY large quantities of memory.
          (Rule of thumb: 40 bytes per input voxel.)
          Use of '-verbose -verbose' will show the amount of workspace,
          and the steps used in each iteration.
       * ALWAYS check the results visually to make sure that the program
          wasn't trapped in a 'false optimum'.
       * The default rotation threshold is reasonable for 64x64 images.
          You may want to decrease it proportionally for larger datasets.
       * -twopass resets the -maxite parameter to 66; if you want to use
          a different value, use -maxite AFTER the -twopass option.
       * The -twopass option can be slow; several CPU minutes for a
          256x256x124 volume is a typical run time.
       * After registering high-resolution anatomicals, you may need to
          set their origins in 3D space to match.  This can be done using
          the '-duporigin' option to program 3drefit, or by using the
          '-twodup' option to this program.

++ Compile date = Mar 13 2009




AFNI program: 4swap
Usage: 4swap [-q] file ...
-- Swaps byte quadruples on the files listed.
   The -q option means to work quietly.



AFNI program: @4Daverage

**********************************
This script is somewhat outdated.
I suggest you use 3dMean which is
faster, meaner and not limited to
the alphabet.   ZSS, 03/14/03
**********************************

Usage : @4Daverage <output prefix> <3D+t brik names...>
This script file uses 3dcalc to compute average 3D+time bricks
example : @4Daverage NPt1av NPt1r1+orig NPt1r2+orig NPt1r3+orig
The output NPt1av+orig is the average of the three bricks
 NPt1r1+orig, NPt1r2+orig and NPt1r3+orig

You can use wildcards such as
 @4Daverage test ADzst2*.HEAD AFzst2r*.HEAD 
 Make sure you do not pass both .HEAD and .BRIK names.
 If you do so, they will be counted twice.
The bricks to be averaged must be listed individually.
The total number of bricks that can be averaged at once (26)
is determined by 3dcalc.

Ziad Saad Nov 21 97, Marquette University
Modified to accept wild cards Jan 24 01, FIM/LBC/NIH
Ziad S. Saad (saadz@mail.nih.gov)



AFNI program: @AddEdge
A script to create composite edge-enhanced datasets and drive
 the AFNI interface to display the results
The script helps visualize registration results and is an important
 part of assessing image alignment

Basic usage:

   @AddEdge base_dset dset1 dset2 ....

   The output is a composite image of each dset_nn with the base
   dataset, where the composite image is the base dataset with the
   edges of each input dataset and its own edges

   Use without any parameters to drive AFNI's display to show
   the previously computed results from this script

   The script requires all input datasets to share the same grid, so
   a previous resample step may be required. Also it is recommended
   to use skull-stripped input datasets to avoid extraneous and
   extracranial edges.

A typical use may be to compare the effect of alignment
 as in this example for the alignment of anatomical dataset with an
 epi dataset:

   @AddEdge epi_rs+orig. anat_ns+orig anat_ns_al2epi+orig

 Note this particular kind of usage is included in the
   align_epi_anat.py script as the -AddEdge option

To examine results, open afni in listen mode and
rerun @AddEdge with no options.

   afni -niml -yesplugouts &
   @AddEdge

Using the typical case example above, the edges from the EPI
 are shown in cyan (light blue); the edges from the anat dataset
 are shown in purple. Overlapping edges are shown in dark purple.
 Non-edge areas (most of the volume) are shown in a monochromatic
 amber color scale in the overlay layer of the AFNI image window.
 The underlay contains the edge-enhanced anat dataset with edges
 of the anat dataset alone and no EPI edges.
By looking for significant overlap and close alignment of the
 edges of internal structures of the brain, one can assess the
 quality of the alignment.
The script prompts the user in the terminal window to cycle between
 the pre-aligned and post-aligned dataset views. Options are also
 given to save images as jpeg files or to quit the @AddEdge script

The colormap used is the AddEdge color scale which uses a monochrome
 amber for the overlay and purple, cyan and dark purple for edges

Several types of datasets are created by this script, but using the
 @AddEdge script without options is the best way to visualize these
 datasets. The result datasets can be grouped by their suffix as
 follows:

dset_nn_ec : edge composite image of dataset with its own edges
base_dset_dset_nn_ec : edge composite image of base dataset together
                 with the edges of the input dset_nn dataset
base_dset_e3, dset_nn_e3: edge-only datasets - used in single edge
                 display option

Available options (must precede the dataset names):

 -help         : this help screen
 -examinelist mmmm : use list of paired datasets from file mmmm
               (default is _ae.ExamineList.log)
 -ax_mont 'montformat': axial montage string (default='2x2:24')
 -ax_geom 'geomformat': axial image window geometry
               (default = '777x702+433+334')
 -sag_geom 'geomformat': sagittal image window geometry
               (default = '540x360+4+436')
 -layout mmmm  : use AFNI layout file mmmm for display
 -no_layout    : do not use layout. Use AFNI as it is open.
 -edge_percentile nn: specify edge threshold value (default=30%)
 -single_edge  : show only a single edge in composite image
 -opa          : set opacity of overlay (default=9 opaque)
 -keep_temp    : do not remove temporary files




AFNI program: @AfniOrient2RAImap
Usage: @AfniOrient2RAImap <orientation code> .....
returns the index map for the RAI directions

examples:
@AfniOrient2RAImap RAI
returns: 1 2 3
@AfniOrient2RAImap LSP
returns: -1 -3 -2

Ziad Saad (saadz@mail.nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland



AFNI program: @Align_Centers

Usage: @Align_Centers <-base BASE> <-dset DSET> [-no_cp] 
                     [-child CHILD_2 ... CHILD_N] 

   Moves the center of DSET to the center of BASE.
   By default, center refers to the center of the volume's voxel grid.
   Use -cm to use the brain's center of mass instead.

   AND/OR creates the transform matrix XFORM.1D needed for this shift.
   The transform can be used with 3dAllineate's -1Dmatrix_apply 
      3dAllineate   -1Dmatrix_apply XFORM.1D 
                    -prefix PREFIX -master BASE
                    -input DSET

   -1Dmat_only: Only output the transform needed to align
                the centers. Do not shift the volumes.
                The transform is named DSET_shft.1D
   -base BASE: Base volume, typically a template.
   -dset DSET: Typically an anatomical dset to be
               aligned to BASE.
   -child CHILD_'*': A bunch of datasets, originally
                     in register with DSET, that
                     should be shifted in the same
                     way.
   -no_cp: Do not create new data, shift existing ones
           This is a good option if you know what you 
           are doing. It will save you a lot of space.
           See NOTE below before using it.

    DSET and CHILD_'*' are typically all the datasets 
    from a particular scanning session that
    you want to eventually align to BASE.
    Such an operation is needed when DSET and CHILD_'*'
    overlap very little, if at all, with BASE.

 Note that you can specify *.HEAD for the children even 
 if the wildcard substitution would contain DSET 
 and possibly even BASE. The script will not process
 a dataset twice in one execution.

 Center options:
   -grid: (default) Center is that of the volume's grid
   -cm : Center is the center of mass of the volume.
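
  Example (dataset names hypothetical):
    @Align_Centers -base TT_N27+tlrc -dset anat+orig \
                   -child epi_r1+orig.HEAD epi_r2+orig.HEAD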


   See also @Center_Distance

 NOTE: Running the script multiple times on the same data
       will cause a lot of trouble. That is why the default
       is to create new datasets as opposed to shifting the
       existing ones. Do not use -no_cp unless you know what
       you are doing.
       To undo errors caused by repeated executions
       look at the history of each dset and undo
       the excess 3drefit operations.

Requires 3drefit newer than Oct. 02/02.

Ziad Saad (saadz@mail.nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland




AFNI program: @Center_Distance

Usage: @Center_Distance <-dset DSET_1 DSET_2> 

   Returns the distance between the centers 
   of DSET_1 and DSET_2




AFNI program: @CheckForAfniDset
Usage: @CheckForAfniDset <Name> .....
example: @CheckForAfniDset /Data/stuff/Hello+orig.HEAD
returns 0 if neither .HEAD nor .BRIK(.gz)(.bz2)(.Z) exist
          OR in the case of an error
             An error also sets the status flag
        1 if only .HEAD exists
        2 if both .HEAD and .BRIK(.gz)(.bz2)(.Z) exist
        3 if .nii dataset 
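example of use in a csh script (assuming the code is echoed to
stdout; variable name hypothetical):
  set code = `@CheckForAfniDset /Data/stuff/Hello+orig.HEAD`
  if ( $code == 2 ) echo "found both .HEAD and .BRIK"
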
Ziad Saad (saadz@mail.nih.gov)
  SSCC/NIMH/ National Institutes of Health, Bethesda Maryland




AFNI program: @CommandGlobb

Usage: @CommandGlobb -com <Command> -session <OutputDir> -newxt <Extension> -list <Brick 1> <Brick 2> ...

<Command> : The entire command line for the program desired
 The command is best put between single quotes; do not use the \ to break a long line within the quotes
<Brick list> : a list of bricks (or anything)
<Extension> : if the program requires a -prefix option, then you can specify the extension
 which will get appended to the Brick names before +orig
<OutputDir> : The output directory

example
@CommandGlobb -com '3dinfo -v' -list *.HEAD
will execute '3dinfo -v' on each of the *.HEAD headers

@CommandGlobb -com '3dZeropad -z 4' -newxt _zpd4 -list ADzst*vr+orig.BRIK
will run 3dZeropad with the -z 4 option on all the bricks ADzst*vr+orig.BRIK

Ziad S. Saad (saadz@mail.nih.gov). FIM/LBC/NIMH/NIH. Wed Jan 24 



AFNI program: @DO.examples
Usage: @DO.examples 
A script to illustrate the use of Displayable Objects in SUMA.
Read this script and see SUMA's interactive help (ctrl+h) 
section for Ctrl+Alt+s for more details on SUMA's Displayable objects.




AFNI program: @DTI_studio_reposition
@DTI_studio_reposition <Analyze_volume.hdr> <AFNI_volume>
This script reslices and repositions a DTI Studio Analyze format
volume to match an AFNI volume used as input data for DTI Studio.
Check realignment with AFNI to be sure all went well.

Example:
Fibers.hdr is an Analyze volume from DTI Studio that contains
   fiber tract volume data. The Analyze format data will have two files:
   Fibers.hdr with the header data and Fibers.img with the data.
   DTI Studio allows saving the fibers as volumes in the Fiber panel
   (disk icon in the lower right).
FA+orig is an AFNI volume to which to match the Analyze volume
To create an AFNI brick version of Fibers that is in alignment
 with FA+orig (output is Fibers+orig):

@DTI_studio_reposition Fibers.hdr FA+orig




AFNI program: @DoPerRoi.py
Error: Option -dsets needs at least 1 parameter.
Parameter list has 0 parameters.
{'basename': , '-dsets': , '-areas_2': , '-areas': }
Option Name: basename
       Found: -1
       User Parameter List: None
       Default Parameter List: []

Option Name: -dsets
       Found: -1
       User Parameter List: []
       Default Parameter List: []

Option Name: -areas_2
       Found: -1
       User Parameter List: ['CA_ZOUZOU', 'areea_6', 'areea_4a', 'areea_4p']
       Default Parameter List: ['CA_ZOUZOU', 'areea_6', 'areea_4a', 'areea_4p']

Option Name: -areas
       Found: -1
       User Parameter List: ['CA_N27_MPM', 'area_6', 'area_4a', 'area_4p']
       Default Parameter List: ['CA_N27_MPM', 'area_6', 'area_4a', 'area_4p']




AFNI program: @FromRAI

Usage: @FromRAI <-xyz X Y Z> <-or ORIENT>

   Changes the RAI coordinates X Y Z to
   orientation ORIENT
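
   Example (coordinates picked for illustration only):
   @FromRAI -xyz 10 20 30 -or LPI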




AFNI program: @GetAfniBin
@GetAfniBin  : Returns path where afni executable resides.



AFNI program: @GetAfniDims
@GetAfniDims dset
Return the dimensions of dset
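
Example (hypothetical dataset name):
@GetAfniDims anat+orig.HEAD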




AFNI program: @GetAfniID
@GetAfniID DSET
 Returns the unique identifier of a dataset.
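
Example (hypothetical dataset name):
@GetAfniID anat+orig.HEAD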



AFNI program: @GetAfniOrient
Usage: @GetAfniOrient <Name>
example: @GetAfniOrient Hello+orig.HEAD
returns the orient code of Hello+orig.HEAD
Ziad Saad (saadz@mail.nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland



AFNI program: @GetAfniPrefix
Usage: @GetAfniPrefix <Name> [Suffix]
example: @GetAfniPrefix /Data/stuff/Hello+orig.HEAD
returns the afni prefix of name (Hello)
Wildcards are treated as regular characters:
example: @GetAfniPrefix 'AAzst1r*+orig'
returns : AAzst1r*

If a Suffix string is specified, then it is
appended to the returned prefix.

Ziad Saad (saadz@mail.nih.gov)
  LBC/NIMH/ National Institutes of Health, Bethesda Maryland




AFNI program: @GetAfniRes
@GetAfniRes [-min|-max|-mean] dset
Return the voxel resolution of dset
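
Example (hypothetical dataset name):
@GetAfniRes -min anat+orig.HEAD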




AFNI program: @GetAfniView
Usage: @GetAfniView <Name>
example: @GetAfniView /Data/stuff/Hello+orig.HEAD
returns the afni view of Name (+orig)
Wildcards are treated as regular characters:
example: @GetAfniView 'AAzst1r*+orig'
returns : +orig

Ziad Saad (saadz@mail.nih.gov)
LBC/NIMH/ National Institutes of Health, Bethesda Maryland




AFNI program: @IsoMasks
Usage: @IsoMasks -mask DSET -isovals v1 v2 ...
Creates isosurfaces from isovolume envelopes.

For example, to create contours of TLRC regions:
 @IsoMasks -mask ~/abin/TTatlas+tlrc'[0]' -isovals  `count -digits 1 1 77` 




AFNI program: @NoExt
Usage: @NoExt <Name> <Ext1> <Ext2> ...
example: @NoExt Hello.HEAD HEAD BRIK
returns Hello
@NoExt Hello.BRIK HEAD BRIK
returns Hello
@NoExt Hello.Jon HEAD BRIK
returns Hello.Jon

Ziad Saad (saadz@mail.nih.gov)
LBC/NIMH/ National Institutes of Health, Bethesda Maryland




AFNI program: @NoisySkullStrip

Usage: @NoisySkullStrip <-input ANAT> 
                     [-keep_tmp] [-3dSkullStrip_opts OPTS]

Strips the skull off anatomical datasets with low SNR.
You can recognize such datasets by the presence of relatively
elevated (grayish) signal values outside the skull.

This script does some pre-processing before running 3dSkullStrip.
If you're intrigued, read the code.

This script is experimental and has only been tested on a dozen nasty
datasets. So use it ONLY when you need it, i.e. when 3dSkullStrip 
fails on its own and you have low SNR.

Examples of use:
   For a normal anatomy with low SNR
   @NoisySkullStrip -input anat+orig

   For an anatomy with lots of CSF and low SNR
   Note how 3dSkullStrip options are passed after -3dSkullStrip_opts
@NoisySkullStrip  -input old_anat+orig \
               -3dSkullStrip_opts \
                  -use_skull -blur_fwhm 1 -shrink_fac_bot_lim 0.4


Mandatory parameters:
   -input ANAT : The anatomical dataset
Optional parameters:
   -3dSkullStrip_opts SSOPTS: Anything following this option is passed
                              to 3dSkullStrip
   -keep_tmp: Do not erase temporary files at the end.

The script outputs the following:
   ANAT.ns  : A skull stripped version of ANAT
   ANAT.air and ANAT.skl: A couple of special masks

Do send me feedback on this script's performance.

Ziad S. Saad, March 28 08.
saadz@mail.nih.gov




AFNI program: @Purify_1D
Usage: @Purify_1D [<-sub SUB_STRING>] dset1 dset2 ...
Purifies a series of 1D files for faster I/O into matlab.
  -sub SUB_STRING: You can use the sub-brick selection
                   mode, a la AFNI, to output a select
                   number of columns. See Example below.
  -suf STRING:     STRING is attached to the output prefix
                   which is formed from the input names

Example:
    @Purify_1D -sub '[0,3]' somedataset.1D.dset

Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov




AFNI program: @RenamePanga

Usage: @RenamePanga <Dir #> <First Image #> <# slices> <# reps> <Prefix>
                   [-kp] [-i] [-oc] [-sp Pattern] [-od OutputDirectory]

Creates AFNI bricks from RealTime GE EPI series.

This script is designed to run from the directory where the famed RT image directories are copied to.
If the data were copied from fim3T-adw using @RTcp, this directory should be something like:
/mnt/arena/03/users/sdc-nfs/Data/RTime/2009.03.14///

<Dir #> : (eg: 3) The directory number where the first image of the series is stored.
<First Image #> : (eg: 19) The number of the first image in the series.
<# slices> : (eg: 18) The number of slices making up the imaged volume.
<# reps> : (eg: 160) The number of samples in your time series.
<Prefix> : (eg: PolcCw) The prefix for the output brick.
                 Bricks are automatically saved into the output directory.
                 Unless you use the -kp option, bricks are automatically named
                 <Prefix>_r# where # is generated each time you 
                 run the script and successfully create a new brick.

Optional Parameters:
 -i : Launches to3d in interactive mode. This allows you to double check the automated settings.
 -kp: Forces @RenamePanga to use the prefix you designate without modification.
 -oc: Performs outlier checking. This is useful to do, but it slows to3d down and
  may be annoying when checking your data while scanning. If you choose -oc, the
  outliers are written to a .1D file and placed in the output directory.
 -sp Pattern: Sets the slice acquisition pattern. The default option is alt+z.
  See to3d -help for various acceptable options.
 -od <OutputDirectory> : Directory where the output (bricks and 1D files) will
  be stored. The default directory is ./afni


A log file (MAPLOG_Panga) is created in the current directory.
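
Example (an illustrative command, using the sample values from the
parameter descriptions above):
@RenamePanga 3 19 18 160 PolcCw -od ./afni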

Panga: A state of revenge.
***********
Dec 4 2001 Changes:
- No longer requires the program pad_str.
- Uses to3d to read geometric slice information.
- Allows for bypassing the default naming convention.
- You need to be running AFNI built after Dec 3 2001 to use this script.
- Swapping needs are now determined by to3d.
If to3d complains about not being able to determine swapping needs, check the data manually.
- Geom parent option (-gp) has been removed.
- TR is no longer set from the command line; it is obtained from the image headers.
Thanks to Jill W., Mike B. and Shruti J. for reporting bugs and testing the scripts.
***********


 Version 3.2 (09/02/03)  Ziad Saad (saadz@mail.nih.gov) Dec 5 2001   SSCC/LBC/NIMH.



AFNI program: @SUMA_AlignToExperiment

Usage: 
@SUMA_AlignToExperiment <-exp_anat Experiment_Anatomy> <-surf_anat Surface_Anatomy> 
              [-dxyz DXYZ] [-wd] [-prefix PREFIX] [-EA_clip_below CLP]
              [-align_centers] [-ok_change_view] [-strip_skull WHICH]

Creates a version of Surface Anatomy that is registered to Experiment Anatomy.

Mandatory parameters:
<-exp_anat Experiment_Anatomy>: Name of high resolution anatomical data set in register 
        with experimental data.
<-surf_anat Surface_Anatomy>: Path and name of the high resolution anatomical data set used to 
        create the surface.

  NOTE: In the old usage, there were no -exp_anat and -surf_anat flags and the two 
  volumes had to appear first on the command line and in the proper order.

Optional parameters:
   [-dxyz DXYZ]: This optional parameter indicates that the anatomical 
        volumes must be downsampled to dxyz mm voxel resolution before 
        registration. That is only necessary if 3dvolreg runs out of memory.
        You MUST have 3dvolreg that comes with afni distributions newer than 
        version 2.45l. It contains an option for reducing memory usage and 
        thus allows the registration of large data sets.
   [-out_dxyz DXYZ]: Output the final aligned volume at a cubic voxel size
                     of DXYZ mm. The default is based on the grid of ExpVol.
   [-wd]: Use 3dWarpDrive's general affine transform (12 param) instead of 
        3dvolreg's 6 parameters.
        If the anatomical coverage differs markedly between 'Experiment 
        Anatomy' and 'Surface Anatomy', you might need to use the 
        -EA_clip_below option, or you could end up with a very distorted brain.
        The default now is to use the -coarserot option with 3dWarpDrive; this
        should make the program more robust. If you want to try running 
        without it, then add -ncr along with -wd.
        I would be interested in examining cases where -wd option failed to 
        produce a good alignment.
   [-al]: Use 3dAllineate to do the 12 parameter alignment. Cost function
          is lpa
   [-al_opt 'Options for 3dAllineate']: Specify set of options between quotes
                                           to pass to 3dAllineate.   
   [-ok_change_view]: Be quiet when view of registered volume is changed
                      to match that of the Experiment_Anatomy, even when
                      rigid body registration is used.
   [-strip_skull WHICH]: Use 3dSkullStrip to remove non-brain tissue and 
                         potentially improve the alignment. WHICH can be
                         one of 'exp_anat' or 'both'. In the first case,
                         the skull is removed from Experiment_Anatomy
                         dataset. With 'both' the skull is removed from
                         Experiment_Anatomy and Surface_Anatomy.
   [-skull_strip_opt 'Options For 3dSkullStrip']: Pass the options between
                         quotes to 3dSkullStrip.
   [-align_centers]: Adds an additional transformation to align the volume
                     centers. This is a good option to use when volumes
                     are severely out of alignment.
   [-EA_clip_below CLP]: Set slices below CLPmm in 'Experiment Anatomy' to zero.
        Use this if the coverage of 'Experiment Anatomy' dataset
        extends far below the data in 'Surface Anatomy' dataset.
        To get the value of CLP, use AFNI to locate the slice
        below which you want to clip and set CLP to the z coordinate
        from AFNI's top left corner. Coordinate must be in RAI, DICOM.
   [-prefix PREFIX]: Use PREFIX for the output volume. Default is the prefix 
                     of the 'Surface Anatomy' suffixed by _Alnd_Exp.
   [-surf_anat_followers Fdset1 Fdset2 ...]: Apply the same alignment
                transform to datasets Fdset1, Fdset2, etc.
                This must be the last option on the command line.
                All parameters following it are considered datasets.
                You can transform other follower dsets manually by
                executing: 
         3dAllineate -master Experiment_Anatomy \
                     -1Dmatrix_apply Surface_Anatomy_Alnd_Exp.A2E.1D \
                     -input Fdset   \
                     -prefix Fdset_Alnd_Exp+orig \
                     -final NN
   [-followers_interp KERNEL]: Set the interpolation mode for the 
                               follower datasets. Default is NN, which 
                               is appropriate for ROI datasets.
                               Allowed KERNEL values are:
                               NN, linear, cubic, or quintic
   [-keep_tmp]: Keep temporary files for debugging. Note that you should
                delete temporary files before rerunning the script.


NOTE: You must run the script from the directory where Experiment Anatomy resides.

Example 1: For datasets with no relative distortion and comparable coverage.
           Using 6 param. rigid body transform.
@SUMA_AlignToExperiment -exp_anat DemoSubj_spgrsa+orig. \
                        -surf_anat ../FreeSurfer/SUMA/DemoSubj_SurfVol+orig.

Example 2: For datasets with some distortion and different coverage.
           Using 12 param. transform and clipping of areas below cerebellum:
@SUMA_AlignToExperiment -exp_anat ABanat+orig. -surf_anat DemoSubj_SurfVol+orig. \
                       -wd -prefix DemoSubj_SurfVol_WD_Alnd_Exp \
                       -EA_clip_below -30

Example 3: For two monkey T1 volumes with very different resolutions and severe
           shading artifacts.
@SUMA_AlignToExperiment    -surf_anat MOanat+orig. -al \
                           -exp_anat MoExpanat+orig. \
                           -strip_skull both -skull_strip_opt -monkey \
                           -align_centers \
                           -out_dxyz 0.3
More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm

Ziad Saad (saadz@mail.nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland




AFNI program: @SUMA_FSvolToBRIK
Usage: @SUMA_FSvolToBRIK <COR- directory or .mgz volume> <output prefix>
A script to convert COR- or .mgz files from FreeSurfer.
 DO NOT use this script for general purpose .mgz conversions
 Use mri_convert instead.
Example 1: Taking COR- images in mri/orig to BRIK volume
      @SUMA_FSvolToBRIK mri/orig test/cor_afni

Example 2: Taking .mgz volume to BRIK volume
      @SUMA_FSvolToBRIK mri/aseg.mgz test/aseg_afni

To view segmented volumes in AFNI, use the FreeSurfer
color scale by doing:
   Define Overlay --> Pos? (on)
   Choose continuous (**) colorscale
   Right Click on colorscale --> Choose Colorscale
   Select FreeSurfer_Seg_255
   Set Range to 255




AFNI program: @SUMA_Make_Spec_Caret

@SUMA_Make_Spec_Caret - prepare for surface viewing in SUMA

    This script was tested with Caret-5.2 surfaces.

    This script goes through the following steps:
      - determine the location of surfaces and 
        the AFNI volume data sets used to create them.
      - creation of left and right hemisphere SUMA spec files

      - all created files are stored in the directory where 
        surfaces are encountered

  Usage: @SUMA_Make_Spec_Caret [options] -sid SUBJECT_ID

  examples:

    @SUMA_Make_Spec_Caret -sid subject1
    @SUMA_Make_Spec_Caret -help
    @SUMA_Make_Spec_Caret -sfpath subject1/surface_stuff -sid subject1

  options:

    -help    : show this help information

    -debug LEVEL    : print debug information along the way
          e.g. -debug 1
          the default level is 0, max is 2

    -sfpath PATH    : path to directory containing 'SURFACES'
                      and AFNI volume used in creating the surfaces.
          e.g. -sfpath subject1/surface_models
          the default PATH value is './', the current directory

          This is generally the location of the 'SURFACES' directory,
          though having PATH end in SURFACES is OK.  

          Note: when this option is provided, all file/path
          messages will be with respect to this directory.


    -sid SUBJECT_ID : required subject ID for file naming


  notes:

    0. More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm
    1. Surface file names should look like the standard names used by Caret:
       Human.3dAnatomy.LR.Fiducial.2006-05-09.54773.coord
       Human.3dAnatomy.LR.CLOSED.2006-05-09.54773.topo
       Otherwise the script cannot detect them. You will need to decide which
       surface is the most recent (the best); the script helps you by listing
       the available surfaces with the most recent one first.
       This sorting usually works except when the time stamps on the surface files
       are messed up. In such a case you just need to know which one to use.
       Once the Fiducial surface is chosen, its complementary surfaces are selected
       using the node number in the file name.
    2. You can tailor the script to your needs. Just make sure you rename it or risk
       having your modifications overwritten with the next SUMA version you install.
    3. The script looks for Fiducial, Raw, VeryInflated, and Inflated
       surfaces; let us know if more need to be sought.
    4. The test data I had contained .R. and .LR. surfaces! I am not sure what .LR.
       means since the surfaces are for one hemisphere, but the script will use
       these surfaces too.

     R. Reynolds (rickr@codon.nih.gov), Z. Saad (saadz@mail.nih.gov)




AFNI program: @SUMA_Make_Spec_FS

@SUMA_Make_Spec_FS - prepare for surface viewing in SUMA

    This script goes through the following steps:
      - verify existence of necessary programs 
        (afni, to3d, suma, mris_convert)
      - determine the location of surface and COR files
      - creation of ascii surface files via 'mris_convert'
      - creation of left and right hemisphere SUMA spec files
      - creation of an AFNI dataset from the COR files via 'to3d'
      - creation of AFNI datasets from various .mgz volumes created
        by FreeSurfer. The segmentation volumes with aseg in the 
        name are best viewed in AFNI with the FreeSurfer_Seg_255
        colormap. See bottom of @SUMA_FSvolToBRIK -help for more
        info.

      - all created files are stored in a new SUMA directory

  Usage: @SUMA_Make_Spec_FS [options] -sid SUBJECT_ID

  examples:

    @SUMA_Make_Spec_FS -sid subject1
    @SUMA_Make_Spec_FS -help
    @SUMA_Make_Spec_FS -fspath subject1/surface_stuff -sid subject1
    @SUMA_Make_Spec_FS -neuro -sid 3.14159265 -debug 1

  options:

    -help    : show this help information

    -debug LEVEL    : print debug information along the way
          e.g. -debug 1
          the default level is 0, max is 2

    -fspath PATH    : path to 'surf' and 'orig' directories
          e.g. -fspath subject1/surface_info
          the default PATH value is './', the current directory

          This is generally the location of the 'surf' directory,
          though having PATH end in surf is OK.  The mri/orig
          directory should also be located here.

          Note: when this option is provided, all file/path
          messages will be with respect to this directory.

    -neuro          : use neurological orientation
          e.g. -neuro
          the default is radiological orientation

          In the default radiological orientation, the subject's
          right is on the left side of the image.  In the
          neurological orientation, left is really left.

    -sid SUBJECT_ID : required subject ID for file naming


  notes:

    0. More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm
    1. Surface file names should look like 'lh.smoothwm'.
    2. Patches of surfaces need the word patch in their name, in
       order to use the correct option for 'mris_convert'.
    3. Flat surfaces must have .flat in their name.
    4. You can tailor the script to your needs. Just make sure you rename it or risk
       having your modifications overwritten with the next SUMA version you install.

     R. Reynolds (rickr@codon.nih.gov)
     Z. Saad (saadz@mail.nih.gov)
     M. Beauchamp (Michael.S.Beauchamp@uth.tmc.edu)




AFNI program: @SUMA_Make_Spec_SF

@SUMA_Make_Spec_SF - prepare for surface viewing in SUMA

Use @SUMA_Make_Spec_Caret for caret surfaces

    This script goes through the following steps:
      - determine the location of surfaces and 
        the AFNI volume data sets used to create them.
      - creation of left and right hemisphere SUMA spec files

      - all created files are stored in SURFACES directory

  Usage: @SUMA_Make_Spec_SF [options] -sid SUBJECT_ID

  examples:

    @SUMA_Make_Spec_SF -sid subject1
    @SUMA_Make_Spec_SF -help
    @SUMA_Make_Spec_SF -sfpath subject1/surface_stuff -sid subject1

  options:

    -help    : show this help information

    -debug LEVEL    : print debug information along the way
          e.g. -debug 1
          the default level is 0, max is 2

    -sfpath PATH    : path to directory containing 'SURFACES'
                      and AFNI volume used in creating the surfaces.
          e.g. -sfpath subject1/surface_models
          the default PATH value is './', the current directory

          This is generally the location of the 'SURFACES' directory,
          though having PATH end in SURFACES is OK.  

          Note: when this option is provided, all file/path
          messages will be with respect to this directory.


    -sid SUBJECT_ID : required subject ID for file naming


  notes:

    0. More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm
    1. Surface file names should look like the standard names used by SureFit:
       rw_1mmLPI.L.full.segment_vent_corr.fiducial.58064.coord
       Otherwise the script cannot detect them. You will need to decide which
       surface is the most recent (the best); the script helps you by listing
       the available surfaces with the most recent one first.
       This sorting usually works except when the time stamps on the surface files
       are messed up. In such a case you just need to know which one to use.
       Once the fiducial surface is chosen, its complementary surfaces are selected
       using the node number in the file name.
    2. You can tailor the script to your needs. Just make sure you rename it or risk
       having your modifications overwritten with the next SUMA version you install.

     R. Reynolds (rickr@codon.nih.gov), Z. Saad (saadz@mail.nih.gov)




AFNI program: @ScaleVolume

Usage: @ScaleVolume <-input DSET> <-prefix PREFIX>
                     [-perc_clip P0 P1] [-val_clip V0 V1]
Scales a volume so that its values range between V0 and V1
-val_clip V0 V1: Min and Max of output dset
                 Default V0 = 0 and V1 = 255
-perc_clip P0 P1: Set lowest P0 percentile to Min 
                  and highest P1 percentile to Max
                  Default P0 = 2 and P1 = 98
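
Example (a sketch; the dataset and prefix names are hypothetical):
@ScaleVolume -input anat+orig -prefix anat_scl -val_clip 0 255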




AFNI program: @ShowDynamicRange
Usage: @ShowDynamicRange <EPI time series dset>
The script checks the dynamic range of the time series data
at locations inside the brain.

The input dataset is an epi timeseries that has just been assembled
from your reconstructed images

The output consists of the following:
- A dataset whose prefix ends with minpercchange
  which shows the percent signal change that an increment of 1 digitized
  value in the time series corresponds to.
- A dataset whose prefix ends with .range
  which shows the number of discrete levels used to 
  represent the time series.

The script outputs the average range and the average %change corresponding
to a unit digitized signal.

To be safe, one should have a dynamic range that does not introduce noise 
at the level of expected response differences between tasks.
For example, if a unit step corresponds to 0.3% signal change then you may
not be able to detect differences of comparable magnitude in the FMRI 
response to two tasks.
These differences may be obscured by digitization noise.
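
Example (hypothetical dataset name):
@ShowDynamicRange epi_r1+orig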




AFNI program: @Spharm.examples

Usage: @Spharm.examples
A script to demonstrate the usage of spherical harmonics decomposition 
with SUMA

To run it you will need some of SUMA's N27 tlrc surfaces, which can be 
downloaded from: http://afni.nimh.nih.gov/pub/dist/tgz/suma_tlrc.tgz
The surfaces needed are lh.pial.tlrc.ply, lh.smoothwm.tlrc.ply, lh.sphere.asc, and N27_lh_tlrc.spec

To change the parameter settings, make a copy of this script
and modify the section at the top called 'INIT_VARS'
If you do not make a copy of this script, future AFNI updates will
overwrite your changes.

         Ziad S. Saad               SSCC/NIMH/NIH




AFNI program: @SurfSmooth.HEAT_07.examples
Usage: @SurfSmooth.HEAT_07.examples 
A script to illustrate controlled blurring of data on the surface.
Requires archive: http://afni.nimh.nih.gov/pub/dist/edu/data/SUMA_demo.tgz




AFNI program: @TTxform_anat
A script to transform an anatomical dataset
to match a template in TLRC space. 
Usage: @TTxform_anat [options] <-base template> <-input anat>
Mandatory parameters:
   -base template :  Skull-stripped volume in TLRC space (+tlrc)
   -input anat    :  Original (with skull) anatomical volume (+orig)
Optional parameters:
   -no_ss         :  Do not strip skull of input data set
                     (because skull has already been removed
                      or because template still has the skull)
   -keep_view     :  Do not mark output dataset as +tlrc
   -pad_base  MM  :  Pad the base dset by MM mm in each direction.
                     That is needed to make sure that datasets
                     requiring wild rotations do not get cropped.
                     Default is MM = 30
   -verb          :  Yakiti yak yak

Example:
@TTxform_anat -base N27_SurfVol_NoSkull+tlrc. -input DemoSubj_spgrsa+orig.




AFNI program: @ToRAI

Usage: @ToRAI <-xyz X Y Z> <-or ORIENT>

   Changes the ORIENT coordinates X Y Z to
   RAI
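
   Example (coordinates picked for illustration only):
   @ToRAI -xyz 10 20 30 -or LPI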




AFNI program: @UpdateAfni
Usage: @UpdateAfni
Updates AFNI on your computer using wget.
If you are using the program for the first time,
you must add some info about your computer into the script.
You can easily do so by modifying the template in the block SETDESTIN.
IMPORTANT: Rename this script once you modify it. Otherwise,  
it will get overwritten whenever you update your AFNI distribution.
Before the update begins, executables from the current version
are copied into the $localBIN.bak directory.
For more info, see:
http://afni.nimh.nih.gov/~cox/afni_wget.html
Ziad Saad (saadz@mail.nih.gov) SSCC/NIMH/NIH, Bethesda MD USA



AFNI program: @VolCenter

Usage: @VolCenter <-dset DSET> [-or ORIENT]

   Returns the center of volume DSET
   The default coordinate system of the center
   is the same as that of DSET, unless another
   coordinate system is specified with the 
   -or option

Example:
@VolCenter -dset Vol+orig.BRIK -or RAI
 outputs the center of Vol+orig in RAI coordinate system




AFNI program: @align_partial_oblique
Usage 1: A script to align a full coverage T1 weighted non-oblique dataset
         to match a partial coverage T1 weighted oblique dataset.
         Alignment is done with a rotation and shift (6 parameters) transform
         only.

 Script is still in testing phase

   @align_partial_oblique [options] <-base FullCoverageT1> <-input PartialCoverageObliqueT1>
   Mandatory parameters:
      -base  FullCoverageT1:  Reference anatomical full coverage volume.

      -input  PartialCoverageObliqueT1:  The name says it all.

   Optional parameters:
      -suffix  SUF   :  Output dataset name is formed by adding SUF to
                        the prefix of the base dataset.
                        The default suffix is _alnd_PartialCoverageObliqueT1
      -keep_tmp      :  Keep temporary files.
      -clean         :  Clean all temp files, likely left from -keep_tmp
                        option then exit.
      -dxyz MM          : Cubic voxel size of the output DSET.
                          Default MM is 1. If you do not
                          want your output voxels to be cubic,
                          then use the -dx, -dy, -dz options below.
      -dx MX            : Size of voxel in the x direction
                          (Right-Left). Default is 1mm.
      -dy MY            : Size of voxel in the y direction
                          (Anterior-Posterior). Default is 1mm.
      -dz MZ            : Size of voxel in the z direction
                          (Inferior-Superior). Default is 1mm.
   Example:
   @align_partial_oblique -base ah_SurfVol+orig. -input ah_T1W_anat+orig.


Written by Ziad S. Saad, for Ikuko (saadz@mail.nih.gov)
                        SSCC/NIMH/NIH/DHHS




AFNI program: @auto_align
Beginnings of a script to improve alignment of EPI to anatomical data

   @auto_align <-input epi> <-base anat>
                 [-blr_input FWHM_input] [-blr_base FWHM_base] [-blr_all FWHM]
                 [-suffix SUF] [-keep_tmp] 

      -keep_tmp      :  Keep temporary files.
      -xform  XFORM  : Transform to use for warping:
                       Choose from affine_general or shift_rotate_scale
                       Default is affine_general but the script will
                       automatically try to use shift_rotate_scale 
                       if the alignment does not converge.
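
   Example (a sketch; the dataset names are hypothetical):
   @auto_align -input epi_r1+orig -base anat+orig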




AFNI program: @auto_tlrc
Usage 1: A script to transform an anatomical dataset
         to match a template in TLRC space. 

   @auto_tlrc [options] <-base template> <-input anat>
   Mandatory parameters:
      -base template :  Reference anatomical volume in TLRC space (+tlrc).
                        Preferably, this reference volume should have had
                        the skull removed but that is not mandatory.
                        AFNI's distribution contains a few templates:
                        TT_N27+tlrc --> Single subject, skull stripped volume.
                                     This volume is also known as 
                                     N27_SurfVol_NoSkull+tlrc elsewhere in 
                                     AFNI and SUMA land.
                                     (www.loni.ucla.edu, www.bic.mni.mcgill.ca)
                                     This template has a full set of FreeSurfer
                                     (surfer.nmr.mgh.harvard.edu)
                                     surface models that can be used in SUMA. 
                                     For details, see Talairach-related link:
                                     http://afni.nimh.nih.gov/afni/suma
                        TT_icbm452+tlrc --> Average volume of 452 normal brains.
                                         Skull Stripped. (www.loni.ucla.edu)
                        TT_avg152T1+tlrc --> Average volume of 152 normal brains.
                                         Skull Stripped.(www.bic.mni.mcgill.ca)
                        TT_EPI+tlrc --> EPI template from spm2, masked as TT_avg152T1
                                        TT_avg152 and TT_EPI volume sources are from
                                        SPM's distribution. (www.fil.ion.ucl.ac.uk/spm/)

                        If you do not specify a path for the template, the script
                        will attempt to locate the template in AFNI's binaries directory.

                        NOTE: These datasets have been slightly modified from
                              their original size to match the standard TLRC
                              dimensions (Jean Talairach and Pierre Tournoux
                              Co-Planar Stereotaxic Atlas of the Human Brain
                              Thieme Medical Publishers, New York, 1988). 
                              That was done for internal consistency in AFNI.
                              You may use the original form of these
                              volumes if you choose but your TLRC coordinates
                              will not be consistent with AFNI's TLRC database
                              (San Antonio Talairach Daemon database), for example.
      -input anat    :  Original anatomical volume (+orig).
                        The skull is removed by this script
                        unless instructed otherwise (-no_ss).
   Optional parameters:
      -no_ss         :  Do not strip skull of input data set
                        (because skull has already been removed
                        or because template still has the skull)
      NOTE: The -no_ss option is not all that optional.
         Here is a table of when you should and should not use -no_ss
   
                        Template          Template
                        WITH skull        WITHOUT skull
         Dset.
         WITH skull      -no_ss            xxx 
         
         WITHOUT skull   No Cigar          -no_ss
         
         Template means: Your template of choice
         Dset. means: Your anatomical dataset
         -no_ss means: Skull stripping should not be attempted on Dset
         xxx means: Don't put anything, the script will strip Dset
         No Cigar means: Don't try that combination, it makes no sense.
               
      -pad_base  MM  :  Pad the base dset by MM mm in each direction.
                        That is needed to make sure that datasets
                        requiring wild rotations do not get cropped.
                        Default is MM = 40.
                        If your output dataset is clipped, try increasing
                        MM to 50 or 60.
                        If that does not help, make sure
                        that the skull-stripped volume has no clipping.
                        If it does, then the skull stripping needs to
                        be corrected. Feel free to report such instances
                        to the script's authors.
      -keep_tmp      :  Keep temporary files.
      -clean         :  Clean all temp files, likely left from -keep_tmp
                        option then exit.
      -xform  XFORM  : Transform to use for warping:
                       Choose from affine_general or shift_rotate_scale
                       Default is affine_general but the script will
                       automatically try to use shift_rotate_scale 
                       if the alignment does not converge.
      -no_avoid_eyes : An option that gets passed to 3dSkullStrip.
                       Use it when parts of the frontal lobes get clipped.
                       See 3dSkullStrip -help for more details.
      -ncr           : 3dWarpDrive option -coarserot is now a default.
                       It will cause no harm, only good shall come of it.
                       -ncr is there, however, should you choose NOT
                       to have coarserot used for some reason.
      -onepass       : Turns off -twopass option for 3dWarpDrive. This will
                       speed up the registration but it might fail if the 
                       datasets are far apart.          
      -twopass       : Opposite of -onepass, default.
      -maxite NITER  : Maximum number of iterations for 3dWarpDrive.
                       Note that the script will try to increase the 
                       number of iterations if needed. 
                       When the maximum number of iterations is reached
                       without meeting the convergence criteria,
                       the script will double the number of iterations
                       and try again. If the second pass still fails,
                       the script will stop unless the user specifies the
                       -OK_maxite option.
      -OK_maxite     : See -maxite option.
      -rigid_equiv   : Also output the rigid-body version of the 
                       alignment. This would align the brain with
                       the TLRC axes without any distortion. Note that
                       the resultant .Xrigid volume is NOT in TLRC
                       space. Do not use this option if you do not
                       know what to do with it!
                       For more information on how the rigid-body
                       equivalent transformation is obtained, see
                       the output of cat_matvec -help for the -P option. 

   Example:
   @auto_tlrc -base TT_N27+tlrc. -input SubjectHighRes+orig.
    (the output is named SubjectHighRes_at+TLRC, by default.
     See -suffix for more info.)

Usage 2: A script to transform any dataset by the same TLRC 
         transform obtained with @auto_tlrc in Usage 1 mode

         Note: You can now also use adwarp instead.

   @auto_tlrc [options] <-apar TLRC_parent> <-input DSET>
   Mandatory parameters:
      -apar TLRC_parent : An anatomical dataset in tlrc space
                          created using Usage 1 of @auto_tlrc
                          From the example for usage 1, TLRC_parent
                          would be: SubjectHighRes_at+TLRC
      -input DSET       : Dataset (typically an EPI time series or
                          statistical dataset) to transform to
                          tlrc space per the xform in TLRC_parent.
      -dxyz MM          : Cubic voxel size of output DSET in TLRC
                          space. Default MM is 1. If you do not
                          want your output voxels to be cubic,
                          then use the -dx, -dy, -dz options below.
      -dx MX            : Size of voxel in the x direction
                          (Right-Left). Default is 1mm.
      -dy MY            : Size of voxel in the y direction
                          (Anterior-Posterior). Default is 1mm.
      -dz MZ            : Size of voxel in the z direction
                          (Inferior-Superior). Default is 1mm.
   Optional parameters:
      -pad_input  MM    :  Pad the input DSET by MM mm in each direction.
                        That is needed to  make sure that datasets
                        requiring wild rotations do not get cropped.
                        Default is MM = 40.
                        If your output dataset is clipped, try increasing
                        MM to 50 or 60.
                        If that does not help, report the
                        problem to the script's authors.

   Example:
   @auto_tlrc  -apar SubjectHighRes_at+tlrc. \
                  -input Subject_EPI+orig. -dxyz 3
    (the output is named Subject_EPI_at+TLRC, by default.)

Common Optional parameters:
   -rmode     MODE:  Resampling mode. Choose from:
                      linear, cubic, NN or quintic.
                     Default for 'Usage 1' is quintic
                     Default for 'Usage 2' is quintic
   -suffix    SUF :  Name the output dataset by appending SUF 
                     to the prefix of the input data.
                     Default for SUF is _at
              NOTE:  You can now set SUF to 'none' or 'NONE' and enable
                     afni's warp on demand features.
   -keep_view     :  Do not mark output dataset as +tlrc
   -verb          :  Yakiti yak yak


When you're down and troubled and you need a helping hand:
   1- Oh my God! The brain is horribly distorted (by Jason Stein):
      The probable cause is a failure of 3dWarpDrive to converge.
      In that case, rerun the script with the option 
      -xform shift_rotate_scale. That usually takes care of it.
      Update:
      The script now has a mechanism for detecting cases 
      where convergence is not reached and it will automatically
      change -xform to fix the problem. So you should see very 
      few such cases. If you do, check the skull stripping
      step for major errors and if none are found send the
      authors a copy of the command you used, the input and base
      data and they'll look into it.
   2- Parts of the frontal cortex are clipped in the output:
      That is likely caused by aggressive skull stripping.
      When that happens, use the -no_avoid_eyes option.
   3- Other parts of the brain are missing:
      Examine the skull stripped version of the brain
      If the source of the problem is with the stripping,
      then you'll need to run 3dSkullStrip manually and 
      select the proper options for that dataset.
      Once you have a satisfactorily stripped brain, use that
      version as input to @auto_tlrc along with the -no_ss option.
   4- Skull stripped dataset looks OK, but TLRC output is clipped.
      Increase the padding from the default value by little more 
      than the size of the clipping observed. (see -pad_* 
      options above)
   5- The high-res anatomical ends up at a lower resolution: 
      That is because your template is at a lower resolution.
      To preserve (or control) the resolution of your input,
      run @auto_tlrc in usage 2 mode and set the resolution
      of the output with the -d* options.
   6- I want the skulled anatomical, not just the stripped
      anatomical in TLRC space:
      Use @auto_tlrc in usage 2 mode.
   7- What if I want to warp EPI data directly into TLRC space?
      If you have an EPI template in TLRC space you can use it
      as the base in @auto_tlrc, usage 1 mode. You can use whatever
      you want as a template. Just make sure you are warping
      apples to oranges, not apples to bananas for example.
   8- Bad alignment still:
      Check that the center of your input data set is not too
      far off from that of the template. Centers (not origins)
      of the templates we have are close to 0, 0, 0. If your
      input dataset is 100s of mm off center then the alignment
      will fail. The solution is to shift all of the input data
      in your session by an equal amount, to get the centers closer
      to zero. For example, say the center of your subject's volumes
      is around 100, 100, 100. To shift the centers close to 0, 0, 0 do:
      3drefit -dxorign -100 -dyorign -100 -dzorign -100 Subject_Data+orig
      Then use @auto_tlrc on the shifted datasets.
      Take care not to shift datasets from the same session by differing
      amounts as they will no longer be in alignment.

Written by Ziad S. Saad (saadz@mail.nih.gov)
                        SSCC/NIMH/NIH/DHHS




AFNI program: @clip_volume
Usage 1: A script to clip regions of a volume

   @clip_volume <-input VOL> <-below Zmm> [ [-and/-or] <-above Zmm> ]

   Mandatory parameters:
      -input VOL: Volume to clip
    + At least one of the options below:
      -below Zmm: Set to 0 slices below Zmm
                  Zmm (and all other coordinates) are in RAI
                  as displayed by AFNI on the top left corner
                  of the AFNI controller
      -above Zmm: Set to 0 slices above Zmm
      -left  Xmm: Set to 0 slices left of Xmm
      -right  Xmm: Set to 0 slices right of Xmm
      -anterior Ymm: Set to 0 slices anterior to Ymm
      -posterior Ymm: Set to 0 slices posterior to Ymm

    Optional parameters:
      -and (default): Combine with next clipping planes using 'and'
      -or           : Combine with next clipping planes using 'or'
      -verb         : Verbose, show command
      -crop         : Crop the output volume
      -prefix PRFX  : Use PRFX for output prefix. Default is the 
                      input prefix with _clp suffixed to it.

Example:
@clip_volume -below -30 -above 53 -left 20 -right -13 -anterior -15 \
             -posterior 42 -input ABanat+orig. -verb -prefix sample

Written by Ziad S. Saad (saadz@mail.nih.gov)
                        SSCC/NIMH/NIH/DHHS




AFNI program: @fast_roi
Usage: @fast_roi <-region REGION1> [<-region REGION2> ...]
                     <-base TLRC_BASE> <-anat ANAT> 
                     <-roi_grid GRID >
                     <-prefix PREFIX >
                     [-time] [-help]
Creates atlas-based ROI masks in ANAT's original space.
The script is meant to work rapidly for realtime FMRI applications.
Parameters:
  -region REGION: Symbolic atlas-based region name. 
                  See whereami -help for details.
                  You can use repeated instances of this option
                  to specify a mask of numerous regions.
                  Each region is assigned a power of 2 integer
                  in the output mask.
  -base TLRC_BASE:  Name of reference TLRC volume. See @auto_tlrc
                    for more details on this option. Note that
                    for the purposes of speeding up the process,
                    you might want to create a lower resolution
                     version of the templates in AFNI. In the
                    example shown below, TT_N27_r2+tlrc was created
                    with: 
           3dresample  -dxyz 2 2 2 -rmode Li -prefix ./TT_N27_r2 \
                       -input /var/www/html/pub/dist/bin/linux_gcc32/TT_N27+tlrc. 
                    where TT_N27+tlrc is usually in the directory 
                    under which afni resides.
  -anat ANAT: Anat is the volume to be put in std space. It does not
              need to be a T1 weighted volume but you need to choose
              a similarly weighted TLRC_BASE.
  -roi_grid GRID: The volume that defines the final ROI's grid.
  -prefix PREFIX: PREFIX is used to tag the names of the output ROIs.
  -time: A flag to make the script output elapsed time reports.
  -help: Output this message.

The ROI of interest is in a volume called ROI.PREFIX+orig.

The script follows these steps:
  1- Strip skull off of ANAT+orig 
     Output is called nosk.ANAT+orig and is reused if present.
  2- Transform nosk.ANAT+orig to TLRC space.
     Output is called nosk.ANAT+tlrc and is reused if present.
  3- Create ROI in TLRC space using 3dcalc.
     Output is ROIt.PREFIX+tlrc and is overwritten if present.
  4- Create ROI in GRID's orig space using 3dFractionize.
     Output is ROI.PREFIX+orig and is overwritten if present.

Examples ( require AFNI_data3/afni, and 
           3dresample's output from command shown above):
     @fast_roi  -region CA_N27_ML::Hip -region CA_N27_ML::Amygda \
                 -base TT_N27_r2+tlrc. -anat anat1+orig.HEAD  \
                 -roi_grid epi_r1+orig -prefix toy -time

    If you want another ROI given the same -anat and -base volumes:
     @fast_roi  -region CA_N27_ML::Superior_Temporal_Gyrus \
                 -region CA_N27_ML::Putamen \
                 -base TT_N27_r2+tlrc. -anat anat1+orig.HEAD  \
                 -roi_grid epi_r1+orig -prefix toy -time




AFNI program: @fix_FSsphere

Usage: @fix_FSsphere <-spec SPEC> <-sphere SPHERE.asc>
                     [-niter NITER] [-lim LIM] [-keep_temp]
                     [-project_first]

   Fixes errors in FreeSurfer spherical surfaces.
   Mandatory parameters:
   -spec SPEC: Spec file
   -sphere SPHERE.asc: SPHERE.asc is the sphere to be used.
   Optional parameters:
   -niter NITER: Number of local smoothing operations.
                 Default is 3000
   -lim LIM: Extent, in mm, by which troubled sections 
             are fattened. Default is 6
   -project_first: Project to a sphere, before smoothing.
                   Default is: 0

   Output:
   Corrected surface is called SPHERE_fxd.asc

Example:
@fix_FSsphere -spec ./2005-10-01-km_rh.spec -sphere ./rh.sphere.asc




AFNI program: @float_fix

Usage: @float_fix File1 File2 ...

   Checks whether the input files contain any illegal IEEE
   floating point values: infinities and
   not-a-number (NaN) values.

 NOTE: Wildcards can be used when specifying filenames. However,
       the filenames have to end with .HEAD. For example:
       @float_fix Mozart*.HEAD

Gang Chen (gangchen@mail.nih.gov) and Ziad Saad (saadz@nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland
01/24/2007




AFNI program: @isOblique
Usage: @isOblique <Name>
example: @isOblique Hello+orig.HEAD
returns 1 if Hello+orig.HEAD is oblique
        0 if Hello+orig.HEAD is plumb.
Ziad Saad (saadz@mail.nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland



AFNI program: @make_plug_diff

Usage: @make_plug_diff -vtk VTKDIR -xm XMDIR -asrc ASRCDIR -abin ABINDIR 
Compiles AFNI's diffusion plugin.  
I used it as a way to log what is needed to compile the plugin.
We should work closely with Greg Balls and Larry Frank to make the
need for this script obsolete.
 Options:
   -comments: output comments only
   -linux: flag for doing linuxy things 
   -vtk VTKDIR: Directory where vtk is installed
   -xm XMDIR: Directory where motif is installed
   -asrc ASRCDIR: Full path to AFNI's src/ directory 
   -abin ABINDIR: Path, relative to ASRCDIR, to abin
   -diff DIFFDIR: name of directory containing diffusion code

Sample compilation on GIMLI (OSX 10.5)
   @make_plug_diff         -vtk /sw    -xm /sw  \
                           -asrc /Users/ziad/b.AFNI.now/src \
                           -abin ../abin  -diff afni-diff-plugin-0.86

Sample compilation on linux (FC 10)
   @make_plug_diff         -xm /usr -asrc /home/ziad/b.AFNI.now/src \
                           -abin ../abin -diff afni-diff-plugin-0.86 \
                           -linux




AFNI program: @make_stim_file

@make_stim_file - create a time series file, suitable for 3dDeconvolve

    This script reads in column headers and stimulus times for
    each header (integers), and computes a 'binary' file (all
    0s and 1s) with column headers, suitable for use as input to
    3dDeconvolve.

    The user must specify an output file on the command line (using
    -outfile), and may specify a maximum repetition number for rows
    of output (using -maxreps).
------------------------------
  Usage: @make_stim_file [options] -outfile OUTFILE

  examples:

    @make_stim_file -outfile green_n_gold
    @make_stim_file -outfile green_n_gold < my_input_file
    @make_stim_file -maxreps 200 -outfile green_n_gold -headers
    @make_stim_file -help
    @make_stim_file -maxreps 200 -outfile green_n_gold -debug 1
------------------------------
  options:

    -help            : show this help information

    -debug LEVEL     : print debug information along the way
          e.g. -debug 1
          the default is 0, max is 2

    -outfile OUTFILE : (required) results are sent to this output file
          e.g. -outfile green.n.gold.out

    -maxreps REPS    : use REPS as the maximum repetition time
          e.g. -maxreps 200
          the default is to use the maximum rep time from the input

          This option basically pads the output columns with 0s,
          so that each column has REPS rows (of 1s and 0s).

    -no_headers      : do not include headers in output file
          e.g. -no_headers
          the default is print column headers (# commented out)

    -zero_based      : consider stim times as zero-based numbers
          e.g. -zero_based
          the default is 1-based (probably a bad choice...)


------------------------------
  Notes:

    1. It is probably easiest to use redirection from an input file
       for execution of the program.  That way, mistakes can be more
       easily fixed and retried.  See 'Sample execution 2'.

    2. Since most people start off with stimulus data in columns, and
       since this program requires input in rows for each header, it
       may be easiest to go through a few initial steps:
           - make sure all data is in integer form
           - make sure all blank spaces are filled with 0
           - save the file to an ascii data file (without headers)
           - use AFNI program '1dtranspose' to convert column data
             to row format
           - add the column headers back to the top of the ascii file

    3. The -maxreps option is recommended when using redirection, so
       that the user does not have to add the value to the bottom of
       the file.
------------------------------
  Sample execution 1: (typing input on command line)

    a. executing the following command:

       @make_stim_file -outfile red_blue_out

    b. and providing input data as follows:

       headers -> red blue
       'red' -> 2 4
       'blue' -> 2 3 5
       maxreps -> 6

    c. will produce 'red_blue_out', containing:

       red blue
       0 0
       1 1
       0 1
       1 0
       0 1
       0 0
------------------------------
  Sample execution 2: (using redirection)

    a. given input file 'my_input_file': (a text file with input data)

       red blue
       2 4
       2 3 5
       6

    b. run the script using redirection with -maxreps option

      @make_stim_file -maxreps 6 -outfile red_blue_out < my_input_file

    c. now there exists output file 'red_blue_out':

       red blue
       0 0
       1 1
       0 1
       1 0
       0 1
       0 0
------------------------------
  R. Reynolds
------------------------------



AFNI program: @np

Usage: @np <prefix>

 Finds an appropriate new prefix to use, given the files
 you already have in your directory. 
 Use this script to automatically create a valid prefix
 when you are repeatedly running similar commands but
 do not want to delete previous output.

 In addition to checking for valid AFNI prefix,
 the script will look for matching files with extensions:
    1D 1D.dset m nii asc ply 1D.coord 1D.topo coord topo srf 

 Script is slow, it is for lazy people.
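
 Example (hypothetical prefix):
    @np test
 prints a prefix based on 'test' that does not collide
 with files already in the directory.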




AFNI program: @parse_afni_name
Usage 1: A script to parse an AFNI name

   @parse_afni_name <name>

Outputs the path, prefix, view and sub-brick selection string.
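
Example (hypothetical name):
   @parse_afni_name /Data/stuff/Hello+orig'[0]'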




AFNI program: @parse_name
Usage 1: A script to parse a filename

   @parse_name <name>

Outputs the path, prefix and extension strings.
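
Example (hypothetical name):
   @parse_name /Data/stuff/Hello.nii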




AFNI program: @statauxcode


AFNI program: AFNI.afnirc



AFNI program: AFNI_Batch_R



AFNI program: AlphaSim
++ AlphaSim: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
This program performs alpha probability simulations.  

Usage: 
AlphaSim 
-nx n1        n1 = number of voxels along x-axis                      
-ny n2        n2 = number of voxels along y-axis                      
-nz n3        n3 = number of voxels along z-axis                      
-dx d1        d1 = voxel size (mm) along x-axis                       
-dy d2        d2 = voxel size (mm) along y-axis                       
-dz d3        d3 = voxel size (mm) along z-axis                       
-nxyz n1 n2 n3   = give all 3 grid dimensions at once                 
-dxyz d1 d2 d3   = give all 3 voxel sizes at once                     
[-mask mset]      Use the 0 sub-brick of dataset 'mset' as a mask     
                    to indicate which voxels to analyze (a sub-brick  
                    selector is allowed)  [default = use all voxels]  
                  Note:  The -mask command also REPLACES the          
                         -nx, -ny, -nz, -dx, -dy, and -dz commands,   
                         and takes the volume dimensions from 'mset'. 
[-fwhm s]     s  = Gaussian filter width (FWHM, in mm)                
[-fwhmx sx]   sx = Gaussian filter width, x-axis (FWHM)               
[-fwhmy sy]   sy = Gaussian filter width, y-axis (FWHM)               
[-fwhmz sz]   sz = Gaussian filter width, z-axis (FWHM)               
[-sigma s]    s  = Gaussian filter width (1 sigma, in mm)             
[-sigmax sx]  sx = Gaussian filter width, x-axis (1 sigma)            
[-sigmay sy]  sy = Gaussian filter width, y-axis (1 sigma)            
[-sigmaz sz]  sz = Gaussian filter width, z-axis (1 sigma)            

[-power]      perform statistical power calculations                  
[-ax n1]      n1 = extent of active region (in voxels) along x-axis   
[-ay n2]      n2 = extent of active region (in voxels) along y-axis   
[-az n3]      n3 = extent of active region (in voxels) along z-axis   
[-zsep z]     z = z-score separation between signal and noise         

[-rmm r]      r  = cluster connection radius (mm)                     
                   Default is nearest neighbor connection only.       
-pthr p       p  = individual voxel threshold probability             
-iter n       n  = number of Monte Carlo simulations                  
[-quiet]      suppress lengthy per-iteration screen output            
[-out file]   file = name of output file [default value = screen]     
[-max_clust_size size]  size = maximum allowed voxels in a cluster    
[-seed S]     S  = random number seed
                   default seed = 1234567
                   if seed=0, then program will randomize it
[-fast]       Use a faster random number generator:
                Can speed program up by about a factor of 2,
                but detailed results will differ slightly since
                a different sequence of random values will be used.

Unix environment variables you can use:
---------------------------------------
 Set AFNI_BLUR_FFT to YES to require blurring be done with FFTs
   (the oldest way, and slowest).
 Set AFNI_BLUR_FFT to NO and AFNI_BLUR_FIROLD to YES to require
   blurring to be done with the old (crude) FIR code (not advised).
 If neither of these are set, then blurring is done using the newer
   (more accurate) FIR code (recommended).
 Results will differ in detail depending on the blurring method
   used to generate the simulated noise fields.

SAMPLE OUTPUT:
--------------
 AlphaSim -nxyz 64 64 10 -dxyz 3 3 3 -iter 10000 -pthr 0.004 -fwhm 3 -quiet -fast

Cl Size     Frequency    CumuProp     p/Voxel   Max Freq       Alpha
      1       1316125    0.898079  0.00401170          0    1.000000
      2        126353    0.984298  0.00079851       1023    1.000000
      3         18814    0.997136  0.00018155       5577    0.897700
      4          3317    0.999400  0.00004375       2557    0.340000
      5           688    0.999869  0.00001136        653    0.084300
      6           150    0.999971  0.00000296        148    0.019000
      7            29    0.999991  0.00000076         29    0.004200
      8             8    0.999997  0.00000027          8    0.001300
      9             5    1.000000  0.00000011          5    0.000500

 That is, thresholded random noise alone (no signal) would produce a
 cluster of size 6 or larger 1.9% (Alpha) of the time, in a 64x64x10
 volume with cubical 3 mm voxels and a FWHM noise smoothness of 3 mm.

N.B.: If you run the exact command above, you will get slightly
 different results, due to variations in the random numbers generated
 in the simulations.
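 As a further illustration, a mask dataset can stand in for the
 explicit grid options (the dataset name here is hypothetical):

  AlphaSim -mask mask+orig -fwhm 4 -pthr 0.005 -iter 10000 -quiet -fast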




AFNI program: AnalyzeTrace

Usage: A program to analyze SUMA (and perhaps AFNI's) stack output
       The program can detect functions that return with RETURN without
       bothering to go on the stack.
   AnalyzeTrace [options] FILE 
       where FILE is obtained by redirecting program's trace output.
Optional Param:
   -max_func_lines N: Set the maximum number of code lines before a function
                      returns. Default is no limit.
   -suma_c: FILE is a SUMA_*.c file. It is analyzed for functions that use SUMA_ RETURN 
            (typo on purpose to avoid being caught here) without ENTRY
       Note: The file for this program has special strings (in comments at times)
            to avoid false alarms when processing it.
            
   -max_err MAX_ERR: Stop after encountering MAX_ERR errors
                     reported in log. Default is 5.
                     Error key terms are:
                     'Error', 'error', 'corruption'
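   For example, a typical session might look like this (file names
   are hypothetical): capture a trace, then analyze it, stopping
   after 10 logged errors:

      suma -spec lh.spec -sv anat+orig > TraceFile
      AnalyzeTrace -max_err 10 TraceFile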

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: CompareSurfaces

   Usage:    CompareSurfaces 
             -spec 
             -hemi 
             -sv1 
             -sv2  
             [-prefix ]

   NOTE: This program is now superseded by SurfToSurf

   This program calculates the distance from each node in Surface 1 (S1) to Surface 2 (S2).
   The distances are computed along the local surface normal at each node in S1.
   S1 and S2 are the first and second surfaces encountered in the spec file, respectively.

   -spec : File containing surface specification. This file is typically 
                      generated by @SUMA_Make_Spec_FS (for FreeSurfer surfaces) or 
                      @SUMA_Make_Spec_SF (for SureFit surfaces).
   -hemi : specify the hemisphere being processed 
   -sv1 :volume parent BRIK for first surface 
   -sv2 :volume parent BRIK for second surface 

Optional parameters:
   [-prefix ]: Prefix for distance and node color output files.
                           Existing file will not be overwritten.
   [-onenode ]: output results for node index only. 
                       This option is for debugging.
   [-noderange  ]: output results from node istart to node istop only. 
                                  This option is for debugging.
   NOTE: -noderange and -onenode are mutually exclusive
   [-nocons]: Skip mesh orientation consistency check.
              This speeds up the start time so it is useful
              for debugging runs.
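   For example, a minimal sketch (file names and the hemisphere
   label are illustrative):

      CompareSurfaces -spec subj_lh.spec -hemi l \
                      -sv1 anat1+orig -sv2 anat2+orig \
                      -prefix s1_to_s2_dist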

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

   For more help: http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm


   If you can't get help here, please get help somewhere.
++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009


    Shruti Japee LBC/NIMH/NIH shruti@codon.nih.gov Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 




AFNI program: ConvertDset
Usage: 
  ConvertDset -o_TYPE -input DSET [-i_TYPE] [-prefix OUT_PREF]
  Converts a surface dataset from one format to another.
  Mandatory parameters:
     -o_TYPE: TYPE of output datasets
              where TYPE is one of:
           niml_asc (or niml): for ASCII niml format.
           niml_bi:            for BINARY niml format.
           1D:                 for AFNI's 1D ascii format.
           1Dp:                like 1D but with no comments
                               or other 1D formatting gimmicks.
           1Dpt:               like 1Dp but transpose the output.
           gii:                GIFTI format (default encoding).
           gii_asc:            GIFTI format with ascii DataArrays.
           gii_b64:            GIFTI format with Base 64 encoded DataArrays.
           gii_b64gz:          GIFTI format with Base64 encoding and gzip compression.
         For stderr and stdout output use one of:
           1D_stderr, 1D_stdout, niml_stderr, or niml_stdout, 
           1Dp_stdout, 1Dp_stderr, 1Dpt_stdout, 1Dpt_stderr
         Strictly speaking, this parameter is not mandatory: the program
         can use the extension on the prefix to guess the output
         format. If the prefix has no extension and -o_TYPE is not
         specified, then the output format is the same as that of the
         input.
     -input DSET: Input dataset to be converted.
                  See more on input datasets below.
  Optional parameters:
     -add_node_index: Add a node index element if one does not exist
                      in the input dset. With this option, the indexing
                      is assumed to be implicit (0,1,2,3,... for rows
                      0,1,2,3,...). If that is not the case, use the
                      -node_index_1D option below. 
     -node_index_1D INDEX.1D: Specify file containing node indices
                              Use this to provide node indices with 
                              a .1D dset. In many cases for .1D data
                              this option is DSET.1D'[0]'
     -node_select_1D MASK.1D: Specify the nodes you want to keep in the
                              output.
     -prepend_node_index_1D: Add a node index column to the data, rather
                             than keep it as part of the metadata.
     -pad_to_node max_index: Output a full dset from node 0 
                            to node max_index (a total of 
                            max_index + 1 nodes). Nodes that
                            get no value from input DSET are
                             assigned a value of 0.
                             Note that padding is done at the
                             very end.

     -i_TYPE: TYPE of input datasets
              where TYPE is one of:
           niml: for niml data sets.
           1D:   for AFNI's 1D ascii format.
           dx: OpenDX format, expects to work on 1st
               object only.
           If no format is specified, the program will 
           guess using the extension first and the file
            content next. However, examining the content might 
            slow things down considerably.
     -prefix OUT_PREF: Output prefix for data set.
                       Default is something based
                       on the input prefix.
  Notes:
     -This program will not overwrite pre-existing files.
     -The new data set is given a new idcode.

  SUMA dataset input options:
      -input DSET: Read DSET as input.
                   In programs accepting multiple input datasets
                   you can use -input DSET1 -input DSET2 or 
                   -input DSET1 DSET2 ...
       NOTE: Selecting subsets of a dataset:
             Much like in AFNI, you can select subsets of a dataset
             by adding qualifiers to DSET.
           Append #SEL# to select certain nodes.
           Append [SEL] to select certain columns.
           Append {SEL} to select certain rows.
           The format of SEL is the same as in AFNI, see section:
           'INPUT DATASET NAMES' in 3dcalc -help for details.
           Append [i] to get the node index column from
                      a niml formatted dataset.
           *  SUMA does not preserve the selection order 
              for any of the selectors.
              For example:
              dset[44,10..20] is the same as dset[10..20,44]


 SUMA mask options:
      -n_mask INDEXMASK: Apply operations to nodes listed in
                            INDEXMASK  only. INDEXMASK is a 1D file.
      -b_mask BINARYMASK: Similar to -n_mask, except that the BINARYMASK
                          1D file contains 1 for nodes to filter and
                          0 for nodes to be ignored.
                          The number of rows in BINARYMASK must be
                          equal to the number of nodes forming the
                          surface.
      -c_mask EXPR: Masking based on the result of EXPR. 
                    Use like afni's -cmask options. 
                    See explanation in 3dmaskdump -help 
                    and examples in output of 3dVol2Surf -help
      NOTE: Unless stated otherwise, if n_mask, b_mask and c_mask 
            are used simultaneously, the resultant mask is the intersection
            (AND operation) of all masks.


   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

Examples:
1-   Plot a node's time series from a niml dataset:
     ConvertDset -input DemoSubj_EccCntavir.niml.dset'#5779#' \
                 -o_1D_stdout | 1dplot -nopush -stdin 
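2-   Convert a 1D dataset to GIFTI format (a sketch; file names are
     hypothetical, and -o_gii follows the -o_TYPE convention above):
     ConvertDset -o_gii -input data.1D -add_node_index -prefix data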

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

    Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov    Thu Apr  8 16:15:02 EDT 2004




AFNI program: ConvertSurface

Usage:  ConvertSurface <-i_TYPE inSurf> <-o_TYPE outSurf> 
                       [<-sv SurfaceVolume [VolParam for sf surfaces]>] 
                       [-tlrc] [-MNI_rai/-MNI_lpi][-xmat_1D XMAT]
    reads in a surface and writes it out in another format.
    Note: This is not a general-purpose conversion program. 
    Only fields pertinent to SUMA are preserved.
 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
    -ipar_TYPE ParentSurf specifies the parent surface. Only used
            when -o_fsp is used, see -o_TYPE options.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
       fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
            the name of the parent surface for the patch.
            using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
             the default encoding is ASCII, otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.

  Alternate GIFTI output qualifiers:
     You can alternately set gifti data arrays encoding with:
        -xml_ascii: For ASCII  (human readable)
        -xml_b64:   For Base64 (more compact)
        -xml_b64gz: For Base64 GZIPPED (most compact, needs gzip libraries)
      If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
      the default is -xml_ascii, otherwise it is -xml_b64

    -orient_out STR: Output coordinates in STR coordinate system. 
                      STR is a three character string following AFNI's 
                      naming convention. The program assumes that the   
                      native orientation of the surface is RAI, unless you 
                      use the -MNI_lpi option. The coordinate transformation
                      is carried out last, just before writing the surface 
                      to disk.
    -native: Write the output surface in the coordinate system native to its
             format.
             Option makes sense for BrainVoyager, Caret/SureFit and 
             FreeSurfer surfaces.
              But the implementation for Caret/SureFit is not finished yet 
             (ask if needed).
    -make_consistent: Check the consistency of the surface's mesh (triangle
                      winding). This option will write out a new surface 
                      even if the mesh was consistent.
                      See SurfQual -help for mesh checks.
    -radial_to_sphere rad: Push each node along the center-->node direction
                           until |center-->node| = rad.
    -acpc: Apply acpc transform (which must be in acpc version of 
        SurfaceVolume) to the surface vertex coordinates. 
        This option must be used with the -sv option.
    -tlrc: Apply Talairach transform (which must be a talairach version of 
        SurfaceVolume) to the surface vertex coordinates. 
        This option must be used with the -sv option.
    -MNI_rai/-MNI_lpi: Apply Andreas Meyer-Lindenberg's transform to turn 
        AFNI tlrc coordinates (RAI) into MNI coord space 
        in RAI (with -MNI_rai) or LPI (with -MNI_lpi).
        NOTE: -MNI_lpi option has not been tested yet (I have no data
        to test it on). Verify alignment with AFNI and please report
        any bugs.
        This option can be used without the -tlrc option.
        But that assumes that surface nodes are already in
        AFNI RAI tlrc coordinates.
   NOTE: The vertex coordinates of the input surfaces are only
         transformed if -sv option is used. If you do transform surfaces, 
         take care not to load them into SUMA with another -sv option.

    -patch2surf: Change a patch, defined here as a surface with a mesh that
                 uses only a subset of the full nodelist, to a surface
                 where all the nodes in nodelist are used in the mesh.
                 Note that node indices will no longer correspond between
                 the input patch and the output surface.

    Options for applying arbitrary affine transform:
    [xyz_new] = [Mr] * [xyz_old - cen] + D + cen
    -xmat_1D mat: Apply the transformation specified in the 1D file mat.1D
                  to the surface's coordinates.
                  [mat] = [Mr][D] is of the form:
                  r11 r12 r13 D1
                  r21 r22 r23 D2
                  r31 r32 r33 D3
    -ixmat_1D mat: Same as -xmat_1D except that mat is replaced by inv(mat)
    -xcenter x y z: Use vector cen = [x y z]' for rotation center.
                    Default is cen = [0 0 0]'
    -polar_decomp: Apply polar decomposition to mat and preserve
                   orthogonal component and shift only. 
                   For more information, see cat_matvec's -P option.
                   This option can only be used in conjunction with
                   -xmat_1D
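    For example, a sketch of -xmat_1D usage (file and surface names
    are hypothetical): a mat.1D holding an identity rotation [Mr]
    and a 5 mm shift along the first axis,
                   1 0 0 5
                   0 1 0 0
                   0 0 1 0
    could be applied with:
       ConvertSurface -i lh.pial.asc -o lh.pial_shift.gii \
                      -xmat_1D shift.1D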

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

		 Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 	 Wed Jan  8 13:44:29 EST 2003 



AFNI program: ConvexHull

Usage: A program to find the convex hull of a set of points.
  This program is a wrapper for the Qhull program.
  see copyright notice by running suma -sources.

  ConvexHull  
     usage 1: < -input VOL >
              < -isoval V | -isorange V0 V1 | -isocmask MASK_COM >
              [<-xform XFORM>]
     usage 2: < i_TYPE input surface >
              [<-sv SURF_VOL>]
     usage 3: < -input_1D XYZ >
     common optional:
              [< -o_TYPE PREFIX>]
              [< -debug DBG >]

  Mandatory parameters, choose one of three usage modes:
  Usage 1:
     You must use one of the following two options:
     -input VOL: Input AFNI (or AFNI readable) volume.
     You must use one of the following iso* options:
     -isoval V: Create isosurface where volume = V
     -isorange V0 V1: Create isosurface where V0 <= volume < V1
     -isocmask MASK_COM: Create isosurface where MASK_COM != 0
        For example: -isocmask '-a VOL+orig -expr (1-bool(a-V))' 
        is equivalent to using -isoval V. 
     NOTE: -isorange and -isocmask are only allowed with -xform mask
            See -xform below for details.

  Usage 2:
     -i_TYPE SURF:  Use the nodes of a surface model
                    for input. See help for i_TYPE usage
                    below.

  Usage 3:
     -input_1D XYZ: Construct the convex hull of the points
                    contained in 1D file XYZ. If the file has
                    more than 3 columns, use AFNI's [] selectors
                    to specify the XYZ columns.

  Optional Parameters:
     Usage 1 only:
     -xform XFORM:  Transform to apply to volume values
                    before searching for sign change
                    boundary. XFORM can be one of:
            mask: values that meet the iso* conditions
                  are set to 1. All other values are set
                  to -1. This is the default XFORM.
            shift: subtract V from the dataset and then 
                   search for 0 isosurface. This has the
                   effect of constructing the V isosurface
                   if your dataset has a continuum of values.
                   This option can only be used with -isoval V.
            none: apply no transforms. This assumes that
                  your volume has a continuum of values 
                  from negative to positive and that you
                   are seeking the 0 isosurface.
                  This option can only be used with -isoval 0.
     Usage 2 only:
     -sv SURF_VOL: Specify a surface volume which contains
                   a transform to apply to the surface node
                   coordinates prior to constructing the 
                   convex hull.
     All Usage:
     -o_TYPE PREFIX: prefix of output surface.
        where TYPE specifies the format of the surface
        and PREFIX is, well, the prefix.
        TYPE is one of: fs, 1d (or vec), sf, ply.
        Default is: -o_ply 
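     For example, a minimal Usage 1 sketch (the dataset name is
     hypothetical): build the convex hull of the isosurface where
     the volume equals 1, writing the default PLY format:

        ConvexHull -input mask+orig -isoval 1 -o_ply hull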

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
       fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
            the name of the parent surface for the patch.
            using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
             the default encoding is ASCII, otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.


   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: CreateIcosahedron

Usage: CreateIcosahedron [-rad r] [-rd recDepth] [-ld linDepth] 
                         [-ctr ctr] [-prefix fout] [-help]

   -rad r: size of icosahedron. (optional, default 100)

   -rd recDepth: recursive (binary) tessellation depth for icosahedron 
       (optional, default:3) 
       (recommended to approximate the number of nodes in the brain: 6)
       let rd2 = 2 * recDepth
       Nvert = 2 + 10 * 2^rd2
       Ntri  = 20 * 2^rd2
       Nedge = 30 * 2^rd2

   -ld linDepth: number of edge divides for linear icosahedron tessellation
       (optional, default uses binary tessellation).
       Nvert = 2 + 10 * linDepth^2
       Ntri  = 20 * linDepth^2
       Nedge = 30 * linDepth^2
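       Worked example of the formulas above: -rd 3 gives rd2 = 6, so
       Nvert = 2 + 10 * 2^6 = 642, Ntri = 20 * 2^6 = 1280, and
       Nedge = 30 * 2^6 = 1920. The same counts follow from -ld 8,
       since linDepth = 2^recDepth yields identical Nvert/Ntri/Nedge.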

   -nums: output the number of nodes (vertices), triangles, edges, 
          total volume and total area then quit

   -nums_quiet: same as -nums but less verbose. For the machine in you.

   -ctr ctr: coordinates of center of icosahedron. 
       (optional, default 0,0,0)

   -tosphere: project nodes to sphere.

   -prefix fout: prefix for output files. 
       (optional, default CreateIco)
                 The surface is written out in FreeSurfer's .asc
                 format by default. To change that, include a
                 valid extension to the prefix such as: fout.gii 

   -help: help message

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009


       Brenna D. Argall LBC/NIMH/NIH bargall@codon.nih.gov 
       Ziad S. Saad     SSC/NIMH/NIH saadz@mail.nih.gov



AFNI program: DTIStudioFibertoSegments
Usage: DTIStudioFibertoSegments [options] dataset
Convert a DTIStudio Fiber file to a SUMA segment file
Options:
  -output / -prefix = name of the output file (not an AFNI dataset prefix)
    the default output name will be rawxyzseg.dat
  -swap = swap bytes in data
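
For example (file names are hypothetical):

  DTIStudioFibertoSegments -prefix fiberseg.dat fibers.fib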



AFNI program: Dimon

Dimon - monitor real-time acquisition of DICOM image files
    (or GEMS 5.x I-files, as 'Imon')

    This program is intended to be run during a scanning session
    on a scanner, to monitor the collection of image files.  The
    user will be notified of any missing slice or any slice that
    is acquired out of order.

    When collecting DICOM files, it is recommended to run this
    program once per run, because it is easier to specify the input
    file pattern for a single run (it may be very difficult to
    predict the form of input filenames for runs that have not yet
    occurred).

    This program can also be used off-line (away from the scanner)
    to organize the files, run by run.  If the DICOM files have
    a correct DICOM 'image number' (0x0020 0013), then Dimon can
    use the information to organize the sequence of the files, 
    particularly when the alphabetization of the filenames does
    not match the sequencing of the slice positions.  This can be
    used in conjunction with the '-GERT_Reco' option, which will
    write a script that can be used to create AFNI datasets.

    See the '-dicom_org' option, under 'other options', below.

    If no -quit option is provided, the user should terminate the
    program when it is done collecting images according to the
    input file pattern.

    Dimon can be terminated using ctrl-c.

  ---------------------------------------------------------------
  realtime notes for running afni remotely:

    - The afni program must be started with the '-rt' option to
      invoke the realtime plugin functionality.

    - If afni is run remotely, then AFNI_TRUSTHOST will need to be
      set on the host running afni.  The value of that variable
      should be set to the IP address of the host running Dimon.
      This may be set as an environment variable, or via the .afnirc
      startup file (a csh example appears below).

    - The typical default security on a Linux system will prevent
      Dimon from communicating with afni on the host running afni.
      The iptables firewall service on afni's host will need to be
      configured to accept the communication from the host running
      Dimon, or it (iptables) will need to be turned off.
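
      For example, in csh on the host running afni (the address
      shown is hypothetical):

          setenv AFNI_TRUSTHOST 192.168.0.5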
  ---------------------------------------------------------------
  usage: Dimon [options] -infile_prefix PREFIX
     OR: Dimon [options] -infile_pattern "PATTERN"
     OR: Dimon [options] -infile_list FILES.txt

  ---------------------------------------------------------------
  examples:

  A. no real-time options:

    Dimon -infile_prefix   s8912345/i
    Dimon -infile_pattern 's8912345/i*'
    Dimon -infile_list     my_files.txt
    Dimon -help
    Dimon -infile_prefix   s8912345/i  -quit
    Dimon -infile_prefix   s8912345/i  -nt 120 -quit
    Dimon -infile_prefix   s8912345/i  -debug 2
    Dimon -infile_prefix   s8912345/i  -dicom_org -GERT_Reco -quit

  A2. investigate a list of files: 
    Dimon -infile_pattern '*' -dicom_org -show_sorted_list

  B. for GERT_Reco:

    Dimon -infile_prefix run_003/image -GERT_Reco -quit
    Dimon -infile_prefix run_003/image -dicom_org -GERT_Reco -quit
    Dimon -infile_prefix 'run_00[3-5]/image' -GERT_Reco -quit
    Dimon -infile_prefix anat/image -GERT_Reco -quit
    Dimon -infile_prefix epi_003/image -dicom_org -quit   \
          -GERT_Reco -gert_to3d_prefix run3 -gert_nz 42

  C. with real-time options:

    Dimon -infile_prefix s8912345/i -rt 

    Dimon -infile_pattern 's*/i*' -rt 
    Dimon -infile_pattern 's*/i*' -rt -nt 120
    Dimon -infile_pattern 's*/i*' -rt -quit

    ** detailed real-time example:

    Dimon                                    \
       -infile_pattern 's*/i*'               \
       -rt -nt 120                           \
       -host some.remote.computer            \
       -rt_cmd "PREFIX 2005_0513_run3"     \
       -num_slices 32                        \
       -sleep_frac 1.1                       \
       -quit                                 

    This example scans for input files matching 's*/i*', expects
    120 repetitions (TRs), and invokes the real-time processing,
    sending data to a computer called some.remote.computer
    (where afni is running, and which considers THIS computer to
    be trusted - see the AFNI_TRUSTHOST environment variable).
    The time to wait for new data is 1.1*TR, and 32 slices are
    required for a volume.

    Note that -num_slices can be important in a real-time setup,
    as scanners do not always write the slices in order.   Slices
    from volume #1 can appear on disk before all slices from volume
    #0, in which case Dimon might determine an incorrect number of
    slices per volume.

  -------------------------------------------
    Multiple DRIVE_AFNI commands are passed through '-drive_afni'
    options, one requesting to open an axial image window, and
    another requesting an axial graph, with 160 data points.

    Also, '-drive_wait' options may be used like '-drive_afni',
    except that the real-time plugin will wait until the first new
    volume is processed before executing those DRIVE_AFNI commands.
    One advantage of this is opening an image window for a dataset
    _after_ it is loaded, allowing afni to appropriately set the
    window size.

    See README.driver for acceptable DRIVE_AFNI commands.

    Also, multiple commands specific to the real-time plugin are
    passed via '-rt_cmd' options.  The PREFIX command sets the
    prefix for the datasets output by afni.  The GRAPH_XRANGE and
    GRAPH_YRANGE commands set the graph dimensions for the 3D
    motion correction graph (only).  And the GRAPH_EXPR command
    is used to replace the 6 default motion correction graphs with
    a single graph, according to the given expression, the square
    root of the average squared entry of the 3 rotation params,
    roll, pitch and yaw, ignoring the 3 shift parameters, dx, dy
    and dz.

    See README.realtime for acceptable realtime plugin commands.

  example D (drive_afni):

    Dimon                                                   \
       -infile_pattern 's*/i*.dcm'                         \
       -nt 160                                             \
       -rt                                                 \
       -host some.remote.computer.name                     \
       -drive_afni 'OPEN_WINDOW axialimage'                \
       -drive_afni 'OPEN_WINDOW axialgraph pinnum=160'     \
       -rt_cmd 'PREFIX eat.more.cheese'                    \
       -rt_cmd 'GRAPH_XRANGE 160'                          \
       -rt_cmd 'GRAPH_YRANGE 1.02'                         \
       -rt_cmd 'GRAPH_EXPR sqrt(d*d+e*e+f*f)'

  -------------------------------------------

  example E (drive_wait):

    Close windows and re-open them after data has arrived.

    Dimon                                                    \
       -infile_prefix EPI_run1/8HRBRAIN                      \
       -rt                                                   \
       -drive_afni 'CLOSE_WINDOW axialimage'                 \
       -drive_afni 'CLOSE_WINDOW sagittalimage'              \
       -drive_wait 'OPEN_WINDOW axialimage geom=+20+20'      \
       -drive_wait 'OPEN_WINDOW sagittalimage geom=+520+20'  \
       -rt_cmd 'PREFIX brie.would.be.good'

  -------------------------------------------
  example F (for testing complete real-time system):

    Use Dimon to send volumes to afni's real-time plugin, simulating
    TR timing with Dimon's -pause option.  Motion parameters and ROI
    averages are then sent on to serial_helper (for subject feedback),
    run in test mode (so no actual serial communication).
    
    a. Start afni in real-time mode, but first set some environment
       variables to make it explicit what might be set in the plugin.
       Not one of these variables is actually necessary, but they 
       make the process more scriptable.
    
       See Readme.environment for details on any variable.
    
           setenv AFNI_TRUSTHOST              localhost
           setenv AFNI_REALTIME_Registration  3D:_realtime
           setenv AFNI_REALTIME_Graph         Realtime
           setenv AFNI_REALTIME_MP_HOST_PORT  localhost:53214
           setenv AFNI_REALTIME_SEND_VER      YES
           setenv AFNI_REALTIME_SHOW_TIMES    YES
           setenv AFNI_REALTIME_Mask_Vals     ROI_means
    
           afni -rt
    
       Note: in order to send ROI averages per TR, the user must
             choose a mask in the real-time plugin.
    
    b. Start serial_helper in testing mode (i.e. get debug output
       and block serial output).
    
           serial_helper -no_serial -debug 3
    
    c. Run Dimon from the AFNI_data3 directory, in real-time mode,
       using a 2 second pause to simulate the TR.  Dicom images are
       under EPI_run1, and the files start with 8HRBRAIN.
    
           Dimon -rt -pause 2000 -infile_prefix EPI_run1/8HRBRAIN
    
       Note that Dimon can be run many times at this point.

    ------------------------------

    c2. alternately, set some env vars via Dimon

         Dimon -rt -pause 2000 -infile_prefix EPI_run1/8          \
           -drive_afni 'SETENV AFNI_REALTIME_Mask_Vals=ROI_means' \
           -drive_afni 'SETENV AFNI_REALTIME_SEND_VER=Yes'        \
           -drive_afni 'SETENV AFNI_REALTIME_SHOW_TIMES=Yes'

       Note that plugout_drive can also be used to set vars at
       run-time, though plugouts must be enabled to use it.

  ---------------------------------------------------------------
  notes:

    - Once started, unless the '-quit' option is used, this
      program exits only when a fatal error occurs (single
      missing or out of order slices are not considered fatal).
      Otherwise, it keeps waiting for new data to arrive.

      With the '-quit' option, the program will terminate once
      there is a significant (~2 TR) pause in acquisition.

    - To terminate this program, use ctrl-c.

  ---------------------------------------------------------------
  main options:

    For DICOM images, either -infile_pattern or -infile_prefix
    is required.

    -infile_pattern PATTERN : specify pattern for input files

        e.g. -infile_pattern 'run1/i*.dcm'

        This option is used to specify a wildcard pattern matching
        the names of the input DICOM files.  These files should be
        sorted in the order that they are to be assembled, i.e.
        when the files are sorted alphabetically, they should be
        sequential slices in a volume, and the volumes should then
        progress over time (as with the 'to3d' program).

        The pattern for this option must be within quotes, because
        it will be up to the program to search for new files (that
        match the pattern), not the shell.

    -infile_prefix PREFIX   : specify prefix matching input files

        e.g. -infile_prefix run1/i

        This option is similar to -infile_pattern.  By providing
        only a prefix, the user need not use wildcard characters
        with quotes.  Using PREFIX with -infile_prefix is
        equivalent to using 'PREFIX*' with -infile_pattern (note
        the needed quotes).

        Note that it may not be a good idea to use, say 'run1/'
        for the prefix, as there might be a readme file under
        that directory.

        Note also that it is necessary to provide a '/' at the
        end, if the prefix is a directory (e.g. use run1/ instead
        of simply run1).

    -infile_list MY_FILES.txt : filenames are in MY_FILES.txt

        e.g. -infile_list subject_17_files

        If the user would rather specify a list of DICOM files to
        read, those files can be enumerated in a text file, the
        name of which would be passed to the program.

  ---------------------------------------------------------------
  real-time options:

    -rt                : specify to use the real-time facility

        With this option, the user tells 'Dimon' to use the real-time
        facility, passing each volume of images to an existing
        afni process on some machine (as specified by the '-host'
        option).  Whenever a new volume is acquired, it will be
        sent to the afni program for immediate update.

        Note that afni must also be started with the '-rt' option
        to make use of this.

        Note also that the '-host HOSTNAME' option is not required
        if afni is running on the same machine.

    -drive_afni CMND   : send 'drive afni' command, CMND

        e.g.  -drive_afni 'OPEN_WINDOW axialimage'

        This option is used to pass a single DRIVE_AFNI command
        to afni.  For example, 'OPEN_WINDOW axialimage' will open
        such an axial view window on the afni controller.

        Note: the command 'CMND' must be given in quotes, so that
              the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.driver for more details.

    -drive_wait CMND   : send delayed 'drive afni' command, CMND

        e.g.  -drive_wait 'OPEN_WINDOW axialimage'

        This option is used to pass a single DRIVE_AFNI command
        to afni.  For example, 'OPEN_WINDOW axialimage' will open
        such an axial view window on the afni controller.

        This has the same effect as '-drive_afni', except that
        the real-time plugin will wait until the next completed
        volume to execute the command.

        An example of where this is useful is so that afni 'knows'
        about a new dataset before opening the given image window,
        allowing afni to size the window appropriately.

    -host HOSTNAME     : specify the host for afni communication

        e.g.  -host mycomputer.dot.my.network
        e.g.  -host 127.0.0.127
        e.g.  -host mycomputer
        the default host is 'localhost'

        The specified HOSTNAME represents the machine that is
        running afni.  Images will be sent to afni on this machine
        during the execution of 'Dimon'.

        Note that the environment variable AFNI_TRUSTHOST must be
        set on the machine running afni.  Set this equal to the
        name of the machine running Dimon (so that afni knows to
        accept the data from the sending machine).

    -pause TIME_IN_MS : pause after each new volume

        e.g.  -pause 200

        In some cases, the user may wish to slow down a real-time
        process.  This option will cause a delay of TIME_IN_MS
        milliseconds after each volume is found.

    -rev_byte_order   : pass the reverse of the BYTEORDER to afni

        Reverse the byte order that is given to afni.  In case the
        detected byte order is not what is desired, this option
        can be used to reverse it.

        See the (obsolete) '-swap' option for more details.

    -rt_cmd COMMAND   : send COMMAND(s) to realtime plugin

        e.g.  -rt_cmd 'GRAPH_XRANGE 120'
        e.g.  -rt_cmd 'GRAPH_XRANGE 120 \n GRAPH_YRANGE 2.5'

        This option is used to pass commands to the realtime
        plugin.  For example, 'GRAPH_XRANGE 120' will set the
        x-scale of the motion graph window to 120 (repetitions).

        Note: the command 'COMMAND' must be given in quotes, so
        that the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.realtime for more details.

    -show_sorted_list  : display -dicom_org info and quit

        After the -dicom_org has taken effect, display the list
        of run index, image index and filenames that results.
        This option can be used as a simple review of the files
        under some directory tree, say.

        See the -show_sorted_list example under example A2.

    -sleep_init MS    : time to sleep between initial data checks

        e.g.  -sleep_init 500

        While Dimon searches for the first volume, it checks for
        files, pauses, checks, pauses, etc., until some are found.
        By default, the pause is approximately 3000 ms.

        This option, given in milliseconds, will override that
        default time.

        A small time makes the program seem more responsive.  But
        if the time is too small, and no new files are seen on
        successive checks, Dimon may think the first volume is
        complete (with too few slices).

        If the minimum time it takes for the scanner to output
        more slices is T, then 1/2 T is a reasonable -sleep_init
        time.  Note: that minimum T had better be reliable.

        The example shows a sleep time of half of a second.

    -sleep_vol MS     : time to sleep between volume checks

        e.g.  -sleep_vol 1000

        When Dimon finds some volumes and there still seems to be
        more to acquire, it sleeps for a while (and outputs '.').
        This option can be used to specify the amount of time it
        sleeps before checking again.  The default is 1.5*TR.

        The example shows a sleep time of one second.

    -sleep_frac FRAC  : new data search, fraction of TR to sleep

        e.g.  -sleep_frac 0.5

        When Dimon finds some volumes and there still seems to be
        more to acquire, it sleeps for a while (and outputs '.').
        This option can be used to specify the amount of time it
        sleeps before checking again, as a fraction of the TR.
        The default is 1.5 (as the fraction).

        The example shows a sleep time of one half of a TR.

    -swap  (obsolete) : swap data bytes before sending to afni

        Since afni may be running on a different machine, the byte
        order may differ there.  This option will force the bytes
        to be reversed, before sending the data to afni.

        ** As of version 3.0, this option should not be necessary.
           'Dimon' detects the byte order of the image data, and then
           passes that information to afni.  The realtime plugin
           will (now) decide whether to swap bytes in the viewer.

           If for some reason the user wishes to reverse the order
           from what is detected, '-rev_byte_order' can be used.

    -zorder ORDER     : slice order over time

        e.g. -zorder alt
        e.g. -zorder seq
        the default is 'alt'

        This option allows the user to alter the slice
        acquisition order in real-time mode, similar to the slice
        pattern of the '-sp' option.  The main differences are:
            o  only two choices are presently available
            o  the syntax is intentionally different (from that
               of 'to3d' or the '-sp' option)

        ORDER values:
            alt   : alternating in the Z direction (over time)
            seq   : sequential in the Z direction (over time)

  ---------------------------------------------------------------
  other options:

    -debug LEVEL       : show debug information during execution

        e.g.  -debug 2
        the default level is 1, the domain is [0,3]
        the '-quiet' option is equivalent to '-debug 0'

    -dicom_org         : organize files before other processing

        e.g.  -dicom_org

        When this flag is set, the program will attempt to read in
        all files subject to -infile_prefix or -infile_pattern,
        determine which are DICOM image files, and organize them
        into an ordered list of files per run.

        This may be necessary since the alphabetized list of files
        will not always match the sequential slice and time order
        (which means, for instance, that '*.dcm' may not list
        files in the correct order).

        In this case, if the DICOM files contain a valid 'image
        number' field (0x0020 0013), then they will be sorted
        before any further processing is done.

        Notes:

        - This does not work in real-time mode, since the files
          must all be organized before processing begins.

        - The DICOM images need valid 'image number' fields for
          organization to be possible (DICOM field 0x0020 0013).

        - This works well in conjunction with '-GERT_Reco', to
          create a script to make AFNI datasets.  There will be
          a single file per run that contains the image filenames
          for that run (in order).  This is fed to 'to3d'.

        - This may be used with '-save_file_list', to store the
          list of sorted filenames in an output file.

        - The images can be sorted in reverse order using the
          option, -rev_org_dir.

    -epsilon EPSILON   : specify EPSILON for 'equality' tests

        e.g.  -epsilon 0.05
        the default is 0.01

        When checking z-coordinates or differences between them
        for 'equality', a check of (difference < EPSILON) is used.
        This option lets the user specify that cutoff value.

    -help              : show this help information

    -hist              : display a history of program changes

    -nice INCREMENT    : adjust the nice value for the process

        e.g.  -nice 10
        the default is 0, and the maximum is 20
        a superuser may use down to the minimum of -19

        A positive INCREMENT to the nice value of a process will
        lower its priority, allowing other processes more CPU
        time.

    -nt VOLUMES_PER_RUN : set the number of time points per run

        e.g.  -nt 120

        With this option, if a run stalls before the specified
        VOLUMES_PER_RUN is reached (notably including the first
        run), the user will be notified.

        Without this option, Dimon will compute the expected number
        of time points per run based on the first run (and will
        allow the value to increase based on subsequent runs).
        Therefore Dimon would not detect a stalled first run.

    -num_slices SLICES  : slices per volume must match this

        e.g.  -num_slices 34

        Setting this puts a restriction on the first volume
        search, requiring the number of slices found to match.

        This prevents odd failures at the scanner, which does not
        necessarily write out all files for the first volume
        before writing some file from the second.

    -quiet             : show only errors and final information

    -quit              : quit when there is no new data

        With this option, the program will terminate once a delay
        in new data occurs.  This is most appropriate to use when
        the image files have already been collected.

    -rev_org_dir       : reverse the sort in dicom_org

        e.g.  -rev_org_dir

        With the -dicom_org option, the program will attempt to
        organize the DICOM files with respect to run and image
        numbers.  Normally that is an ascending sort.  With this
        option, the sort is reversed.

        see also: -dicom_org

    -rev_sort_dir      : reverse the alphabetical sort on names

        e.g.  -rev_sort_dir

        With this option, the program will sort the input files
        in descending order, as opposed to ascending order.

    -save_file_list FILENAME : store the list of sorted files

        e.g.  -save_file_list dicom_file_list

        With this option the program will store the list of files,
        sorted via -dicom_org, in the output file, FILENAME.  The
        user may wish to have a separate list of the files.

        Note: this option requires '-dicom_org'.

    -sort_by_num_suffix : sort files according to numerical suffix

        e.g.  -sort_by_num_suffix

        With this option, the program will sort the input files
        according to the trailing '.NUMBER' in the filename.  This
        NUMBER will be evaluated as a positive integer, not via
        an alphabetic sort (so numbers need not be zero-padded).

        This is intended for use on interleaved files, which are
        properly enumerated, but only in the filename suffix.
        Consider a set of names for a single, interleaved volume:

          im001.1  im002.3  im003.5  im004.7  im005.9  im006.11
          im007.2  im008.4  im009.6  im010.8  im011.10

        Here the images were named by 'time' of acquisition, and
        were interleaved.  So an alphabetic sort is not along the
        slice position (z-order).  However the slice ordering was
        encoded in the suffix of the filenames.

        NOTE: the suffix numbers must be unique

    -start_file S_FILE : have Dimon process starting at S_FILE

        e.g.  -start_file 043/I.901

        With this option, any earlier I-files will be ignored
        by Dimon.  This is a good way to start processing a later
        run, if it is desired not to look at the earlier data.

        In this example, all files in directories 003 and 023
        would be ignored, along with everything in 043 up through
        I.900.  So 043/I.901 might be the first file in run 2.

    -tr TR             : specify the TR, in seconds

        e.g.  -tr 5.0

        In the case where volumes are acquired in clusters, the TR
        is different than the time needed to acquire one volume.
        But some scanners incorrectly store the latter time in the
        TR field.
        
        This option allows the user to override what is found in
        the image files, which is particularly useful in real-time
        mode, though it is also important for the TR to be stored
        properly in the final EPI datasets.

        Here, TR is in seconds.

    -use_imon          : revert to Imon functionality

    -version           : show the version information

  ---------------------------------------------------------------
  GERT_Reco options:

    -GERT_Reco        : output a GERT_Reco_dicom script

        Create a script called 'GERT_Reco_dicom', similar to the
        one that Ifile creates.  This script may be run to create
        the AFNI datasets corresponding to the I-files.

    -gert_filename FILENAME : save GERT_Reco as FILENAME

        e.g. -gert_filename gert_reco_anat

        This option can be used to specify the name of the script,
        as opposed to using GERT_Reco_dicom.

        By default, if the script is generated for a single run,
        it will be named GERT_Reco_dicom_NNN, where 'NNN' is the
        run number found in the image files.  If it is generated
        for multiple runs, then the default is to name it simply
        GERT_Reco_dicom.

    -gert_nz NZ        : specify the number of slices in a mosaic

        e.g. -gert_nz 42

        Dimon happens to be able to write valid to3d commands
        for mosaic (volume) data, even though it is intended for
        slices.  In the case of mosaics, the user must specify the
        number of slices in an image file, or any GERT_Reco script
        will specify nz as 1.

    -gert_outdir OUTPUT_DIR  : set output directory in GERT_Reco

        e.g. -gert_outdir subject_A7
        e.g. -od subject_A7
        the default is '-gert_outdir .'

        This will add '-od OUTPUT_DIR' to the @RenamePanga command
        in the GERT_Reco script, creating new datasets in the
        OUTPUT_DIR directory, instead of the 'afni' directory.

    -sp SLICE_PATTERN  : set output slice pattern in GERT_Reco

        e.g. -sp alt-z
        the default is 'alt+z'

        This option allows the user to alter the slice
        acquisition pattern in the GERT_Reco script.

        See 'to3d -help' for more information.

    -gert_to3d_prefix PREFIX : set to3d PREFIX in output script

        e.g. -gert_to3d_prefix anatomy

        When creating a GERT_Reco script that calls 'to3d', this
        option will be applied to '-prefix'.

        The default prefix is 'OutBrick_run_NNN', where NNN is the
        run number found in the images.

      * Caution: this option should only be used when the output
        is for a single run.
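
        As a sketch for a single run (assuming Dimon's
        '-infile_pattern' input option; the file pattern and the
        prefix here are hypothetical):

          Dimon -infile_pattern 'run2/*.dcm' -GERT_Reco \
                -gert_to3d_prefix epi_run2 -quit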

  ---------------------------------------------------------------

  Author: R. Reynolds - version 2.17 (Nov 24, 2008)




AFNI program: DriveSuma

Usage: A program to drive suma from the command line.
       DriveSuma [options] -com COM1 -com COM2 ...
Mandatory parameters:
---------------------
   -com COM: Command to be sent to SUMA.
             At least one command must be used
             and various commands can follow in
             succession.
        COM is the command string and consists
            of at least an action ACT. Some actions
            require additional parameters to follow
            ACT. 
 Actions (ACT) and their parameters:
 -----------------------------------
 o pause [MSG]: Pauses DriveSuma and awaits
                an 'Enter' to proceed with
                other commands. 
                MSG is an optional collection of
                strings that can be displayed as
                a prompt to the user. See usage
                in examples below.

 o sleep DUR: Put DriveSuma to sleep for a duration DUR.
              DUR is the duration, specified with something
              like 2s (or 2) or 150ms
              See usage in examples below.

 o show_surf: Send surface to SUMA.
     + Mandatory parameters for show_surf action:
        -surf_label LABEL: A label (identifier) to assign to the
                           surface
        -i_TYPE SURF: Name of surface file, see surface I/O 
                      options below for details.
     + Optional parameters for show_surf action:
          -surf_state STATE: Name the state of that surface
          -surf_winding WIND: Winding of triangles. Choose 
                              from ccw or cw (normals on sphere
                              pointing in). This option affects
                              the lighting of the surface.
     + Example show_surf: 
        1- Create some surface
        2- Start SUMA
        3- Send new surface to SUMA
        ---------------------------
        CreateIcosahedron -rd 4
        suma -niml &
        echo 'Wait until suma is ready then proceed.'
        DriveSuma -com show_surf -label icoco \
                       -i_fs CreateIco_surf.asc

 o node_xyz: Assign new coordinates to surface in SUMA
     + Mandatory parameters for action node_xyz:
        -surf_label LABEL: A label to identify the target 
                           surface
        -xyz_1D COORDS.1D: A 1D formatted file containing a new 
                           coordinate for each of the nodes 
                           forming the surface. COORDS.1D must 
                           have three columns.
                           Column selectors can be used here as 
                           they are in AFNI.
     + Example node_xyz (needs surface from 'Example show_surf')
        1- Create some variation on the coords of the surface
        2- Send new coordinates to SUMA
        3- Manipulate the x coordinate now
        4- Send new coordinates again to SUMA
        -------------------------------------
        ConvertSurface -i_fs CreateIco_surf.asc \
                       -o_1D radcoord radcoord \
                       -radial_to_sphere 100
        DriveSuma -com node_xyz -label icoco \
                       -xyz_1D radcoord.1D.coord'[0,1,2]'
        1deval -a radcoord.1D.coord'[0]' -expr 'sin(a)*100' \
            > xmess.1D ;1dcat xmess.1D radcoord.1D.coord'[1,2]' \
            > somecoord.1D.coord ; rm xmess.1D
        DriveSuma -com node_xyz -label icoco \
                       -xyz_1D somecoord.1D.coord

 o viewer_cont: Apply settings to viewer or viewer controller
     + Optional parameters for action viewer_cont:
       (Parameter names reflect GUI labels or key strokes.)
        -load_view VIEW_FILE: Load a previously
                              saved view file (.vvs).
                              Same as 'File-->Load View'
        -load_do   DO_FILE: Load a displayable object file
                            For detailed information on DO_FILE's format,
                            see the section under suma's  help (ctrl+h)
                            where the function of Ctrl+Alt+s is detailed.
        -key KEY_STRING: Act as if the key press KEY_STRING
                         was applied in the viewer.
                         ~ Not all key presses from interactive
                         mode are allowed here.
                         ~ Available keys and their variants are:
                         b, m, n, p, r, t, z, up, down, left, right,
                         and F1 to F8.
                         ~ Key variants are specified this way:
                         ctrl+Up or ctrl+alt+Down etc.
                         ~ For help on key actions consult SUMA's
                         GUI help.
                         ~ Using multiple keys in the same command
                         might not result in the serial display of
                         the effect of each key, unless 'd' modifier
                         is used as shown further below. For example,
                         -key right -key right would most likely
                         produce one image rotated twice rather than
                         two images, each turned right once.
           The -key string can be followed by modifiers:
              For example, -key:r5:s0.2 has two modifiers,
              r5 and s0.2. All modifiers are separated by ':'.
              'r' Repeat parameter, so r5 would repeat the 
                  same key 5 times.
              's' Sleep parameter, so s0.2 would sleep for 0.2
                  seconds between repeated keys.
              'd' Immediate redisplay flag. That is useful
                  when you are performing a succession of keys and
                  want to ensure each individual one gets displayed
                  and recorded (most likely). Otherwise, successive
                  keys may only display their resultant. 'd' is used
                  automatically with 's' modifier.
              'p' Pause flag. Requires user intervention to proceed.
        -viewer VIEWER: Specify which viewer should be acted 
                        upon. Default is viewer 'A'. Viewers
                        must be created first (ctrl+n) before
                        they can be acted upon.
                        You can also refer to viewers with integers
                        0 for A, 1 for B, etc.
        -viewer_width or (-width) WIDTH: Set the width in pixels of
                                     the current viewer.
        -viewer_height or (-height) HEIGHT: Set the height in pixels of
                                     the current viewer.
        -viewer_size WIDTH HEIGHT : Convenient combo of -viewer_width 
                                    and -viewer_height
        -viewer_position X Y: Set position on the screen
     + Example viewer_cont (assumes all previous examples have
       been executed and suma is still running).
        - a series of commands that should be obvious.
       -------------------------------------
       DriveSuma -com  viewer_cont -key R -key ctrl+right
       DriveSuma -com  viewer_cont -key:r3:s0.3 up  \
                       -key:r2:p left -key:r5:d right \
                       -key:r3 z   -key:r5 left -key F6
       DriveSuma -com  viewer_cont -key m -key down \
                 -com  sleep 2s -com viewer_cont -key m \
                       -key:r4 Z   -key ctrl+right
       DriveSuma -com  viewer_cont -key m -key right \
                 -com  pause press enter to stop this misery \
                 -com  viewer_cont -key m 

 o recorder_cont: Apply commands to recorder window
     + Optional parameters for action recorder_cont:
       -anim_dup DUP: Save DUP copies of each frame into movie
                      This has the effect of slowing movies down
                      at the expense of file size, of course.
                       DUP's default is set by the value of the
                       AFNI_ANIM_DUP environment variable.
                       To set DUP back to its default value, use -anim_dup 0.
       -save_as PREFIX.EXT: Save image(s) in recorder
                             in the format determined by
                             extension EXT.
                             Allowed extensions are:
                             agif or gif: Animated GIF (movie)
                             mpeg or mpg: MPEG (movie)
                             jpeg or jpg: JPEG (stills)
                             png: PNG (stills)
       -save_index IND: Save one image indexed IND (start at 0)
       -save_range FROM TO: Save images from FROM to TO 
       -save_last: Save last image (default for still formats)
       -save_last_n N: Save last N images
       -save_all: Save all images (default for movie formats)
       -cwd ABSPATH: Set ABSPATH as SUMA's working directory. 
                     This path is used for storing output files
                     or loading dsets.
     + Example recorder_cont (assumes there is a recorder window)
       currently open from SUMA.
       -------------------------------------
       DriveSuma -com  recorder_cont -save_as allanimgif.agif \
                 -com  recorder_cont -save_as lastone.jpg \
                 -com  recorder_cont -save_as three.jpg -save_index 3\
                 -com  recorder_cont -save_as some.png -save_range 3 6

 o surf_cont: Apply settings to surface controller.
     + Optional parameters for action surf_cont:
       (Parameter names reflect GUI labels.)
       -surf_label LABEL: A label to identify the target surface
       -load_dset DSET: Load a dataset
           ! NOTE: When using -load_dset you can follow it
                   with -surf_label in order to attach
                   the dataset to a particular target surface.
       -load_col COL: Load a colorfile named COL.
                      Similar to what one loads under
                      SUMA-->ctrl+s-->Load Col
                       COL contains 4 columns of
                       the following format:
                       n r g b
                       where n is the node index and 
                       r g b are three float values between 0 and 1
                       specifying the color of each node.
       -view_surf_cont y/n: View surface controller
       -switch_surf LABEL: switch state to that of surface 
                           labeled LABEL and make that surface 
                           be in focus.
       -switch_dset DSET: switch dataset to DSET
       -view_dset y/n: Set view toggle button of DSET
       -1_only y/n: Set 1_only toggle button of DSET
       -switch_cmap CMAP: switch colormap to CMAP
       -load_cmap CMAP.1D.cmap: load and switch colormap in 
                                file CMAP.1D.cmap
       -I_sb ISB: Switch intensity to ISBth column (sub-brick)
       -I_range IR0 IR1: set intensity range from IR0 to IR1.
                         If only one number is given, the range
                         is symmetric from -|IR0| to |IR0|.
       -T_sb TSB: Switch threshold to TSBth column (sub-brick)
                  Set TSB to -1 to turn off thresholding.
       -T_val THR: Set threshold to THR
       -Dim DIM: Set the dimming factor.
     + Example surf_cont (assumes all previous examples have
       been executed and suma is still running).
       - Obvious chicaneries to follow:
       --------------------------------
       echo 1 0 0 > bbr.1D.cmap; echo 1 1 1 >> bbr.1D.cmap; \
       echo 0 0  1 >> bbr.1D.cmap
       IsoSurface -shape 4 128 -o_ply blooby.ply
       quickspec -spec blooby.spec -tn ply blooby.ply
       SurfaceMetrics -curv -spec blooby.spec \
                      -surf_A blooby -prefix blooby      
       DriveSuma -com show_surf -surf_label blooby \
                      -i_ply blooby.ply -surf_winding cw \
                      -surf_state la_blooby
       DriveSuma -com surf_cont -load_dset blooby.curv.1D.dset \
                      -surf_label blooby -view_surf_cont y
       DriveSuma -com surf_cont -I_sb 7 -T_sb 8 -T_val 0.0
       DriveSuma -com surf_cont -I_range 0.05 -T_sb -1
       DriveSuma -com surf_cont -I_sb 8 -I_range -0.1 0.1 \
                      -T_val 0.02 -Dim 0.4
       DriveSuma -com surf_cont -switch_dset Convexity -1_only y
       DriveSuma -com surf_cont -switch_cmap roi64 -1_only n
       DriveSuma -com surf_cont -view_dset n
       DriveSuma -com surf_cont -switch_dset blooby.curv.1D.dset \
                      -view_surf_cont n -I_range -0.05 0.14
       DriveSuma -com surf_cont -load_cmap bbr.1D.cmap

 o kill_suma: Close suma and quit.

Advice:
-------
   If you get a colormap in your recorded image, it is
   because the last thing you drew was the surface controller,
   which has an openGL surface for a colormap. In such cases,
   force a redisplay of the viewer with something like:
      -key:r2:d m 
                  where the m key is pressed twice (nothing
                  changes in the setup, but the surface is
                  redisplayed nonetheless because of the 'd'
                  key option).
Options:
--------
   -examples: Show all the sample commands and exit
   -C_demo: execute a preset number of commands
            which are meant to illustrate how one
            can communicate with SUMA from one's 
            own C code. Naturally, you'll need to
            look at the source code file SUMA_DriveSuma.c
      Example:
      suma -niml &
      DriveSuma -C_demo

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
            Requires presence of 3 objects; the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: FSread_annot

Usage:  
  FSread_annot   <-input ANNOTFILE>  
                 [-col_1D annot.1D.col]  
                 [-roi_1D annot.1D.roi] 
                 [-cmap_1D annot.1D.cmap]
                 [-show_FScmap]
                 [-help]  
  Reads a FreeSurfer annotation file and outputs
  an equivalent ROI file and/or a colormap file 
  for use with SUMA.

  Required options:
     -input ANNOTFILE: Binary formatted FreeSurfer
                       annotation file.
      and at least one of the optional options below.
  Optional options:
     -col_1D annot.1D.col: Write a 4-column 1D color file. 
                           The first column is the node
                           index followed by r g b values.
                           This color file can be imported 
                           using the 'c' option in SUMA.
                           If no colormap was found in the
                           ANNOTFILE then the file has 2 columns
                           with the second being the annotation
                           value.
     -roi_1D annot.1D.roi: Write a 5-column 1D roi file.
                           The first column is the node
                           index, followed by its index in the
                           colormap, followed by r g b values.
                           This roi file can be imported 
                           using the 'Load' button in SUMA's
                           'Draw ROI' controller.
                           If no colormap was found in the
                           ANNOTFILE then the file has 2 columns
                           with the second being the annotation
                           value. 
     -cmap_1D annot.1D.cmap: Write a 4-column 1D color map file.
                             The first column is the color index,
                             followed by r g b and flag values.
                             The name of each color is inserted
                             as a comment because 1D files do not
                             support text data.
     -show_FScmap: Show the info of the colormap in the ANNOT file.
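
  Example (a sketch; the FreeSurfer file names here are hypothetical):

     FSread_annot -input lh.aparc.annot \
                  -roi_1D lh.aparc.1D.roi \
                  -cmap_1D lh.aparc.1D.cmap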


++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: Ifile

Usage: Ifile [Options] <Files>

	[-nt]: Do not use time stamp to identify complete scans.
	       Complete scans are identified from 'User Variable 17'
	       in the image header.
	[-sp Pattern]: Slice acquisition pattern.
	               Sets the slice acquisition pattern.
	               The default option is alt+z.
	               See to3d -help for acceptable options.
	[-od Output_Directory]: Set the output directory in @RenamePanga.
	                        The default is 'afni'.

	<Files>: Strings of wildcards defining series of
	              GE-Real Time (GERT) images to be assembled
	              as an afni brick. Example:
	              Ifile '*/I.*'
	          or  Ifile '083/I.*' '103/I.*' '123/I.*' '143/I.*'

	The program attempts to identify complete scans from the list
	of images supplied on the command line and generates the commands
	necessary to turn them into AFNI bricks using the script @RenamePanga.
	If at least one complete scan is identified, a script file named GERT_Reco
	is created; executing it creates the afni bricks, placed in the afni directory.

How does it work?
	With the -nt option: Ifile uses the variable 'User Variable 17' in the 
	I-file's header. This variable appears to be incremented each time a new
	scan is started. (Thanks to S. Marrett for discovering the elusive variable.)
	Without -nt option: Ifile first examines the modification time for each image and 
	infers from that which images form a single scan. Consecutive images that are less 
	than T seconds apart belong to the same scan. T is set based on the mean
	time delay difference between successive images. The threshold currently
	used works for the test data that we have. If it fails for your data, let us
	know and supply us with the data. Once a set of images is grouped into a 
	scan, the sequence of slice locations is analysed; duplicate slices, missing
	slices, and incomplete volumes are detected. Sets of images that do not pass
	these tests are ignored.

Preserving Time Info: (not necessary with -nt option but does not hurt to preserve anyway)
	It is important to preserve the file modification time info as you copy or untar
	the data. If you neglect to do so and fail to write down where each scan ends
	and/or begins, you might have a hell of a time reconstructing your data.
	When copying image directories, use  cp -rp ???/*  and when untarring 
	the archive, use  tar --atime-preserve -xf Archive.tar  on linux.
	On Sun and SGI, tar -xf Archive.tar preserves the time info.

Future Improvements:
	Out of justifiable laziness, and for other less convincing reasons, I have left 
	Ifile and @RenamePanga separate. They can be combined into one program but its
	usage would become more complicated. At any rate, the user should not notice any
	difference since all they have to do is run the script GERT_Reco that is created
	by Ifile.

	   Dec. 12/01 (Last modified July 24/02) SSCC/NIMH 
	Robert W. Cox(rwcox@nih.gov) and Ziad S. Saad (saadz@mail.nih.gov)




AFNI program: Imon

Imon - monitor real-time acquisition of I-files

    This program is intended to be run during a scanning session
    on a GE scanner, to monitor the collection of I-files.  The
    user will be notified of any missing slice or any slice that
    is acquired out of order.

    It is recommended that the user run 'Imon' just after the
    scanner is first prepped, and then watch for error messages
    during the scanning session.  The user should terminate the
    program when they are done with all runs.

    Note that 'Imon' can also be run separately from scanning, either
    to verify the integrity of I-files, or to create a GERT_Reco2
    script, which is used to create AFNI datasets.

    At the present time, the user must use <ctrl-c> to terminate
    the program.

  ---------------------------------------------------------------
  usage: Imon [options] -start_dir DIR

  ---------------------------------------------------------------
  examples (no real-time options):

    Imon -start_dir 003
    Imon -help
    Imon -start_dir 003 -GERT_reco2 -quit
    Imon -start_dir 003 -nt 120 -start_file 043/I.901
    Imon -debug 2 -nice 10 -start_dir 003

  examples (with real-time options):

    Imon -start_dir 003 -rt
    Imon -start_dir 003 -rt -host pickle
    Imon -start_dir 003 -nt 120 -rt -host pickle

  ** detailed real-time example:

    This example scans data starting from directory 003, expects
    160 repetitions (TRs), and invokes the real-time processing,
    sending data to a computer called some.remote.computer.name
    (where afni is running, and which considers THIS computer to
    be trusted - see the AFNI_TRUSTHOST environment variable).

    Multiple DRIVE_AFNI commands are passed through '-drive_afni'
    options, one requesting to open an axial image window, and
    another requesting an axial graph, with 160 data points.

    See README.driver for acceptable DRIVE_AFNI commands.

    Also, multiple commands specific to the real-time plugin are
    passed via the '-rt_cmd' options.  The PREFIX command sets the
    prefix for the datasets output by afni.  The GRAPH_XRANGE and
    GRAPH_YRANGE commands set the graph dimensions for the 3D
    motion correction graph (only).  And the GRAPH_EXPR command
    is used to replace the 6 default motion correction graphs with
    a single graph, according to the given expression: the square
    root of the average squared entry of the 3 rotation params
    (roll, pitch and yaw), ignoring the 3 shift parameters (dx, dy
    and dz).

    See README.realtime for acceptable real-time plugin commands.

    Imon                                                   \
       -start_dir 003                                      \
       -nt 160                                             \
       -rt                                                 \
       -host some.remote.computer.name                     \
       -drive_afni 'OPEN_WINDOW axialimage'                \
       -drive_afni 'OPEN_WINDOW axialgraph pinnum=160'     \
       -rt_cmd 'PREFIX eat.more.cheese'                    \
       -rt_cmd 'GRAPH_XRANGE 160'                          \
       -rt_cmd 'GRAPH_YRANGE 1.02'                         \
       -rt_cmd 'GRAPH_EXPR sqrt((d*d+e*e+f*f)/3)'            

  ---------------------------------------------------------------
  notes:

    - Once started, this program exits only when a fatal error
      occurs (single missing or out of order slices are not
      considered fatal).

      ** This has been modified.  The '-quit' option tells Imon
         to terminate once it runs out of new data to use.

    - To terminate this program, use <ctrl-c>.

  ---------------------------------------------------------------
  main option:

    -start_dir DIR     : (REQUIRED) specify starting directory

        e.g. -start_dir 003

        The starting directory, DIR, must be of the form 00n,
        where n is a digit.  The program then monitors all
        directories of the form ??n, created by the GE scanner.

        For instance, with the option '-start_dir 003', this
        program watches for new directories 003, 023, 043, etc.

  ---------------------------------------------------------------
  real-time options:

    -rt                : specify to use the real-time facility

        With this option, the user tells 'Imon' to use the real-time
        facility, passing each volume of images to an existing
        afni process on some machine (as specified by the '-host'
        option).  Whenever a new volume is acquired, it will be
        sent to the afni program for immediate update.

        Note that afni must also be started with the '-rt' option
        to make use of this.

        Note also that the '-host HOSTNAME' option is not required
        if afni is running on the same machine.

    -drive_afni CMND   : send 'drive afni' command, CMND

        e.g.  -drive_afni 'OPEN_WINDOW axialimage'

        This option is used to pass a single DRIVE_AFNI command
        to afni.  For example, 'OPEN_WINDOW axialimage' will open
        such an axial view window on the afni controller.

        Note: the command 'CMND' must be given in quotes, so that
              the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.driver for more details.

    -host HOSTNAME     : specify the host for afni communication

        e.g.  -host mycomputer.dot.my.network
        e.g.  -host 127.0.0.127
        e.g.  -host mycomputer
        the default host is 'localhost'

        The specified HOSTNAME represents the machine that is
        running afni.  Images will be sent to afni on this machine
        during the execution of 'Imon'.

        Note that the environment variable AFNI_TRUSTHOST must be
        set on the machine running afni.  Set this equal to the
        name of the machine running Imon (so that afni knows to
        accept the data from the sending machine).

    -rev_byte_order   : pass the reverse of the BYTEORDER to afni

        Reverse the byte order that is given to afni.  In case the
        detected byte order is not what is desired, this option
        can be used to reverse it.

        See the (obsolete) '-swap' option for more details.

    -rt_cmd COMMAND   : send COMMAND(s) to realtime plugin

        e.g.  -rt_cmd 'GRAPH_XRANGE 120'
        e.g.  -rt_cmd 'GRAPH_XRANGE 120 \n GRAPH_YRANGE 2.5'

        This option is used to pass commands to the realtime
        plugin.  For example, 'GRAPH_XRANGE 120' will set the
        x-scale of the motion graph window to 120 (repetitions).

        Note: the command 'COMMAND' must be given in quotes, so
        that the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.realtime for more details.

    -swap  (obsolete) : swap data bytes before sending to afni

        Since afni may be running on a different machine, the byte
        order may differ there.  This option will force the bytes
        to be reversed, before sending the data to afni.

        ** As of version 3.0, this option should not be necessary.
           'Imon' detects the byte order of the image data, and then
           passes that information to afni.  The realtime plugin
           will (now) decide whether to swap bytes in the viewer.

           If for some reason the user wishes to reverse the order
           from what is detected, '-rev_byte_order' can be used.

    -zorder ORDER     : slice order over time

        e.g. -zorder alt
        e.g. -zorder seq
        the default is 'alt'

        This option allows the user to alter the slice
        acquisition order in real-time mode, similar to the slice
        pattern of the '-sp' option.  The main differences are:
            o  only two choices are presently available
            o  the syntax is intentionally different (from that
               of 'to3d' or the '-sp' option)

        ORDER values:
            alt   : alternating in the Z direction (over time)
            seq   : sequential in the Z direction (over time)
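
        As a sketch in real-time mode (reusing the example host
        'pickle' from above):

          Imon -start_dir 003 -rt -host pickle -zorder seq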

  ---------------------------------------------------------------
  other options:

    -debug LEVEL       : show debug information during execution

        e.g.  -debug 2
        the default level is 1, the domain is [0,3]
        the '-quiet' option is equivalent to '-debug 0'

    -help              : show this help information

    -hist              : display a history of program changes

    -nice INCREMENT    : adjust the nice value for the process

        e.g.  -nice 10
        the default is 0, and the maximum is 20
        a superuser may set it as low as the minimum of -19

        A positive INCREMENT to the nice value of a process will
        lower its priority, allowing other processes more CPU
        time.

    -nt VOLUMES_PER_RUN : set the number of time points per run

        e.g.  -nt 120

        With this option, if a run stalls before the specified
        VOLUMES_PER_RUN is reached (notably including the first
        run), the user will be notified.

        Without this option, Imon will compute the expected number
        of time points per run based on the first run (and will
        allow the value to increase based on subsequent runs).
        Therefore Imon would not detect a stalled first run.

    -quiet             : show only errors and final information

    -quit              : quit when there is no new data

        With this option, the program will terminate once a delay
        in new data occurs.  This is most appropriate to use when
        the image files have already been collected.

    -start_file S_FILE : have Imon process starting at S_FILE

        e.g.  -start_file 043/I.901

        With this option, any earlier I-files will be ignored
        by Imon.  This is a good way to start processing a later
        run, if it is desired not to look at the earlier data.

        In this example, all files in directories 003 and 023
        would be ignored, along with everything in 043 up through
        I.900.  So 043/I.901 might be the first file in run 2.

    -version           : show the version information

  ---------------------------------------------------------------
  GERT_Reco2 options:

    -GERT_Reco2        : output a GERT_Reco2 script

        Create a script called 'GERT_Reco2', similar to the one
        that Ifile creates.  This script may be run to create the
        AFNI datasets corresponding to the I-files.

    -gert_outdir OUTPUT_DIR  : set output directory in GERT_Reco2

        e.g. -gert_outdir subject_A7
        e.g. -od subject_A7
        the default is '-gert_outdir afni'

        This will add '-od OUTPUT_DIR' to the @RenamePanga command
        in the GERT_Reco2 script, creating new datasets in the
        OUTPUT_DIR directory, instead of the 'afni' directory.

    -sp SLICE_PATTERN  : set output slice pattern in GERT_Reco2

        e.g. -sp alt-z
        the default is 'alt+z'

        This option allows the user to alter the slice
        acquisition pattern in the GERT_Reco2 script.

        See 'to3d -help' for more information.

  ---------------------------------------------------------------

  Author: R. Reynolds - version 3.3a (March 22, 2005)

                        (many thanks to R. Birn)




AFNI program: IsoSurface

Usage: A program to perform isosurface extraction from a volume.
  Based on code by Thomas Lewiner (see below).

  IsoSurface  < -input VOL | -shape S GR >
              < -isoval V | -isorange V0 V1 | -isocmask MASK_COM >
              [< -o_TYPE PREFIX>]
              [< -debug DBG >]

  Mandatory parameters:
     You must use one of the following two options:
     -input VOL: Input volume.
     -shape S GR: Built in shape.
                  where S is the shape number, 
                  between 0 and 9 (see below), 
                  and GR is the grid size (like 64).
                  If you use -debug 1 with this option
                  a .1D volume called mc_shape*.1D is
                  written to disk. Watch the debug output
                  for a command suggesting how to turn
                  this 1D file into a BRIK volume for viewing
                  in AFNI.
     You must use one of the following iso* options:
     -isoval V: Create isosurface where volume = V
     -isorange V0 V1: Create isosurface where V0 <= volume < V1
     -isocmask MASK_COM: Create isosurface where MASK_COM != 0
        For example: -isocmask '-a VOL+orig -expr (1-bool(a-V))' 
        is equivalent to using -isoval V. 
     NOTE: -isorange and -isocmask are only allowed with -xform mask
            See -xform below for details.

  Optional Parameters:
     -xform XFORM:  Transform to apply to volume values
                    before searching for sign change
                    boundary. XFORM can be one of:
            mask: values that meet the iso* conditions
                  are set to 1. All other values are set
                  to -1. This is the default XFORM.
            shift: subtract V from the dataset and then 
                   search for the 0 isosurface. This has the
                   effect of constructing the V isosurface
                   if your dataset has a continuum of values.
                   This option can only be used with -isoval V.
            none: apply no transforms. This assumes that
                  your volume has a continuum of values 
                  from negative to positive and that you
                  are seeking the 0 isosurface.
                  This option can only be used with -isoval 0.
     -o_TYPE PREFIX: prefix of output surface.
        where TYPE specifies the format of the surface
        and PREFIX is, well, the prefix.
        TYPE is one of: fs, 1d (or vec), sf, ply.
        Default is: -o_ply 
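
     Example (a sketch; the input dataset name is hypothetical):

        IsoSurface -input mask+orig -isoval 1 -o_ply mask_surf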

 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
       fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
            the name of the parent surface for the patch
            using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
             the default encoding is ASCII; otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.


     -debug DBG: debug levels of 0 (default), 1, 2, 3.
        This is no Rick Reynolds debug, which is oft nicer
        than the results, but it will do.

  Built In Shapes:
     0: Cushin
     1: Sphere
     2: Plane
     3: Cassini
     4: Blooby
     5: Chair
     6: Cyclide
     7: 2 Torus
     8: mc case
     9: Drip

  NOTE:
  The code for the heart of this program is a translation of:
  Thomas Lewiner's C++ implementation of the algorithm in:
  Efficient Implementation of Marching Cubes' Cases with Topological Guarantees
  by Thomas Lewiner, Hélio Lopes, Antônio Wilson Vieira and Geovan Tavares 
  in Journal of Graphics Tools. 
  http://www-sop.inria.fr/prisme/personnel/Thomas.Lewiner/JGT.pdf

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: MakeColorMap

Usage1: 
MakeColorMap <-fn Fiducials_Ncol> [-pos] [-ah prefix] [-h/-help]
    Creates a colormap of N colors that contains the fiducial colors.
    -fn Fiducials_Ncol: Fiducial colors and their indices in the color
                        map are listed in file Fiducials_Ncol.
       Each row contains 4 tab delimited values:
       R G B i
       R G B values are between 0 and 1 and represent the 
       i-th color in the colormap. i should be between 0 and
       N-1, N being the total number of colors in the colormap.

Usage2: 
MakeColorMap <-f Fiducials> <-nc N> [-sl] [-ah prefix] [-h/-help]
    Creates a colormap of N colors that contains the fiducial colors.
    -f Fiducials:  Fiducial colors are listed in an ascii file Fiducials. 
       Each row contains 3 tab delimited R G B values between 0 and 1.
    -nc N: Total number of colors in the color map.
    -sl: (optional, default is NO) if used, the last color in the Fiducial 
       list is omitted. This is useful in creating cyclical color maps.

Usage3: 
MakeColorMap <-std MapName>
    Returns one of SUMA's standard colormaps. Choose from:
    rgybr20, ngray20, gray20, bw20, bgyr19, 
    matlab_default_byr64, roi128, roi256, roi64
 or if the colormap is in a .pal file:  
MakeColorMap -cmapdb Palfile -cmap MapName

Usage4:
MakeColorMap <-fscolut lbl0 lbl1> 
             [<-fscolutfile FS_COL_LUT>]
   Create AFNI/SUMA colormaps of FreeSurfer colors
   indexed between lbl0 and lbl1. 
   -fscolut lbl0 lbl1: Get colors indexed between
                        lbl0 and lbl1; non-existing
                        integer labels are given a 
                        gray color.
   -fscolutfile FS_COL_LUT: Use color LUT file FS_COL_LUT
                            Default is to use 
                            $FREESURFER_HOME/FreeSurferColorLUT.txt
   -show_fscolut: Show all of the LUT

Common options to all usages:
    -ah prefix: (optional) AFNI Hex format; the default is
                RGB values in decimal form.
       Use this option if you want a color map formatted to fit 
       in AFNI's .afnirc file. The colormap is written out as 
       prefix_01 = #xxxxxxx 
       prefix_02 = #xxxxxxx
       etc...
    -ahc prefix: (optional) AFNI Hex format, ready to go into
                 pbardefs.h 
    -h or -help: displays this help message.
    -flipud: Flip the map upside down. If the colormap is being 
             created for interactive loading into SUMA with the 'New'
             button from the 'Surface Controller' you will need
             to flip it upside down. 

Example Usage 1: Creating a colormap of 20 colors that goes from 
Red to Green to Blue to Yellow to Red.

   The file FidCol_Nind contains the following:
   1 0 0 0
   0 1 0 5
   0 0 1 10
   1 1 0 15
   1 0 0 19

   The following command will generate the RGB colormap in decimal form:
   MakeColorMap -fn FidCol_Nind 

   The following command will generate the colormap and write it as 
   an AFNI color palette file:
   MakeColorMap -fn FidCol_Nind -ah TestPalette > TestPalette.pal

Example Usage 2: Creating a cyclical version of the colormap in usage 1:

   The file FidCol contains the following:
   1 0 0
   0 1 0
   0 0 1
   1 1 0
   1 0 0

   The following command will generate the RGB colormap in decimal form:
   MakeColorMap -f FidCol -sl -nc 20 

Example Usage 3: 
   MakeColorMap -std ngray20 

Example Usage 4: 
   MakeColorMap -fscolut 0 255

To read in a new colormap into AFNI, either paste the contents of 
TestPalette.pal in your .afnirc file or read the .pal file using 
AFNI as follows:
1- run afni
2- Define Function --> right click on Inten (over colorbar) 
   --> Read in palette (choose TestPalette.pal)
3- set the #colors chooser (below colorbar) to 20 (the number of colors in 
   TestPalette.pal).
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 
++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

    Ziad S. Saad & Rick R. Reynolds SSCC/NIMH/NIH saadz@mail.nih.gov    Tue Apr 23 14:14:48 EDT 2002




AFNI program: MapIcosahedron

Usage: MapIcosahedron <-spec specFile> 
                      [-rd recDepth] [-ld linDepth] 
                      [-morph morphSurf] 
                      [-it numIt] [-prefix fout] 
                      [-verb] [-help] [...]

Creates new versions of the original-mesh surfaces using the mesh
of an icosahedron. 

   -spec specFile: spec file containing original-mesh surfaces
        including the spherical and warped spherical surfaces.

   -rd recDepth: recursive (binary) tessellation depth for the icosahedron.
        (optional, default:3) See CreateIcosahedron for more info.

   -ld linDepth: number of edge divides for linear icosahedron tessellation 
        (optional, default uses binary tessellation).
        See CreateIcosahedron -help for more info.

   *Note: Enter -1 for recDepth or linDepth to let the program 
          choose a depth that best approximates the number of nodes in
          the original-mesh surfaces.

   -morph morphSurf: 

        Old Usage:
        ----------
        State name of spherical surface to which icosahedron 
        is inflated. Typical example for FreeSurfer surfaces would be 
        'sphere.reg', and that's the default used by the program. 

        New Usage:
        ----------
        State name or filename of spherical surface to which icosahedron 
        is inflated. Typical example for FreeSurfer surfaces would be 
        'sphere.reg', and that's the default used by the program. 
        Searching is first done assuming a State name and if that does
        not return exactly one match, a search based on the filename
        is carried out.

   The following four options affect the geometric center and radius
   settings of morphSurf. In previous versions, the geometric center
   was set to the center of mass. A better estimate of the geometric
   center is now obtained and this might make standard-mesh surfaces
   less sensitive to distortions in the spherical surfaces.
   With this change, the coordinates of the nodes will be slightly
   different from those in previous versions. If you insist on the old 
   method, use the option -use_com below.
   ----------------------------------------------------------------
   -sphere_at_origin: Geometric center of morphSurf sphere is at 
                      0.0 0.0 0.0. This is usually the case but
                      if you do not know, let the program guess.

   -sphere_center cx cy cz: Geometric center of morphSurf sphere. 
                            If not specified, it will be estimated.
      Note: It is best to specify cx cy cz or use -sphere_at_origin
            when the center is known.

   -use_com: (ONLY for backward compatibility)
             Use this option to make the center of mass of morphSurf
             be the geometric center estimate. This is not optimal;
             use this option only for backward compatibility.
             The new results, i.e. without -use_com, should always be
             better.

   -sphere_radius R: Radius of morphSurf sphere. If not specified,
                     this would be the average radius of morphSurf.
                     
   ----------------------------------------------------------------

   -it numIt: number of smoothing iterations 
        (optional, default none).

   -prefix FOUT: prefix for output files.
        (optional, default 'std.')

   -morph_sphere_check: Do some quality checks on morphSurf and exit.
                        This option now replaces -sph_check and -sphreg_check
                        See output of SurfQual -help for more info on this
                        option's output.

**********************************************
-sph_check and -sphreg_check are now OBSOLETE. 

   [-sph_check]:(OBSOLETE, use -morph_sphere_check instead) 
                Run tests for checking the spherical surface (sphere.asc)
                The program exits after the checks.
                This option is for debugging FreeSurfer surfaces only.

   [-sphreg_check]: (OBSOLETE, use -morph_sphere_check instead)
                Run tests for checking the spherical surface (sphere.reg.asc)
                The program exits after the checks.
                This option is for debugging FreeSurfer surfaces only.

   -sph_check and -sphreg_check are mutually exclusive.

**********************************************

   -all_surfs_spec: When specified, includes original-mesh surfaces 
       and icosahedron in output spec file.
       (optional, default does not include original-mesh surfaces)
   -verb: verbose.
   -write_nodemap: (default) Write a file showing the mapping of each 
                   node in the icosahedron to the closest
                   three nodes in the original mesh.
                   The file is named by the prefix FOUT
                   suffixed by MI.1D
  NOTE: This option is useful for understanding what contributed
        to a node's position in the standard meshes (STD_M).
        Say a triangle on the STD_M version of the white matter
        surface (STD_WM) looks fishy, such as being large and 
        obtuse compared to other triangles in STD_M. Right-
        click on that triangle and get one of its nodes (Ns),
        then search for Ns in column 0 of the MI.1D file. The three
        integers (N0, N1, N2) on the same row as Ns will point 
        to the three nodes on the original meshes (sphere.reg) 
        to which Ns (from the icosahedron) was mapped. Go to N1
        (or N0 or N2) on the original sphere.reg and examine the
        mesh there, which is best seen in mesh view mode ('p' button).
        It will most likely be the case that the sphere.reg mesh
        there is highly distorted (quite compressed).
   -no_nodemap: Opposite of -write_nodemap.

NOTE 1: The algorithm used by this program is applicable
      to any surfaces warped to a spherical coordinate
      system. However for the moment, the interface for
      this algorithm only deals with FreeSurfer surfaces.
      This is only due to user demand and available test
       data. If you want to apply this algorithm using surfaces
       created by other programs such as SureFit and Caret, 
       send saadz@mail.nih.gov a note and some test data.

NOTE 2: At times, the standard-mesh surfaces are visibly
      distorted in some locations from the original surfaces.
      So far, this has only occurred when original spherical 
      surfaces had topological errors in them. 
      See SurfQual -help and SUMA's online documentation 
      for more detail.
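
Example (a sketch; the spec file name is hypothetical):

   MapIcosahedron -spec fred_lh.spec -ld 60 -prefix std.60.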

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009


          Brenna D. Argall LBC/NIMH/NIH  
(contact) Ziad S. Saad     SSC/NIMH/NIH saadz@mail.nih.gov





AFNI program: ROI2dataset

Usage: 
   ROI2dataset <-prefix dsetname> [...] <-input ROI1 ROI2 ...>
               [<-of ni_bi|ni_as|1D>] 
               [<-dom_par_id idcode>] 
    This program transforms a series of ROI files
    to a node dataset. This data set will contain
    the node indices in the first column and their
    ROI values in the second column.
    Duplicate node entries (nodes that are part of
    multiple ROIs) will be ignored. You will be
    notified when this occurs. 

Mandatory parameters:
    -prefix dsetname: Prefix of output dataset.
                      Program will not overwrite existing
                      datasets.
    -input ROI1 ROI2....: ROI files to turn into a 
                          data set. This parameter MUST
                          be the last one on command line.

Optional parameters:
(all optional parameters must be specified before the
 -input parameters.)
    -h | -help: This help message
    -of FORMAT: Output format of dataset. FORMAT is one of:
                ni_bi: NIML binary
                ni_as: NIML ascii (default)
                1D   : 1D AFNI format.
    -dom_par_id id: Idcode of the domain parent.
                    When specified, only ROIs that have the same
                    domain parent are included in the output.
                    If id is not specified, then the first
                    domain parent encountered in the ROI list
                    is adopted as dom_par_id.
                    1D ROI files do not have domain parent 
                    information. They will be added to the 
                    output data under the chosen dom_par_id.
    -pad_to_node max_index: Output a full dset from node 0 
                            to node max_index (a total of 
                            max_index + 1 nodes). Nodes that
                            are not part of any ROI will get
                            a default label of 0 unless you
                            specify your own padding label.
    -pad_label padding_label: Use padding_label (an integer) to
                            label nodes that do not belong
                            to any ROI. Default is 0.
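
 Example (a hypothetical invocation; the ROI and prefix names are
 placeholders): turn two ROI files into a single 1D node dataset:

     ROI2dataset -prefix both_rois -of 1D \
                 -input lh_aud.niml.roi lh_vis.niml.roi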

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 



AFNI program: ROIgrow

Usage: ROIgrow <-i_TYPE SURF> <-roi_nodes ROI.1D> <-lim LIM>
               [-prefix PREFIX]
       A program to expand an ROI on the surface.
       The ROI is grown from each node by a user-specified
       distance (geodesic, measured along the mesh). See the
       example following the options below.

  Mandatory Parameters:
     -i_TYPE SURF: Specify input surface.
             You can also use -t* and -spec and -surf
             methods to input surfaces. See below
             for more details.
     -roi_labels ROI_LABELS: Data column containing
                             integer labels of ROIs.
                             Each integer label gets
                             grown separately.
                             If ROI_LABELS is in niml
                             format, then you need not
                             use -roi_nodes because node 
                             indices are stored with the 
                             labels.
        Notice: With this option, an output is created for
                each label. The output contains two columns:
                One with node indices and one with the label.
                When this option is not used, you get one
                column out containing node indices only.
     -full_list: Output a row for each node on the surface.
                 Nodes not in the grown ROI receive a 0 for
                 a label. This option is ONLY for use with
                 -roi_labels. This way you can combine 
                 multiple grown ROIs with, say, 3dcalc.
                 For such operations, you are better off 
                 using powers of 2 for integer labels.
     -roi_nodes ROI_INDICES: Data column containing
                     node indices of ROI. 
                     Use the [] column
                     specifier if you have more than
                     one column in the data file.
                     To get node indices from a niml dset
                     use the '[i]' selector.
     -grow_from_edge: Grow ROIs from their edges rather than
                      the brute force default. This might 
                      make the program faster on large ROIs  
                      and large surfaces.
     -lim LIM: Distance to cover from each node.
               The units of LIM are those of the surface's
               node coordinates. Distances are calculated
               along the surface's mesh.
  Optional Parameters:
     -prefix PREFIX: Prefix of 1D output dataset.
                     Default is ROIgrow
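
 Example (a hypothetical invocation; the surface and ROI file names are
 placeholders): grow an ROI by 10 (in the units of the surface's node
 coordinates, usually mm) along the mesh:

     ROIgrow -i_fs lh.smoothwm.asc -roi_nodes ROInodes.1D \
             -lim 10 -prefix ROIgrown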
 
 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: RSFgen
++ RSFgen: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
Sample program to generate random stimulus functions.                  
                                                                       
Usage:                                                                 
RSFgen                                                          
-nt n            n = length of time series                             
-num_stimts p    p = number of input stimuli (experimental conditions) 
[-nblock i k]    k = block length for stimulus i  (1<=i<=p)            
                     (default: k = 1)                                  
[-seed s]        s = random number seed                                
[-quiet]         flag to suppress screen output                        
[-one_file]      place stimulus functions into a single .1D file       
[-one_col]       write stimulus functions as a single column of decimal
                   integers (default: multiple columns of binary nos.) 
[-prefix pname]  pname = prefix for p output .1D stimulus functions    
                   e.g., pname1.1D, pname2.1D, ..., pnamep.1D          
                                                                       
The following Random Permutation, Markov Chain, and Input Table options
are mutually exclusive.                                                
                                                                       
Random Permutation options:                                            
-nreps i r       r = number of repetitions for stimulus i  (1<=i<=p)   
[-pseed s]       s = stim label permutation random number seed         
                                     p                                 
                 Note: Require n >= Sum (r[i] * k[i])                  
                                    i=1                                
                                                                       
Markov Chain options:                                                  
-markov mfile    mfile = file containing the transition prob. matrix   
[-pzero z]       probability of a zero (i.e., null) state              
                     (default: z = 0)                                  
                                                                       
Input Table row permutation options:                                   
[-table dfile]   dfile = filename of column or table of numbers        
                 Note: dfile may have a column selector attached       
                 Note: With this option, all other input options,      
                       except -seed and -prefix, are ignored           
                                                                       
                                                                       
Warning: This program will overwrite pre-existing .1D files            
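
Example (a hypothetical run): 3 stimulus conditions, 8 repetitions each
in blocks of 10 time points, over a 300-point time series (satisfying
n >= Sum r[i]*k[i], since 300 >= 3*8*10 = 240):

    RSFgen -nt 300 -num_stimts 3 \
           -nreps 1 8 -nreps 2 8 -nreps 3 8 \
           -nblock 1 10 -nblock 2 10 -nblock 3 10 \
           -seed 1234567 -prefix RandStim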
                                                                       



AFNI program: SampBias

Usage:
  SampBias -spec SPECFILE -surf SURFNAME -plimit limit -dlimit limit -out FILE

  Mandatory parameters:
     -spec SpecFile: Spec file containing input surfaces.
     -surf SURFNAME: Name of input surface.
     -plimit limit: Maximum length of path along surface, in mm.
                    Default is 50 mm.
     -dlimit limit: Maximum Euclidean distance, in mm.
                    Default is 1000 mm.
     -out FILE: Output dataset.
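
 Example (a hypothetical invocation; spec, surface, and output names are
 placeholders):

     SampBias -spec lh.spec -surf lh.smoothwm \
              -plimit 50 -dlimit 1000 -out lh_bias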


   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

 blame Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: ScaleToMap

Usage:  ScaleToMap <-input IntFile icol vcol>  
    [-cmap MapType] [-cmapfile Mapfile] [-cmapdb Palfile] [-frf] 
    [-clp/-perc_clp clp0 clp1] [-apr/-anr range]
    [-interp/-nointerp/-direct] [-msk msk0 msk1] [-nomsk_col]
    [-msk_col R G B] [-br BrightFact]
    [-h/-help] [-verb] [-showmap] [-showdb]

    -input IntFile icol vcol: input data.
       Infile: 1D formatted ascii file containing node values
       icol: index of node index column 
       (-1 if the node index is implicit)
       vcol: index of node value column.
       Example: -input ValOnly.1D -1 0 
       for a 1D file containing node values
       in the first column and no node indices.
       Example: -input NodeVal.1D 1 3
       for a 1D file containing node indices in
       the SECOND column and node values in the 
       FOURTH column (index counting begins at 0)
    -v and -iv options are now obsolete.
       Use -input option instead.
    -cmap MapName: (optional, default RGYBR20) 
       choose one of the standard colormaps available with SUMA:
       RGYBR20, BGYR19, BW20, GRAY20, MATLAB_DEF_BYR64, 
       ROI64, ROI128
       You can also use AFNI's default paned color maps:
       The maps are labeled according to the number of 
       panes and their sign. Example: afni_p10
       uses the positive 10-pane afni colormap.
       afni_n10 is the negative counterpart.
       These maps are meant to be used with
       the options -apr and -anr listed below.
       You can also load non-default AFNI colormaps
       from .pal files (AFNI's colormap format); see option
       -cmapdb below.
    -cmapdb Palfile: read color maps from AFNI .pal file
       In addition to the default paned AFNI colormaps, you
       can load colormaps from a .pal file.
       To access maps in the Palfile you must use the -cmap option
       with the label formed by the name of the palette, its sign
       and the number of panes. For example, the following palette:
       ***PALETTES deco [13]
       should be accessed with -cmap deco_n13
       ***PALETTES deco [13+]
       should be accessed with -cmap deco_p13
    -cmapfile Mapfile: read color map from Mapfile.
       Mapfile: 1D formatted ascii file containing the colormap.
                Each row defines a color in one of two ways:
                R  G  B        or
                R  G  B  f     
       where R, G, B specify the red, green and blue values, 
       between 0 and 1, and f specifies the fraction of the range
       reached at this color. Think of the values at the right of 
       AFNI's colorbar. The use of fractions (optional) allows you 
       to create non-linear color maps where colors cover differing 
       fractions of the data range.
       Sample colormap with positive range only (a la AFNI):
               0  0  1  1.0
               0  1  0  0.8
               1  0  0  0.6
               1  1  0  0.4
               0  1  1  0.2
       Note the order in which the colors and fractions are specified.
       The bottom color of the +ve colormap should be at the bottom of the
       file and have the lowest +ve fraction. The fractions here define a
       linear map so they are not necessary, but they illustrate the format
       of the colormaps.
       Comparable colormap with negative range included:
               0  0  1   1.0
               0  1  0   0.6
               1  0  0   0.2
               1  1  0  -0.2
               0  1  1  -0.6
       The bottom color of the -ve colormap should have the 
       lowest -ve fraction. 
       You can use -1 -1 -1 for a color to indicate a no color
       (like the 'none' color in AFNI). Values mapped to this
       'no color' will be masked as with the -msk option.
        If your 1D color file has more than three or four columns,
       you can use the [] convention adopted by AFNI programs
       to select the columns you need.
    -frf: (optional) first row in file is the first color.
       As explained in the -cmapfile option above, the first 
       or bottom (indexed 0) color of the colormap should be 
       at the bottom of the file. If the opposite is true, use
       the -frf option to signal that.
       This option is only useful with -cmapfile.
    -clp/-perc_clp clp0 clp1: (optional, default no clipping)
       Clips values in IntVect. If -clp is used, then values in vcol
       < clp0 are clipped to clp0 and values > clp1 are clipped to clp1.
       If -perc_clp is used, then vcol is clipped to the values 
       corresponding to the clp0 and clp1 percentiles.
       The -clp/-perc_clp options are mutually exclusive with -apr/-anr.
    -apr range: (optional) clips the values in IntVect to [0 range].
       This option allows the range of the colormap to be set as in AFNI, 
       with the Positive colorbar (Pos selected).
       This option is mutually exclusive with -clp/-perc_clp.
       Set range = 0 for autoranging.
       If you use -apr and your colormap contains fractions, you
       must use a positive range colormap.
    -anr range: (optional) clips the values in IntVect to [-range range].
       This option allows the range of the colormap to be set as in AFNI, 
       with the Negative colorbar (Pos NOT selected).
       This option is mutually exclusive with -clp/-perc_clp.
       Set range = 0 for autoranging.
       If you use -anr and your colormap contains fractions, you
       must use a negative range colormap.
    -interp: (default) use color interpolation between colors in colormap
       If a value is assigned between two colors on the colorbar,
       it receives a color that is an interpolation between those two colors.
       This is the default behaviour in SUMA and AFNI when using the continuous
       colorscale. Mutually exclusive with -nointerp and -direct options.
    -nointerp: (optional) turns off color interpolation within the colormap
       Color assignment is done a la AFNI when the paned colormaps are used.
       Mutually exclusive with -interp and -direct options.
    -direct: (optional) values (typecast to integers) are mapped directly
       to index of color in color maps. Example: value 4 is assigned
       to the 5th (index 4) color in the color map (same for values
       4.2 and 4.7). This mapping scheme is useful for ROI indexed type
        data. Negative data values are set to 0 and values >= N_col 
        (the number of colors in the colormap) are set to N_col - 1.
    -msk_zero: (optional) values that are 0 will get masked no matter
       what colormaps or mapping schemes you are using. 
       AFNI masks all zero values by default.
    -msk msk0 msk1: (optional, default is no masking) 
       Values in vcol (BEFORE clipping is performed) 
       between [msk0 msk1] are masked by the masking color.
    -msk_col R G B: (optional, default is 0.3 0.3 0.3) 
       Sets the color of masked nodes.
    -nomsk_col: do not output nodes that got masked.
       It does not make sense to use this option with
       -msk_col.
    -br BrightFact: (optional, default is 1) 
       Applies a brightness factor to the colors 
       of the colormap and the mask color.
    -h or -help: displays this help message.

   The following options are for debugging and sanity checks.
    -verb: (optional) verbose mode.
    -showmap: (optional) print the colormap to the screen and quit.
       This option is for debugging and sanity checks.
       You can use MakeColorMap in Usage3 to write out a colormap
       in its RGB form.
    -showdb: (optional) print the colors and colormaps of AFNI
       along with any loaded from the file Palfile.
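
 Example (a hypothetical invocation; the data file name is a placeholder
 and the colorized output is assumed to go to stdout, hence the
 redirection): map values in the FOURTH column, with node indices in the
 SECOND column, clipping values to [0 5]:

     ScaleToMap -input NodeVal.1D 1 3 -cmap BGYR19 \
                -clp 0 5 > NodeColors.1D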
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

    Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 
      July 31/02 




AFNI program: SpharmDeco

Spherical Harmonics Decomposition of a surface's coordinates or data
Model:
Given a data vector 'd' defined over the domain of N nodes of surface 'S'
The weighted spherical harmonics representation of d (termed Sd) is given by:
            L    l  -l(l+1)s                         
    Sd = SUM  SUM  e         B     Y                 
          l=0 m=-l            l,m   l,m              
 where
 L: Largest degree of spherical harmonics
 Y    : Spherical harmonic of degree l and order m
  l,m
        Y is an (L+1 by N) complex matrix.
 B    : Coefficient associated with harmonic Y    
  l,m                                         l,m 
 s: Smoothing parameter, ranging between 0 for no smoothing
    and 0.1 for extreme smoothing. The larger s, the higher
    the attenuation of higher degree harmonics. 
    Small values of s (0.005) can be used to reduce Gibbs ringing artifacts.


Usage:
       SpharmDeco  <-i_TYPE S> <-unit_sph UNIT_SPH_LABEL> <-l L>
                   [<-i_TYPE SD> ... | <-data D>] 
                   [-bases_prefix BASES] 
                   [<-prefix PREFIX>] [<-o_TYPE SDR> ...]
                   [-debug DBG]  [-sigma s]
  
Input: 
  -i_TYPE S: Unit sphere, isotopic to the surface domain over which the 
                    data to be decomposed is defined.
                    This surface is used to calculate the basis functions 
                    up to order L.
                    These basis functions are saved under 
                    the prefix BASES_PREFIX.
                    Note that this surface does not need to be of 
                    radius 1. 
  -unit_sph UNIT_SPH_LABEL: Provide the label of the unit sphere. 
                   If you do not do that, the program won't know 
                   which of the two -i_TYPE options specifies the 
                   unit sphere.
  -l L: Decomposition order
  One of:
     -i_TYPE SD: A surface that is isotopic to S and whose node coordinates 
                 provide three data vectors (X, Y, Z) to be decomposed
                 See help section on surface input to understand the
                 syntax of -i_TYPE
                  You can specify multiple surfaces to be processed by 
                  using repeated instances of the -i_TYPE SD option. This is
                  more computationally efficient than doing each surface
                  separately.
   or 
      -data D: A dataset whose K columns are to be individually decomposed. 

   -bases_prefix BASES_PREFIX: If -unit_sph is used, this option saves the
                               basis functions under the prefix BASES_PREFIX.
                               Otherwise, if BASES_PREFIX exists on disk, the
                               program will reload them. This is intended to
                               speed up the program; however, in practice, 
                               this may not be the case.
                           Note that the bases are not reusable with a
                              different unit sphere. 
  -debug DBG: Debug levels (1-3)
  -sigma s: Smoothing parameter (0 .. 0.001) which weighs down the 
            contribution of higher order harmonics.
  -prefix PREFIX: Write out the reconstructed data into dataset PREFIX
                  and write the beta coefficients for each processed 
                  data column. Note that when you are using node 
                   coordinates from J surfaces, the output will be for 
                  3*J columns with the 1st triplet of columns for the first 
                  surface's X Y Z coordinates and the 2nd triplet for the
                  second surface's coordinates, etc.
  -o_TYPE SDR: Write out a new surface with reconstructed coordinates.
               This option is only valid if -i_TYPE SD is used.
               See help section on surface output to understand the
               syntax of -o_TYPE.
               If you specify multiple (M) SD surfaces, you will get M
               reconstructed surfaces out. They can be named in one of
               two ways depending on how many -o_TYPE options you use.
               If only one -o_TYPE is used, then M names are automatically
               generated by appending .sXX to SDR. Alternately, you can 
               name all the output surfaces by using M -o_TYPE options.

Output files:
  Harmonics of each order l are stored in a separate
     file with the order l in its name. For example for l = 3, the harmonics
     are stored in a file called  BASES_PREFIX.sph03.1D.
     In the simplest form, this file is in .1D format and contains an
      (l+1 x N) complex matrix. The real part constitutes the negative degree
      harmonics and the imaginary part contains the positive degree ones.
      (Internally, the complex matrix is turned into a real matrix of size 
       2l+1 x N )
   Beta coefficients are stored in one file for each of the input K data
      columns. For example, the beta coefficients for data column 2 are in: 
      PREFIX.beta.col002.1D.dset. 
      The (l+1 x 2l+1) matrix in each file is real valued, with each row 
      containing the coefficients for one degree l.
  Surface or data reconstruction files are named based on PREFIX. 

This program is based on Moo Chung's matlab implementation of spherical
  harmonics decomposition which is presented in: 
  Chung, M.K., Dalton, K.M., Shen, L., Evans, A.C., Davidson, R.J. 2006. 
  Unified cortical surface morphometry and its application to quantifying
  amount of gray matter. 
  Technical Report 1122. 
  Department of Statistics, University of Wisconsin-Madison.
  http://www.stat.wisc.edu/~mchung/papers/TR1122.2006.pdf 

-------------------------------------------
 For examples, see script @Spharm.examples  
-------------------------------------------
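
 A minimal sketch (file names are placeholders, and the -unit_sph label
 is assumed to match the sphere's file name; see @Spharm.examples for
 tested commands). The sphere serves as the unit sphere S, and a second,
 isotopic surface provides the X Y Z coordinates to decompose:

     SpharmDeco -i_fs lh.sphere.asc -unit_sph lh.sphere.asc \
                -i_fs lh.smoothwm.asc -l 30 -sigma 0.005 \
                -bases_prefix sph_bases -prefix lh_deco \
                -o_fs lh_smoothwm_recon.asc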

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
             In addition to outSurf, you need to specify
             the name of the parent surface for the patch,
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
             the default encoding is ASCII; otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.

  SUMA communication options:
      -talk_suma: Send progress with each iteration to SUMA.
      -refresh_rate rps: Maximum number of updates to SUMA per second.
                         The default is the maximum speed.
      -send_kth kth: Send the kth element to SUMA (default is 1).
                     This allows you to cut down on the number of elements
                     being sent to SUMA.
      -sh SumaHost: Name (or IP address) of the computer running SUMA.
                    This parameter is optional; the default is 127.0.0.1 
      -ni_text: Use NI_TEXT_MODE for data transmission.
      -ni_binary: Use NI_BINARY_MODE for data transmission.
                  (default is ni_binary).
      -feed_afni: Send updates to AFNI via SUMA's talk.



   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     



AFNI program: SpharmReco

Spherical Harmonics Reconstruction from a set of harmonics 
and their corresponding coefficients.

Usage: 
  SpharmReco <-i_TYPE S> <-l L>
             <-bases_prefix BASES>
             <-coef BETA.0> <-coef BETA.1> ...
             [<-prefix PREFIX>] [<-o_TYPE SDR> ...]
             [-debug DBG]  [-sigma s]
Input:
  -i_TYPE SURF: SURF is a surface that is only used to provide
                the topology of the mesh (the nodes' connections)
  -l L: Decomposition order
  -bases_prefix BASES_PREFIX: Files containing the bases functions (spherical
                              harmonics). See SpharmDeco for generating these
                              files.
   -coef COEF.n: COEF.n is the coefficient file used to recompose 
                 the nth data column. These files are created with SpharmDeco.
                You can specify N coefficient files by repeating the 
                option on command line. If N is a multiple 
                of three AND you use -o_TYPE option, then each three 
                consecutive files are considered to form the XYZ coordinates
                of a surface. See sample commands in @Spharm.examples 
  -prefix PREFIX: Write out the reconstructed data into dataset PREFIX. 
                  the output dataset contains N columns; one for each of the
                  COEF.n files.
  -o_TYPE SDR: Write out a new surface with reconstructed coordinates.
               This requires N to be a multiple of 3, so 6 -coef options
               will result in 2 surfaces written to disk. The naming of the
               surfaces depends on the number of -o_TYPE options used, much 
               like in SpharmDeco
  -debug DBG: Debug levels (1-3)
  -sigma s: Smoothing parameter (0 .. 0.001) which weighs down the 
            contribution of higher order harmonics.

-----------------------------------------------------------------------
 For more detail, references, and examples, see script @Spharm.examples  
-----------------------------------------------------------------------
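
 A minimal sketch (file names are placeholders, following the SpharmDeco
 sketch above; three -coef files form one X Y Z triplet, so one surface
 is written out):

     SpharmReco -i_fs lh.smoothwm.asc -l 30 \
                -bases_prefix sph_bases \
                -coef lh_deco.beta.col000.1D.dset \
                -coef lh_deco.beta.col001.1D.dset \
                -coef lh_deco.beta.col002.1D.dset \
                -o_fs lh_recon.asc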

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
             In addition to outSurf, you need to specify
             the name of the parent surface for the patch,
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
             the default encoding is ASCII; otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.

  SUMA communication options:
      -talk_suma: Send progress with each iteration to SUMA.
      -refresh_rate rps: Maximum number of updates to SUMA per second.
                         The default is the maximum speed.
      -send_kth kth: Send the kth element to SUMA (default is 1).
                     This allows you to cut down on the number of elements
                     being sent to SUMA.
      -sh SumaHost: Name (or IP address) of the computer running SUMA.
                    This parameter is optional; the default is 127.0.0.1 
      -ni_text: Use NI_TEXT_MODE for data transmission.
      -ni_binary: Use NI_BINARY_MODE for data transmission.
                  (default is ni_binary).
      -feed_afni: Send updates to AFNI via SUMA's talk.


   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     



AFNI program: Surf2VolCoord

Usage: Surf2VolCoord_demo <-i_TYPE SURFACE> 
                      <-grid_parent GRID_VOL> 
                      [-grid_subbrick GSB]
                      [-sv SURF_VOL] 
                      [-one_node NODE]
 
  Illustrates how surface coordinates relate to the voxel grid.
  The program outputs surface and equivalent volume coordinates
  for all nodes in the surface after it is aligned via its sv.
  The code is intended as a source code demo.

  Mandatory Parameters:
     -i_TYPE SURFACE: Specify input surface.
             You can also use -t* and -spec and -surf
             methods to input surfaces. See below
             for more details.
     -prefix PREFIX: Prefix of output dataset.
     -grid_parent GRID_VOL: Specifies the grid for the
                  output volume.
  Optional Parameters:
     -grid_subbrick GSB: Sub-brick from which data are taken.
     -one_node NODE: Output results for node NODE only.

The output is lots of text, so you're better off
redirecting it to a file.
Once you load a surface and its surface volume,
its node coordinates are transformed based on the
surface format type and the transforms stored in
the surface volume. At this stage, the node coordinates
are in what we call RAImm DICOM, where the x coordinate runs
from right (negative) to left (positive), the y coordinate
from anterior to posterior, and z from inferior to superior.
This RAI coordinate corresponds to the mm coordinates
displayed by AFNI in the top left corner of the controller
when you have RAI=DICOM order set (right click on the coordinate
text area to see the option). When you open the surface with the
same sv in SUMA and view the sv volume in AFNI, the coordinate
of a node on an anatomically correct surface should be close
to the coordinate displayed in AFNI.
In the output, RAImm is the coordinate just described for a 
particular node.
The next coordinate in the output is called 3dfind, which stands
for three dimensional float index. 3dfind is a transformation 
of the RAImm coordinates to a coordinate in the units of the
voxel grid. The voxel with the closest center to a location
at RAImm would then be at round(3dfind). In other terms, 
RAImm is the coordinate closest to voxel  
 V(round(3dfind[0]), round(3dfind[1]), round(3dfind[2])).
To see index coordinates, rather than mm coordinates in 
AFNI, set: Define Datamode --> Misc --> Voxel Coords?
Note that the index coordinates would be different for the
underlay and overlay because they are usually at different
resolution and/or orientation. To see the overlay coordinates
make sure you have 'See Overlay' turned on.
The last value in the output is the value from the chosen
sub-brick.
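
Example (a hypothetical invocation; file names are placeholders):

    Surf2VolCoord_demo -i_fs lh.smoothwm.asc \
                       -grid_parent anat+orig \
                       -sv anat+orig \
                       -one_node 0 > node0_coords.txt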

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
         are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfClust

Usage: A program to perform clustering analysis on surfaces.
  SurfClust <-spec SpecFile> 
            <-surf_A insurf> 
            <-input inData.1D dcol_index> 
            <-rmm rad>
            [-amm2 minarea]
            [-prefix OUTPREF]  
            [-out_clusterdset] [-out_roidset] 
            [-out_fulllist]
            [-sort_none | -sort_n_nodes | -sort_area]

  The program can output a table of the clusters on the surface,
  a mask dataset formed by the different clusters, and a clustered
  version of the input dataset.

  Mandatory parameters:
     -spec SpecFile: The surface spec file.
     -surf_A insurf: The input surface name.
     -input inData.1D dcol_index: The input 1D dataset
                                  and the index of the
                                  datacolumn to use
                                  (index 0 for 1st column).
                                  Values of 0 indicate 
                                  inactive nodes.
     -rmm rad: Maximum distance between an activated node
               and the cluster to which it belongs.
               Distance is measured on the surface's graph (mesh).

  Optional Parameters:
     -thresh_col tcolind: Index of thresholding column.
                          Default is column 0.
      -thresh tval: Apply thresholding prior to clustering.
                    A node n is considered if thresh_col[n] > tval.
     -athresh tval: Apply absolute thresholding prior to clustering.
                    A node n is considered if | thresh_col[n] | > tval.
      -amm2 minarea: Do not output results for clusters having
                     an area less than minarea.
     -prefix OUTPREF: Prefix for output.
                      Default is the prefix of 
                      the input dataset.
                      If this option is used, the
                      cluster table is written to a file called
                      OUTPREF_ClstTable_rXX_aXX.1D. Otherwise the
                      table is written to stdout. 
                      You can specify the output format by adding
                      extensions to OUTPREF. For example, 
                      OUTPREF.1D.dset will force the output to be 
                      in the .1D format. 
                      See ConvertDset for many more format options.
     -out_clusterdset: Output a clustered version of inData.1D 
                       preserving only the values of nodes that 
                       belong to clusters that passed the rmm and amm2
                       conditions above.
                       The clustered dset's prefix has
                        _Clustered_rXX_aXX affixed to the OUTPREF.
     -out_roidset: Output an ROI dataset with the value
                   at each node being the rank of its
                   cluster. The ROI dataset's prefix has
                    _ClstMsk_rXX_aXX affixed to the OUTPREF,
                    where XX represents the values for 
                    the -rmm and -amm2 options respectively.
                   The program will not overwrite pre-existing
                   dsets.
     -prepend_node_index: Force the output dataset to have node
                    indices in column 0 of output. Use this option
                    if you are parsing .1D format datasets.
     -out_fulllist: Output a value for all nodes of insurf.
                     This option must be used in conjunction with
                     -out_roidset and/or -out_clusterdset.
                     With this option, the output files might
                     be mostly 0, if you have small clusters.
                     However, you should use it if you want to 
                     maintain the same row-to-node correspondence
                    across multiple datasets.
     -sort_none: No sorting of ROI clusters.
     -sort_n_nodes: Sorting based on number of nodes
                    in cluster.
     -sort_area: Sorting based on area of clusters 
                 (default).
     -update perc: Pacify me when perc of the data have been
                   processed. perc is between 1% and 50%.
                   Default is no update.
     -no_cent: Do not find the central nodes.
               Finding the central node is a 
               relatively slow operation. Use
               this option to skip it.

  The cluster table output:
  A table where each row shows results from one cluster.
  Each row contains 13 columns:   
     Col. 0  Rank of cluster (sorting order).
     Col. 1  Number of nodes in cluster.
      Col. 2  Total area of cluster. Units are the 
              surface coordinates' units^2.
     Col. 3  Mean data value in cluster.
     Col. 4  Mean of absolute data value in cluster.
     Col. 5  Central node of cluster (see below).
     Col. 6  Weighted central node (see below).
     Col. 7  Minimum value in cluster.
     Col. 8  Node where minimum value occurred.
     Col. 9  Maximum value in cluster.
     Col. 10 Node where maximum value occurred.
     Col. 11 Variance of values in cluster.
     Col. 12 Standard error of the mean ( sqrt(variance/number of nodes) ).
   The CenterNode n is such that: 
   ( sum (Uia * dia * wi) ) - ( Uca * dca * sum (wi) ) is minimal
     where i is a node in the cluster
           a is an anchor node on the surface
           sum is carried over all nodes i in a cluster
           w. is the weight of a node 
              = 1.0 for central node 
              = value at node for the weighted central node
           U.. is the unit vector between two nodes
           d.. is the distance between two nodes on the graph
              (an approximation of the geodesic distance)
   If -no_cent is used, CenterNode columns are set to 0.
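
   Example (a hypothetical invocation; spec, surface, and dataset names
   are placeholders):

      SurfClust -spec lh.spec -surf_A lh.smoothwm \
                -input activation.1D.dset 0 -rmm 2.5 \
                -amm2 50 -prefix lh_clust \
                -out_roidset -out_fulllist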

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfDist

Usage: SurfDist  [OPTIONS] <SURFACE> <NODEPAIRS>
       Output the shortest distance between NODEPAIRS along
       the mesh of SURFACE.

Mandatory options:
   <SURFACE>: Surface on which distances are computed.
              (For option's syntax, see 
              'Specifying input surfaces' section below).
   <NODEPAIRS>: Specifying node pairs can be done in two ways:

      <-input NODEPAIRS>: A dataset of two columns where each row
               specifies a node pair.
               (For option's syntax, see 
              'SUMA dataset input options' section below).
   or
      <-from_node START>: Specify one starting node.
      <-input TO_NODES>: Specify one column of 'To' node indices.
                 Node pairs are between START and each node
                 in TO_NODES.
               (For option's syntax, see 
              'SUMA dataset input options' section below).

Optional stuff:
  -node_path_do PATH_DO: Output the shortest path between
                         each node pair as a SUMA Displayable
                         object.

  example 1:
     echo make a toy surface
     CreateIcosahedron
     echo Create some nodepairs
     echo 2 344 > nodelist.1D
     echo 416 489 >> nodelist.1D
     echo 123 32414 >> nodelist.1D
     echo Get distances and write out results in a 1D file
     SurfDist -i CreateIco_surf.asc \
              -input nodelist.1D \
              -node_path_do node_path   > example.1D
     echo 'The internode distances are in this file:'
     cat example.1D
     echo 'And you can visualize the paths this way:'
     suma -niml &
     DriveSuma -com show_surf -label ico \
                       -i_fs CreateIco_surf.asc \
               -com viewer_cont -load_do node_path.1D.do

  example 2: (for tcsh)
     echo Say one has a filled ROI called: Area.niml.roi on 
     echo a surface called lh.smoothwm.asc.
     set apref = Area
     set surf = lh.smoothwm.asc
     echo Create a dataset from this ROI with:
     ROI2dataset -prefix ${apref} -input ${apref}.niml.roi
     echo Get the nodes column forming the area
     ConvertDset -i ${apref}.niml.dset'[i]' -o_1D_stdout \
                              > ${apref}Nodes.1D 
     echo Calculate distance from node 85329 to each of ${apref}Nodes.1D
     SurfDist  -from_node 85329 -input ${apref}Nodes.1D \
               -i ${surf}   > ${apref}Dists.1D
     echo Combine node indices and distances from node  85329
     1dcat ${apref}Nodes.1D ${apref}Dists.1D'[2]' \
                                  > welt.1D.dset 
     echo Now load welt.1D.dset and overlay on surface
     echo Distances are in the second column
     echo 'And you can visualize the distances this way:'
     suma -niml &
     sleep 4
     DriveSuma -com show_surf -label oke \
                       -i_fs ${surf} \
               -com  pause hit enter when surface is ready \
               -com surf_cont -load_dset welt.1D.dset \
                              -I_sb 1 -T_sb 1 -T_val 0.0 

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.

  SUMA dataset input options:
      -input DSET: Read DSET as input.
                   In programs accepting multiple input datasets
                   you can use -input DSET1 -input DSET2 or 
                   -input DSET1 DSET2 ...
       NOTE: Selecting subsets of a dataset:
             Much like in AFNI, you can select subsets of a dataset
             by adding qualifiers to DSET.
           Append #SEL# to select certain nodes.
           Append [SEL] to select certain columns.
           Append {SEL} to select certain rows.
           The format of SEL is the same as in AFNI, see section:
           'INPUT DATASET NAMES' in 3dcalc -help for details.
           Append [i] to get the node index column from
                      a niml formatted dataset.
           *  SUMA does not preserve the selection order 
              for any of the selectors.
              For example:
              dset[44,10..20] is the same as dset[10..20,44]



 SUMA mask options:
      -n_mask INDEXMASK: Apply operations to nodes listed in
                            INDEXMASK  only. INDEXMASK is a 1D file.
      -b_mask BINARYMASK: Similar to -n_mask, except that the BINARYMASK
                          1D file contains 1 for nodes to filter and
                          0 for nodes to be ignored.
                          The number of rows in filter_binary_mask must be
                          equal to the number of nodes forming the
                          surface.
      -c_mask EXPR: Masking based on the result of EXPR. 
                    Use like afni's -cmask options. 
                    See explanation in 3dmaskdump -help 
                    and examples in output of 3dVol2Surf -help
      NOTE: Unless stated otherwise, if n_mask, b_mask and c_mask 
            are used simultaneously, the resultant mask is the intersection
            (AND operation) of all masks.
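
      For illustration, a minimal sketch (the file names 'nodes.1D'
      and 'vals.1D' are hypothetical): to restrict an operation to
      the listed nodes that also have a positive value in column 0
      of vals.1D, one might combine:

           -n_mask nodes.1D \
           -c_mask '-a vals.1D[0] -expr step(a)'

      Since the masks are intersected, only nodes passing both
      conditions are processed.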


   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfDsetInfo

Usage: SurfDsetInfo [options] -input DSET1 -input DSET2 ...
   or: SurfDsetInfo [options] DSET1 DSET2 ... 
   Optional Params:
      -debug DBG: if DBG = 2, show dset->ngr in its entirety in NIML form.

  SUMA dataset input options:
      -input DSET: Read DSET as input.
                   In programs accepting multiple input datasets
                   you can use -input DSET1 -input DSET2 or 
                   -input DSET1 DSET2 ...
       NOTE: Selecting subsets of a dataset:
             Much like in AFNI, you can select subsets of a dataset
             by adding qualifiers to DSET.
           Append #SEL# to select certain nodes.
           Append [SEL] to select certain columns.
           Append {SEL} to select certain rows.
           The format of SEL is the same as in AFNI, see section:
           'INPUT DATASET NAMES' in 3dcalc -help for details.
           Append [i] to get the node index column from
                      a niml formatted dataset.
           *  SUMA does not preserve the selection order 
              for any of the selectors.
              For example:
              dset[44,10..20] is the same as dset[10..20,44]
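
       For illustration, a minimal sketch (the dataset name
       'data.niml.dset' is hypothetical): to restrict the report to
       the first three columns of a dataset, one might use:

           SurfDsetInfo -input data.niml.dset'[0..2]'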


   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfInfo

Usage: SurfInfo [options] <surface>
   surface: A surface specified in any of the methods 
            shown below.
   Optional Params:
     -detail DETAIL: 1 = calculate surface metrics.
     -debug DEBUG: Debugging level (2 turns LocalHead ON)
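
    For illustration, a minimal sketch (the surface name
    'lh.smoothwm.asc' is hypothetical): a typical call requesting
    surface metrics might be:

        SurfInfo -detail 1 -i lh.smoothwm.asc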
 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfMeasures

SurfMeasures - compute measures from the surface dataset(s)

  usage: SurfMeasures [options] -spec SPEC_FILE -out_1D OUTFILE.1D

    This program is meant to read in a surface or surface pair,
    and to output the user-requested measures over the surfaces.
    The surfaces must be specified in the SPEC_FILE.

 ** Use the 'inspec' command for getting information about the
    surfaces in a spec file.

    The output will be a 1D format text file, with one column
    (or possibly 3) per user-specified measure function.  Some
    functions require only 1 surface, some require 2.

    Current functions (applied with '-func') include:

        ang_norms    : angular difference between normals
        ang_ns_A     : angular diff between segment and first norm
        ang_ns_B     : angular diff between segment and second norm
        coord_A      : xyz coordinates of node on first surface
        coord_B      : xyz coordinates of node on second surface
        n_area_A     : associated node area on first surface
        n_area_B     : associated node area on second surface
        n_avearea_A  : for each node, average area of triangles (surf A)
        n_avearea_B  : for each node, average area of triangles (surf B)
        n_ntri       : for each node, number of associated triangles
        node_vol     : associated node volume between surfaces
        nodes        : node number
        norm_A       : vector of normal at node on first surface
        norm_B       : vector of normal at node on second surface
        thick        : distance between surfaces along segment

------------------------------------------------------------

  examples:

    1. For each node on the surface smoothwm in the spec file,
       fred1.spec, output the node number (the default action),
       the xyz coordinates, and the area associated with the
       node (1/3 of the total area of triangles having that node
       as a vertex).

        SurfMeasures                                   \
            -spec       fred1.spec                     \
            -sv         fred_anat+orig                 \
            -surf_A     smoothwm                       \
            -func       coord_A                        \
            -func       n_area_A                       \
            -out_1D     fred1_areas.1D                   

    2. For each node of the surface pair smoothwm and pial,
       display the:
         o  node index
         o  node's area from the first surface
         o  node's area from the second surface
         o  node's (approximate) resulting volume
         o  thickness at that node (segment distance)
         o  coordinates of the first segment node
         o  coordinates of the second segment node

         Additionally, display total surface areas, minimum and
         maximum thicknesses, and approximate total volume for the
         cortical ribbon (the sum of node volumes).

        SurfMeasures                                   \
            -spec       fred2.spec                     \
            -sv         fred_anat+orig                 \
            -surf_A     smoothwm                       \
            -surf_B     pial                           \
            -func       n_area_A                       \
            -func       n_area_B                       \
            -func       node_vol                       \
            -func       thick                          \
            -func       coord_A                        \
            -func       coord_B                        \
            -info_area                                 \
            -info_thick                                \
            -info_vol                                  \
            -out_1D     fred2_vol.1D                     

    3. For each node of the surface pair, display the:
         o  node index
         o  angular diff between the first and second norms
         o  angular diff between the segment and first norm
         o  angular diff between the segment and second norm
         o  the normal vectors for the first surface nodes
         o  the normal vectors for the second surface nodes

        SurfMeasures                                   \
            -spec       fred2.spec                     \
            -surf_A     smoothwm                       \
            -surf_B     pial                           \
            -func       ang_norms                      \
            -func       ang_ns_A                       \
            -func       ang_ns_B                       \
            -func       norm_A                         \
            -func       norm_B                         \
            -out_1D     fred2_norm_angles.1D             

    4. Similar to #3, but output extra debug info, and in
       particular, info regarding node 5000.

        SurfMeasures                                   \
            -spec       fred2.spec                     \
            -sv         fred_anat+orig                 \
            -surf_A     smoothwm                       \
            -surf_B     pial                           \
            -func       ang_norms                      \
            -func       ang_ns_A                       \
            -func       ang_ns_B                       \
            -debug      2                              \
            -dnode      5000                           \
            -out_1D     fred2_norm_angles.1D             

    5. For each node, output the  volume (approximate), thickness
       and areas, but restrict the nodes to the list contained in
       column 0 of file sdata.1D.  Furthermore, restrict those 
       nodes to the mask inferred by the given '-cmask' option.

        SurfMeasures                                     \
            -spec       fred2.spec                       \
            -sv         fred_anat+orig                   \
            -surf_A     smoothwm                         \
            -surf_B     pial                             \
            -func       node_vol                         \
            -func       thick                            \
            -func       n_area_A                         \
            -func       n_area_B                         \
            -nodes_1D   'sdata.1D[0]'                    \
            -cmask      '-a sdata.1D[2] -expr step(a-1000)' \
            -out_1D     fred2_masked.1D

------------------------------------------------------------

  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE       : SUMA spec file

        e.g. -spec fred2.spec

        The surface specification file contains a list of
        related surfaces.  In order for a surface to be
        processed by this program, it must exist in the spec
        file.

    -surf_A SURF_NAME     : surface name (in spec file)
    -surf_B SURF_NAME     : surface name (in spec file)

        e.g. -surf_A smoothwm
        e.g. -surf_A lh.smoothwm
        e.g. -surf_B lh.pial

        This is used to specify which surface(s) will be used
        by the program.  The 'A' and 'B' correspond to other
        program options (e.g. the 'A' in n_area_A).

        The '-surf_B' parameter is required only when the user
        wishes to input two surfaces.

        Any surface name provided must be unique in the spec
        file, and must match the name of the surface data file
        (e.g. lh.smoothwm.asc).

    -out_1D OUT_FILE.1D   : 1D output filename

        e.g. -out_1D pickle_norm_info.1D

        This option is used to specify the name of the output
        file.  The output file will be in the 1D ascii format,
        with 2 rows of comments for column headers, and 1 row
        for each node index.

        There will be 1 or 3 columns per '-func' option, with
        a default of 1 for "nodes".

------------------------------------------------------------

  ALPHABETICAL LISTING OF OPTIONS:

    -cmask COMMAND        : restrict nodes with a mask

        e.g.     -cmask '-a sdata.1D[2] -expr step(a-1000)'

        This option will produce a mask to be applied to the
        list of surface nodes.  The total mask size, including
        zero entries, must match the number of nodes.  If a
        specific node list is provided via the '-nodes_1D'
        option, then the mask size should match the length of
        the provided node list.
        
        Consider the provided example using the file sdata.1D.
        If a surface has 100000 nodes (and no '-nodes_1D' option
        is used), then there must be 100000 values in column 2
        of the file sdata.1D.

        Alternately, if the '-nodes_1D' option is used, giving
        a list of 42 nodes, then the mask length should also be
        42 (regardless of 0 entries).

        See '-nodes_1D' for more information.

    -debug LEVEL          : display extra run-time info

        e.g.     -debug 2
        default: -debug 0

        Valid debug levels are from 0 to 5.

    -dnode NODE           : display extra info for node NODE

        e.g. -dnode 5000

        This option can be used to display extra information
        about node NODE during surface evaluation.

    -func FUNCTION        : request output for FUNCTION

        e.g. -func thick

        This option is used to request output for the given
        FUNCTION (measure).  Some measures produce one column
        of output (e.g. thick or ang_norms), and some produce
        three (e.g. coord_A).  These options, in the order they
        are given, determine the structure of the output file.

        Current functions include:

            ang_norms    : angular difference between normals
            ang_ns_A     : angular diff between segment and first norm
            ang_ns_B     : angular diff between segment and second norm
            coord_A      : xyz coordinates of node on first surface
            coord_B      : xyz coordinates of node on second surface
            n_area_A     : associated node area on first surface
            n_area_B     : associated node area on second surface
            n_avearea_A  : for each node, average area of triangles (surf A)
            n_avearea_B  : for each node, average area of triangles (surf B)
            n_ntri       : for each node, number of associated triangles
            node_vol     : associated node volume between surfaces
            nodes        : node number
            norm_A       : vector of normal at node on first surface
            norm_B       : vector of normal at node on second surface
            thick        : distance between surfaces along segment

          Note that the node volumes are approximations.  Places
          where either normal points in the 'wrong' direction
          will be incorrect, as will be the parts of the surface
          that 'encompass' this region.  Maybe we could refer
          to this as a mushroom effect...

          Basically, expect the total volume to be around 10%
          too large.

          ** for more accuracy, try 'SurfPatch -vol' **

    -help                 : show this help menu

    -hist                 : display program revision history

        This option is used to provide a history of changes
        to the program, along with version numbers.

  NOTE: the following '-info_XXXX' options are used to display
        pieces of 'aggregate' information about the surface(s).

    -info_all             : display all final info

        This is a short-cut to get all '-info_XXXX' options.

    -info_area            : display info on surface area(s)

        Display the total area of each triangulated surface.

    -info_norms           : display info about the normals

        For 1 or 2 surfaces, this will give (if possible) the
        average angular difference between:

            o the normals of the surfaces
            o the connecting segment and the first normal
            o the connecting segment and the second normal

    -info_thick           : display min and max thickness

        For 2 surfaces, this is used to display the minimum and
        maximum distances between the surfaces, along each of
        the connecting segments.

    -info_vol             : display info about the volume

        For 2 surfaces, display the total computed volume.
        Note that this node-wise volume computation is an
        approximation, and tends to run ~10% high.

        ** for more accuracy, try 'SurfPatch -vol' **

    -nodes_1D NODELIST.1D : request output for only these nodes

        e.g.  -nodes_1D node_index_list.1D
        e.g.  -nodes_1D sdata.1D'[0]'

        The NODELIST file should contain a list of node indices.
        Output from the program would then be restricted to the
        nodes in the list.
        
        For instance, suppose that the file BA_04.1D contains
        a list of surface nodes that are located in Brodmann's
        Area 4.  To get output from the nodes in that area, use:
        
            -nodes_1D BA_04.1D
        
        For another example, suppose that the file sdata.1D has
        node indices in column 0, and Brodmann's Area indices in
        column 3.  To restrict output to the nodes in Brodmann's
        Area 4, use the pair of options:
        
            -nodes_1D 'sdata.1D[0]'                     \
            -cmask '-a sdata.1D[3] -expr (1-bool(a-4))' 

    -sv SURF_VOLUME       : specify an associated AFNI volume

        e.g. -sv fred_anat+orig

        If there is any need to know the orientation of the
        surface, a surface volume dataset may be provided.

    -ver                  : show version information

        Show version and compile date.

------------------------------------------------------------

  Author: R. Reynolds  - version 1.11 (October 6, 2004)




AFNI program: SurfMesh

Usage:
  SurfMesh <-i_TYPE SURFACE> <-o_TYPE OUTPUT> <-edges FRAC> 
           [-sv SURF_VOL]
 
  Example:
  SurfMesh -i_ply surf1.ply -o_ply surf1_half -edges 0.5

  Mandatory parameters:
     -i_TYPE SURFACE: Input surface. See below for details. 
              You can also use the -t* method or
              the -spec SPECFILE -surf SURFACE method.
     -o_TYPE OUTPUT: Output surface, see below.
     -edges FRAC: The surface will be simplified to FRAC times its
              original number of edges. The default is 0.5.
              A FRAC greater than 1 refines (upsamples) the surface
              instead; see the sketch below.

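  For illustration, a refinement counterpart to the example above
  (file names are hypothetical): since a FRAC greater than 1 refines
  rather than simplifies, doubling the edge count might look like:

  SurfMesh -i_ply surf1.ply -o_ply surf1_dense -edges 2
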
 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
       fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
            the name of the parent surface for the patch.
            using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to YES,
             the default encoding is ASCII, otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

 Originally written by Jakub Otwinowski.
 Now maintained by Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     
 This program uses the GTS library gts.sf.net
 for fun read "Fast and memory efficient polygonal simplification" (1998) 
 and "Evaluation of memoryless simplification" (1999) by Lindstrom and Turk.



AFNI program: SurfPatch

Usage:
  SurfPatch <-spec SpecFile> <-surf_A insurf> <-surf_B insurf> ...
            <-input nodefile inode ilabel> <-prefix outpref>  
            [-hits min_hits] [-masklabel msk] [-vol] [-patch2surf]

Usage 1:
  The program creates a patch of surface formed by nodes 
  in nodefile.
  Mandatory parameters:
     -spec SpecFile: Spec file containing input surfaces.
     -surf_X: Name of input surface X where X is a character
              from A to Z. If surfaces are specified using two
              files, use the name of the node coordinate file.
     -input nodefile inode ilabel: 
            nodefile is the file containing nodes defining the patch.
            inode is the index of the column containing the nodes
            ilabel is the index of the column containing labels of
                    the nodes in column inode. If you want to use
                    all the nodes in column inode, then set this 
                    parameter to -1 (default). 
                   If ilabel is not equal to 0 then the corresponding 
                   node is used in creating the patch.
                   See -masklabel option for one more variant.
     -prefix outpref: Prefix of output patch. If more than one surface
                      is entered, then the prefix will have _X added
                      to it, where X is a character from A to Z.
                      By default, the output format matches that of
                      the input surface. With that setting, checking
                      for pre-existing files is only done just before
                      writing the new patch, which can be annoying.
                      You can set the output type ahead of time with
                      the -out_type option; that way, checking for
                      pre-existing output files is done at the outset.

  Optional parameters:
     -coord_gain GAIN: Multiply node coordinates by GAIN.
                       This is useful if you have a tiny patch that needs
                       enlargement for easier viewing in SUMA.
                       Although you can zoom over very large ranges in SUMA,
                       very small patches are hard to work with because
                       SUMA's parameters are optimized for objects
                       on the order of a brain, not on the order of 1 mm.
                       WARNING: Do not use this option if you are measuring
                       the volume of a patch!
     -out_type TYPE: Type of all output patches, regardless of input surface type.
                     Choose from: FreeSurfer, SureFit, 1D and Ply.
     -hits min_hits: Minimum number of a triangle's nodes that must be
                     in the node list for that triangle to be made part
                     of the patch (1 <= min_hits <= 3). Default is 2.
     -masklabel msk: If specified, then only nodes that are labeled
                     with msk are considered for the patch.
                     This option is useful if you have an ROI dataset file
                     and wish to create a patch from one out of many ROIs
                     in that file. This option must be used with ilabel 
                     specified (not -1).
     -patch2surf: Turn surface patch into a surface where only nodes used in
                  forming the mesh are preserved.

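  For illustration, a minimal Usage 1 sketch (the spec, surface, and
  node-file names are hypothetical) that builds a patch from all the
  nodes listed in column 0 of ROInodes.1D:

     SurfPatch -spec fred.spec -surf_A lh.smoothwm \
               -input ROInodes.1D 0 -1 -prefix lh_patch
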
Usage 2:
  The program can also be used to calculate the volume between the same patch
  on two isotopic surfaces. See -vol option below.
      -vol: Calculate the volume formed by the patch on surf_A
            and surf_B. For this option, you must specify two and
            only two surfaces with the surf_A and surf_B options.
      -vol_only: Only calculate the volume, don't write out patches.

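  For illustration, a minimal Usage 2 sketch (again with hypothetical
  file names) that only reports the volume enclosed between the
  patches on the two surfaces:

     SurfPatch -spec fred.spec -surf_A lh.smoothwm -surf_B lh.pial \
               -input ROInodes.1D 0 -1 -vol_only
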
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfQual

A program to check the quality of surfaces.
Usage:
  SurfQual <-spec SpecFile> <-surf_A insurf> <-surf_B insurf> ...
             <-sphere> [-self_intersect] [-prefix OUTPREF]  

  Mandatory parameters:
     -spec SpecFile: Spec file containing input surfaces.
     -surf_X: Name of input surface X where X is a character
              from A to Z. If surfaces are specified using two
              files, use the name of the node coordinate file.
  Mesh winding consistency and 2-manifold checks are performed
  on all surfaces.
  Optional parameters:
     -summary: Provide summary of results to stdout
     -self_intersect: Check whether the surface is self-intersecting.
                      This option is rather slow, so be patient.
                      In the presence of intersections, the output file
                      OUTPREF_IntersNodes.1D.dset will contain the indices
                      of nodes forming segments that intersect the surface.
  Most other checks are specific to spherical surfaces (see option below).
     -sphere: Indicates that surfaces read are spherical.
              With this option you get the following output.
               - Absolute deviation between the distance (d) of each
                 node from the surface's center and the estimated
                 radius (r). The distances, abs(d - r), are computed
                 and written to the file OUTPREF_Dist.1D.dset .
                The first column represents node index and the 
                second is the absolute distance. A colorized 
                version of the distances is written to the file 
                OUTPREF_Dist.1D.col (node index followed 
                by r g b values). A list of the 10 largest absolute
                distances is also output to the screen.
               - Also computed is the cosine of the angle between 
                 the normal at a node and the direction vector formed
                 by the center and that node. Since both vectors
                 are normalized, the cosine of the angle is the dot product.
                On a sphere, the abs(dot product) should be 1 or pretty 
                close. Nodes where abs(dot product) < 0.9 are flagged as
                bad and written out to the file OUTPREF_BadNodes.1D.dset .
                The file OUTPREF_dotprod.1D.dset contains the dot product 
                values for all the nodes. The files with colorized results
                are OUTPREF_BadNodes.1D.col and OUTPREF_dotprod.1D.col .
                A list of the bad nodes is also output to the screen for
                convenience. You can use the 'j' option in SUMA to have
                the cross-hair go to a particular node. Use 'Alt+l' to
                have the surface rotate and place the cross-hair at the
                center of your screen.
              NOTE: For detecting topological problems with spherical
                surfaces, I find the dot product method to work best.
  Optional parameters:
     -prefix OUTPREF: Prefix of output files. If more than one surface
                      is entered, then the prefix will have _X added
                      to it, where X is a character from A to Z.
                      THIS PROGRAM WILL OVERWRITE EXISTING FILES.
                      Default prefix is the surface's label.

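  For illustration, a minimal sketch (the spec and surface names are
  hypothetical) of a typical check of a spherical surface:

     SurfQual -spec fred.spec -surf_A lh.sphere.asc \
              -sphere -self_intersect -prefix lh_qual
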
  Comments:
     - The colorized (.col) files can be loaded into SUMA (with the 'c' 
     option). By focusing on the bright spots, you can find trouble spots
     which would otherwise be very difficult to locate.
     - You should also pay attention to the messages output when the 
     surfaces are being loaded, particularly those about edges (segments 
     that join 2 nodes) shared by more than 2 triangles. For a proper
     closed surface, every segment should be shared by exactly 2 triangles. 
     For cut surfaces, segments belonging to only 1 triangle form
     the edge of that surface.
     - There are no utilities within SUMA to correct these defects.
     It is best to fix these problems with the surface creation
     software you are using.
     - Some warnings may be redundant. That should not hurt you.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfSmooth

Usage:  SurfSmooth <-SURF_1> <-met method> 

   Some methods require additional options detailed below.
   I recommend using the -talk_suma option to watch the 
   progression of the smoothing in real-time in suma.

   Method specific options:
      HEAT_07: <-input inData.1D> <-target_fwhm F>   
            This method is used to filter data
            on the surface. It is a significant
            improvement on HEAT_05.
      HEAT_05: <-input inData.1D> <-fwhm F>  
            Formerly known as HEAT, this method is used 
            to filter data on the surface. 
            Parameter choice is tricky, however, as one
            needs to take into account mesh dimensions,
            desired FWHM, and the data's starting FWHM in 
            order to make an appropriate selection.
            Consider using HEAT_07 if applicable.
            Note that this version will select the number
            of iterations to avoid precision errors.
      LM: [-kpb k] [-lm l m] [-surf_out surfname] [-iw weights]
          This method is used to filter the surface's
          geometry (node coordinates).
      NN_geom: smooth by averaging coordinates of 
               nearest neighbors.
               This method causes shrinkage of surface
               and is meant for test purposes only.

   Common options:
      [-Niter N] [-output out.1D] [-h/-help] [-dbg_n node]
      [-add_index] [-ni_text|-ni_binary] [-talk_suma] [-MASK] 


   Detailed usage:
     (-SURF_1):  An option for specifying the surface to smooth or
                 the domain over which DSET is defined.
                 (For option's syntax, see 'Specifying input surfaces'
                 section below).
     (-MASK)  :  An option to specify a node mask so that only
                 nodes in the mask are used in the smoothing.
                 See section 'SUMA mask options' for details on
                 the masking options.
      -met method: name of smoothing method to use. Choose from:
                 HEAT_07: A significant improvement on HEAT_05.
                         This method is used for filtering 
                         data on the surface and not for smoothing 
                         the surface's geometry per se. 
                         This method makes more appropriate parameter
                         choices that take into account:
                         - Numerical precision issues
                         - Mesh resolution
                         - Starting and Target FWHM
                 HEAT_05: The newer method by Chung et al. [Ref. 3&4 below]
                         Consider using HEAT_07 if applicable.
                 LM: The smoothing method proposed by G. Taubin 2000
                     This method is used for smoothing
                     a surface's geometry. See References below.
                 NN_geom: A simple nearest neighbor coordinate smoothing.
                          This interpolation method causes surface shrinkage
                          that might need to be corrected with the -match_*
                          options below. 


   Options for HEAT_07 (see @SurfSmooth.HEAT_07.examples for examples):
      -input inData : file containing data (in 1D or NIML format)
                        Each column in inData is processed separately.
                        The number of rows must equal the number of
                        nodes in the surface. You can select certain
                        columns using the [] notation adopted by AFNI's
                        programs.
                  Note: The program will infer the format of the input
                        file from the extension of inData. 
      -fwhm F: Blur by a Gaussian filter that has a Full Width at Half 
               Maximum in surface coordinate units (usually mm) of F.
               For Gaussian filters, FWHM, SIGMA (STD-DEV) and RMS are
               related by: FWHM = 2.354820 * SIGMA = 1.359556 * RMS
               The program first estimates the initial dataset's smoothness
               and determines the final FWHM (FF) that would result from 
               the added blurring by the filter of width F.
               The progression of FWHM is estimated with each iteration, 
               and the program stops when the dataset's smoothness reaches
               FF.
   or 
      -target_fwhm TF: Blur so that the final FWHM of the data is TF mm
                       This option avoids blurring already smooth data.
                       FWHM estimates are obtained from all the data
                       to be processed.
      -blurmaster BLURMASTER: Blur so that the final FWHM of dataset
                       BLURMASTER is TF mm, then use the same blurring
                       parameters on inData. In most cases, 
                       you ought to use the -blurmaster option in 
                        conjunction with options -fwhm or -target_fwhm.
                       BLURMASTER is preferably the residual timeseries 
                       (errts)  from 3dDeconvolve. 
                       If using the residual is impractical, you can 
                       use the epi time series with detrending option below.
                       The two approaches give similar results for block 
                       design data  but we have not checked for randomised
                       event related designs.
                       After detrending (see option -detrend_master), a 
                       subset of sub-bricks will be selected for estimating 
                       the smoothness.
                       Using all the sub-bricks would slow the program down.
                       The selection is similar to what is done in 
                       3dBlurToFWHM.
                       At most 32 sub-bricks are used and they are selected 
                       to be scattered throughout the timeseries. You can
                       use -bmall to force the use of all sub-bricks.
                 N.B.: Blurmaster must be a time series with a continuous
                       time axis. No catenated time series should be used
                       here.
      -detrend_master [q]: Detrend blurmaster with 2*q+3 basis functions, 
                           with q > 0.
                           The default is -1, in which case q = NT/30.
                           This option should be used when BLURMASTER is an
                           epi time series.
                           There is no need for detrending when BLURMASTER 
                           is the residual from a linear regression analysis.
      -no_detrend_master: Do not detrend the master. That would be used 
                          if you are using residuals for master.
      -detpoly_master p: Detrend blurmaster with polynomials of order p.
      -detprefix_master d: Save the detrended blurmaster into a dataset 
                           with prefix 'd'.
      -bmall: Use all sub-bricks in master for FWHM estimation.
      -detrend_in [q]: Detrend input before blurring it, then retrend 
                       it afterwards. Default is no detrending.
                       Detrending mode is similar to detrend_master.
      -detpoly_in p: Detrend input before blurring then retrend.
                     Detrending mode is similar to detpoly_master.
      -detprefix_in d: Save the detrended input into a dataset with
                       prefix 'd'.

   and optionally, one of the following two parameters:
      -Niter N: Number of iterations (default is -1).
                You can now set this parameter to -1 and have 
                the program suggest a value based on the surface's
                mesh density (average distance between nodes), 
                the desired and starting FWHM. 
                Too large or too small a number of iterations can affect 
                smoothing results. 
      -sigma  S: Bandwidth of smoothing kernel (for a single iteration).
                 S should be small (< 1) but not too small.
                 If the program is taking forever to run, with final
                 numbers of iteration in the upper hundreds, you can
                 increase the value of -sigma somewhat.
      -c_mask or -b_mask or -n_mask (see below for details):
                 Restrict smoothing to nodes in mask.
                 You should not include nodes with no data in 
                 the smoothing. Note that the mask is also applied 
                 to -blurmaster dataset and all estimations of FWHM.
                 For example:
                    If masked nodes have 0 for value in the input 
                    dataset's first (0th) sub-brick, use: 
                    -cmask '-a inData[0] -expr bool(a)'
   Notes:
   1- If you know what you are doing, you can also skip specifying
   the fwhm options and specify Niter and sigma directly.

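   For illustration, a minimal HEAT_07 sketch (the spec, surface, and
   dataset names are hypothetical) that blurs data until it reaches a
   final smoothness of 8 mm:

      SurfSmooth -spec fred.spec -surf_A lh.smoothwm -met HEAT_07 \
                 -input data.niml.dset -target_fwhm 8 \
                 -output data_sm8.niml.dset
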
   Options for HEAT_05  (Consider HEAT_07 method):
      -input inData : file containing data (in 1D or NIML format)
                        Each column in inData is processed separately.
                        The number of rows must equal the number of
                        nodes in the surface. You can select certain
                        columns using the [] notation adopted by AFNI's
                        programs.
                  Note: The program will infer the format of the input
                        file from the extension of inData. 
      -fwhm F: Effective Full Width at Half Maximum in surface 
               coordinate units (usually mm) 
               of an equivalent Gaussian filter had the surface been flat.
               With curved surfaces, the equation used to estimate FWHM is 
               an approximation. For Gaussian filters, FWHM, SIGMA 
               (STD-DEV) and RMS are related by:
               FWHM = 2.354820 * SIGMA = 1.359556 * RMS
               Blurring on the surface depends on the geodesic instead 
               of the Euclidean distances. 
               Unlike with HEAT_07, no attempt is made here at direct
               estimation of smoothness.

      Optionally, you can add one of the following two parameters:
                     (See Refs #3&4 for more details)
      -Niter N: Number of iterations (default is -1).
                You can now set this parameter to -1 and have 
                the program suggest a value based on the -fwhm value.
                Too large or too small a number of iterations can affect 
                smoothing results. Acceptable values depend on 
                the average distance between nodes on the mesh and
                the desired fwhm. 
      -sigma  S: Bandwidth of smoothing kernel (for a single iteration).
                 S should be small (< 1) and is related to the previous two
                 parameters by: F = sqrt(N) * S * 2.355
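                  As a quick worked example of that relation: with
                  N = 100 iterations and S = 0.34,
                  F = sqrt(100) * 0.34 * 2.355, or about 8 mm.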


   Options for LM:
      -kpb k: Band pass frequency (default is 0.1).
              Values should be in the range 0 < k < 10.
      -lm l m: Lambda and Mu parameters. Sample values are:
               0.6307 and -.6732
      NOTE: -lm and -kpb options are mutually exclusive.
      -surf_out surfname: Writes the surface with smoothed coordinates
                          to disk. For SureFit and 1D formats, only the
                          coord file is written out.
      NOTE: -surf_out and -output are mutually exclusive.
      -iw wgt: Set interpolation weights to wgt. You can choose from:
               Equal   : Equal weighting, fastest (default), 
                         tends to make edges equal.
               Fujiwara: Weighting based on inverse edge length.
                         Would be a better preserver of geometry when
                         mesh has irregular edge lengths.
               Desbrun : Weighting based on edge angles (slooow).
                         Removes tangential displacement during smoothing.
                         Might not be too useful for brain surfaces.
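
   For illustration, a minimal LM sketch (the spec and surface names
   are hypothetical) that smooths a surface's geometry:

      SurfSmooth -spec fred.spec -surf_A lh.pial -met LM \
                 -Niter 100 -kpb 0.1 -surf_out lh.pial_sm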

   Options for NN_geom:
      -match_size r: Adjust node coordinates of the smoothed surface to 
                   approximate the original's size.
                   Node i on the filtered surface is repositioned such 
                   that |c i| = 1/N sum(|cr j|) where
                   c and cr are the centers of the smoothed and original
                   surfaces, respectively.
                   N is the number of nodes that are within r [surface 
                   coordinate units] along the surface (geodesic) from node i.
                   j is one of the nodes neighboring i.
      -match_vol tol: Adjust node coordinates of the smoothed surface to 
                   approximate the original's volume.
                   Nodes on the filtered surface are repositioned such
                   that the volume of the filtered surface equals, 
                   within tolerance tol, that of the original surface. 
                   See option -vol in SurfaceMetrics for information about
                   and calculation of the volume of a closed surface.
      -match_area tol: Adjust node coordinates of the smoothed surface to 
                   approximate the original's surface area.
                   Nodes on the filtered surface are repositioned such
                   that the area of the filtered surface equals, 
                   within tolerance tol, that of the original surface. 
      -match_sphere rad: Project nodes of smoothed surface to a sphere
                   of radius rad. Projection is carried out along the 
                   direction formed by the surface's center and the node.
      -surf_out surfname: Writes the surface with smoothed coordinates
                          to disk. For SureFit and 1D formats, only the
                          coord file is written out.
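
   For illustration, a minimal NN_geom sketch (file names are
   hypothetical) that corrects the method's shrinkage by matching the
   original volume to within a 0.01 tolerance:

      SurfSmooth -spec fred.spec -surf_A lh.smoothwm -met NN_geom \
                 -Niter 200 -match_vol 0.01 -surf_out lh.smoothwm_sm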

   Common options:
      -Niter N: Number of smoothing iterations (default is 100).
                For practical reasons, this number must be a multiple of 2.
          NOTE 1: For HEAT method, you can set Niter to -1, in conjunction
                  with -fwhm FWHM option, and the program
                  will pick an acceptable number for you.
          NOTE 2: For LB_FEM method, the number of iterations controls the
                iteration steps (dt in Ref #1).
                dt = fwhm*fwhm / (16*Niter*log(2));
                dt must satisfy conditions that depend on the internodal
                distance and the spatial derivatives of the signals being 
                filtered on the surface.
                As a rule of thumb, if increasing Niter does not alter
                the results then your choice is fine (smoothing has
                converged).
                For an example of the artifact caused by small Niter see:
          http://afni.nimh.nih.gov/sscc/staff/ziad/SUMA/SuSmArt/DSart.html
                To avoid this problem altogether, it is better that you use 
                the newer method HEAT which does not suffer from this
                problem.
      -output OUT: Name of output file. 
                   The default is inData_sm with LB_FEM and HEAT method
                   and NodeList_sm with LM method.
             NOTE: For data smoothing methods like HEAT, if a format
                   extension, such as .1D.dset or .niml.dset, is present 
                   in OUT, then the output will be written in that format.
                   Otherwise, the format is the same as the input's.
      -overwrite : A flag to allow overwriting OUT
      -add_index : Output the node index in the first column.
                   This is not done by default.
      -dbg_n node : output debug information for node 'node'.
      -use_neighbors_outside_mask: When using -c_mask or -b_mask or -n_mask
                                   options, allow value from a node nj 
                                   neighboring node n to contribute to the 
                                   value at n even if nj is not in the mask.
                                   The default is to ignore all nodes not in
                                   the mask.

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
            For SF and 1D formats, NAME is composed of two names:
            the coord file followed by the topo file.
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
             the name of the parent surface for the patch,
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to
             YES, the default encoding is ASCII; otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.

 SUMA mask options:
      -n_mask INDEXMASK: Apply operations to nodes listed in
                            INDEXMASK  only. INDEXMASK is a 1D file.
      -b_mask BINARYMASK: Similar to -n_mask, except that the BINARYMASK
                          1D file contains 1 for nodes to filter and
                          0 for nodes to be ignored.
                           The number of rows in BINARYMASK must be
                          equal to the number of nodes forming the
                          surface.
      -c_mask EXPR: Masking based on the result of EXPR. 
                     Use it like afni's -cmask option. 
                    See explanation in 3dmaskdump -help 
                    and examples in output of 3dVol2Surf -help
      NOTE: Unless stated otherwise, if n_mask, b_mask and c_mask 
            are used simultaneously, the resultant mask is the intersection
            (AND operation) of all masks.
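       For example, a hypothetical expression restricting processing to
             nodes whose value in column 0 of a file roi.1D exceeds 0.5
             (following the -cmask convention of 3dmaskdump) would be:
                -c_mask '-a roi.1D[0] -expr step(a-0.5)'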



  SUMA communication options:
      -talk_suma: Send progress with each iteration to SUMA.
      -refresh_rate rps: Maximum number of updates to SUMA per second.
                         The default is the maximum speed.
      -send_kth kth: Send the kth element to SUMA (default is 1).
                     This allows you to cut down on the number of elements
                     being sent to SUMA.
      -sh : Name (or IP address) of the computer running SUMA.
                      This parameter is optional; the default is 127.0.0.1
      -ni_text: Use NI_TEXT_MODE for data transmission.
      -ni_binary: Use NI_BINARY_MODE for data transmission.
                  (default is ni_binary).
      -feed_afni: Send updates to AFNI via SUMA's talk.




   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

   Sample command lines for using SurfSmooth:
         The surface used in this example had no spec file, so 
         a quick.spec was created using:
         quickspec -tn 1D NodeList.1D FaceSetList.1D 

   Sample command lines for data smoothing:
 
      For HEAT_07 method, see multiple examples with data in script
                  @SurfSmooth.HEAT_07.examples

      SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met HEAT_05   \
                  -input in.1D -fwhm 8 -add_index         \
                  -output in_smh8.1D.dset 

         You can colorize the input and output data using ScaleToMap:
         ScaleToMap  -input in.1D 0 1 -cmap BGYR19       \
                     -clp MIN MAX > in.1D.col
         ScaleToMap  -input in_smh8.1D.dset 0 1 -cmap BGYR19   \
                     -clp MIN MAX > in_smh8.1D.col

         For help on using ScaleToMap see ScaleToMap -help
         Note that the MIN MAX represent the minimum and maximum
         values in in.1D. You should keep them constant in both 
         commands in order to be able to compare the resultant colorfiles.
         You can import the .col files with the 'c' command in SUMA.

         You can send the data to SUMA with each iteration.
         To do so, start SUMA with these options:
         suma -spec quick.spec -niml &
         and add these options to SurfSmooth's command line above:
         -talk_suma -refresh_rate 5

   Sample command lines for surface smoothing:
      SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met LM    \
                  -output NodeList_sm100.1D -Niter 100 -kpb 0.1   
         This command smooths the surface's geometry. The smoothed
         node coordinates are written out to NodeList_sm100.1D. 

   Sample command for considerable surface smoothing and inflation
   back to original volume:
       SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met NN_geom \
                   -output NodeList_inflated_mvol.1D -Niter 1500 \
                   -match_vol 0.01
   Sample command for considerable surface smoothing and inflation
   back to original area:
       SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met NN_geom \
                   -output NodeList_inflated_marea.1D -Niter 1500 \
                   -match_area 0.01

   References: 
      (1) M.K. Chung et al.   Deformation-based surface morphometry
                              applied to gray matter deformation. 
                              Neuroimage 18 (2003) 198-213
          M.K. Chung   Statistical morphometry in computational
                       neuroanatomy. Ph.D. thesis, McGill Univ.,
                       Montreal, Canada
      (2) G. Taubin.       Mesh Signal Processing. 
                           Eurographics 2000.
      (3) M.K. Chung et al.  Cortical thickness analysis in autism 
                             via heat kernel smoothing. NeuroImage, 
                             submitted (2005). 
             http://www.stat.wisc.edu/~mchung/papers/ni_heatkernel.pdf
      (4) M.K. Chung,  Heat kernel smoothing and its application to 
                       cortical manifolds. Technical Report 1090. 
                        Department of Statistics, U.W. Madison
             http://www.stat.wisc.edu/~mchung/papers/heatkernel_tech.pdf
   See Also:   
       ScaleToMap to colorize the output, however it is better
       to load surface datasets directly into SUMA and colorize
       them interactively.

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     



AFNI program: SurfToSurf

Usage: SurfToSurf <-i_TYPE S1> [<-sv SV1>]
                  <-i_TYPE S2> [<-sv SV1>]
                  [<-prefix PREFIX>]
                  [<-output_params PARAM_LIST>]
                  [<-node_indices NODE_INDICES>]
                  [<-proj_dir PROJ_DIR>]
                  [<-data DATA>]
                  [<-node_debug NODE>]
                  [<-debug DBG_LEVEL>]
                  [-make_consistent]
 
  This program is used to interpolate data from one surface (S2)
 to another (S1), assuming the surfaces are quite similar in
 shape but have different meshes (non-isotopic).
 This is done by projecting each node (nj) of S1 along the normal
 at nj and finding the closest triangle t of S2 that is intersected
 by this projection. Projection is actually bidirectional.
 If such a triangle t is found, the nodes (of S2) forming it are 
 considered to be the neighbors of nj.
 Values (arbitrary data, or coordinates) at these neighboring nodes
 are then transferred to nj using barycentric interpolation or 
 nearest-node interpolation.
 Nodes whose projections fail to intersect triangles in S2 are given
 nonsensical values of -1 and 0.0 in the output.

 Mandatory input:
  Two surfaces are required at input. See -i_TYPE options
  below for more information. 

 Optional input:
  -prefix PREFIX: Specify the prefix of the output file.
                  The output file is in 1D format at the moment.
                  Default is SurfToSurf
  -output_params PARAM_LIST: Specify the list of mapping
                             parameters to include in output
     PARAM_LIST can have any or all of the following:
        NearestTriangleNodes: Use Barycentric interpolation (default)
                              and output indices of 3 nodes from S2
                              that neighbor nj of S1
        NearestNode: Use only the closest node from S2 (of the three 
                     closest neighbors) to nj of S1 for interpolation
                     and output the index of that closest node.
        NearestTriangle: Output index of triangle t from S2 that
                         is the closest to nj along its projection
                         direction. 
        DistanceToSurf: Output distance (signed) from nj, along 
                        projection direction to S2.
                        This is the parameter output by the precursor
                        program CompareSurfaces
        ProjectionOnSurf: Output coordinates of projection of nj onto 
                          triangle t of S2.
        Data: Output the data from S2, interpolated onto S1
              If no data is specified via the -data option, then
               the XYZ coordinates of S2's nodes are considered
              the data.
  -data DATA: 1D file containing data to be interpolated.
              Each row i contains data for node i of S2.
              You must have one row for each node making up S2.
              In other terms, if S2 has N nodes, you need N rows
              in DATA. 
              Each column of DATA is processed separately (think
              sub-bricks, and spatial interpolation).
              You can use [] selectors to choose a subset 
              of columns.
              If -data option is not specified and Data is in PARAM_LIST
               then the XYZ coordinates of S2's nodes are the data.
  -node_indices NODE_INDICES: 1D file containing the indices of S1
                              to consider. The default is all of the
                              nodes in S1. Only one column of values is
                              allowed here, use [] selectors to choose
                              the column of node indices if NODE_INDICES
                              has multiple columns in it.
  -proj_dir PROJ_DIR: 1D file containing projection directions to use
                      instead of the node normals of S1.
                      Each row should contain one direction for each
                      of the nodes forming S1.
  -make_consistent: Force a consistency check and correct triangle 
                    orientation of S1 if needed. Triangles are also
                    oriented such that the majority of normals point
                     away from the center of the surface.
                    The program might not succeed in repairing some
                    meshes with inconsistent orientation.
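
  A sample command sketch (with hypothetical file names; thick.1D here
  holds one value per node of the second surface, S2):
     SurfToSurf -i_fs std.lh.pial.asc -i_fs lh.pial.asc \
                -data thick.1D -output_params NearestNode \
                -prefix std_thick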

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
            For SF and 1D formats, NAME is composed of two names:
            the coord file followed by the topo file.
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.
 Specifying output surfaces using -o or -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
             the name of the parent surface for the patch,
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
       byu: BYU format, ascii or binary.
       gii: GIFTI format, ascii.
            You can also enforce the encoding of data arrays
            by using gii_asc, gii_b64, or gii_b64gz for 
            ASCII, Base64, or Base64 Gzipped. 
             If the AFNI_NIML_TEXT_DATA environment variable is set to
             YES, the default encoding is ASCII; otherwise it is Base64.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -o option and let the programs guess
 the type from the extension.
   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov     
       Shruti Japee LBC/NIMH/NIH  shruti@codon.nih.gov 



AFNI program: SurfaceMetrics

Usage: SurfaceMetrics <-Metric1> [[-Metric2] ...] 
                  <-SURF_1> 
                  [-tlrc] [<-prefix prefix>]

Outputs information about a surface's mesh

   -Metric1: Replace -Metric1 with the following:
      -vol: calculates the volume of a surface.
            Volume unit is the cube of your surface's
            coordinates unit, obviously.
            Volume's sign depends on the orientation
            of the surface's mesh.
            Make sure your surface is a closed one
            and that winding is consistent.
            Use SurfQual to check the surface.
            If your surface's mesh has problems,
            the result is incorrect. 
            Volume is calculated using Gauss's theorem,
            see [Hughes, S.W. et al. 'Application of a new 
            discreet form of Gauss's theorem for measuring 
            volume' in Phys. Med. Biol. 1996].
      -conv: output surface convexity at each node.
         Output file is prefix.conv. Results in two columns:
         Col.0: Node Index
         Col.1: Convexity
         This is the measure used to shade sulci and gyri in SUMA.
         C[i] = Sum(dj/dij) over all neighbors j of i
         dj is the distance of neighboring node j to the tangent plane at i
         dij is the length of the segment ij
      -closest_node XYZ_LIST.1D: Find the closest node on the surface
                              to each XYZ triplet in XYZ_LIST.1D
                              Note that it is assumed that the XYZ
                              coordinates are in RAI (DICOM) per AFNI's
                              coordinate convention. For correspondence
                              with coordinates observed in SUMA and AFNI
                              be sure to use the proper -sv parameter for
                              the surface and XYZ coordinates in question.
         Output file is prefix.closest.1D. Results in 8 columns:
         Col.0: Index of closest node.
         Col.1: Distance of closest node to XYZ reference point.
         Col.2..4: XYZ of reference point (same as XYZ_LIST.1D, copied 
                   here for clarity).
         Col.5..7: XYZ of closest node (after proper surface coordinate
                    transformation, including the SurfaceVolume transform).
      -area: output area of each triangle. 
         Output file is prefix.area. Results in two columns:
         Col.0: Triangle Index
         Col.1: Triangle Area
      -tri_sines/-tri_cosines: (co)sine of angles at nodes forming
                                   triangles.
         Output file is prefix.(co)sine. Results in 4 columns:
         Col.0: Triangle Index
         Col.1: (co)sine of angle at node 0
         Col.2: (co)sine of angle at node 1
         Col.3: (co)sine of angle at node 2
      -tri_CoSines: Both cosines and sines.
      -tri_angles: Unsigned angles in radians of triangles.
         Col.0: Triangle Index
         Col.1: angle at node 0
         Col.2: angle at node 1
         Col.3: angle at node 2
      -node_angles: Unsigned angles in radians at nodes of surface.
         Col.0: Node Index
         Col.1: minimum angle at node 
         Col.2: maximum angle at node 
         Col.3: average angle at node 
      -curv: output curvature at each node.
         Output file is prefix.curv. Results in nine columns:
         Col.0: Node Index
         Col.1-3: vector of 1st principal direction of surface
         Col.4-6: vector of 2nd principal direction of surface
         Col.7: Curvature along T1
         Col.8: Curvature along T2
         Curvature algorithm by G. Taubin from: 
         'Estimating the tensor of curvature of surface 
         from a polyhedral approximation.'
      -edges: outputs info on each edge. 
         Output file is prefix.edges. Results in five columns:
         Col.0: Edge Index (into a SUMA structure).
         Col.1: Index of the first node forming the edge
         Col.2: Index of the second node forming the edge
         Col.3: Number of triangles containing edge
         Col.4: Length of edge.
      -node_normals: Outputs segments along node normals.
                     Segments begin at node and have a default
                     magnitude of 1. See option 'Alt+Ctrl+s' in 
                     SUMA for visualization.
      -face_normals: Outputs segments along triangle normals.
                     Segments begin at centroid of triangles and 
                     have a default magnitude of 1. See option 
                     'Alt+Ctrl+s' in SUMA for visualization.
      -normals_scale SCALE: Scale the normals by SCALE (1.0 default)
                     For use with options -node_normals and -face_normals
      -coords: Output coords of each node after any transformation 
         that is normally carried out by SUMA on such a surface.
         Col. 0: Node Index
         Col. 1: X
         Col. 2: Y
         Col. 3: Z
      -sph_coords: Output spherical coords of each node.
      -sph_coords_center x y z: Shift each node by  x y z
                                before calculating spherical
                                coordinates. Default is the
                                center of the surface.
          Both sph_coords options output the following:
          Col. 0: Node Index
          Col. 1: R (radius)
          Col. 2: T (azimuth)
          Col. 3: P (elevation)
      -boundary_nodes: Output nodes that form a boundary of a surface
                   i.e. they form edges that belong to one and only
                   one triangle.
       -internal_nodes: Output nodes that are not boundary nodes,
                    i.e. all the edges they form belong to more
                    than one triangle.

      You can use any or all of these metrics simultaneously.

     (-SURF_1):  An option for specifying the surface.
                 (For option's syntax, see 'Specifying input surfaces'
                 section below).

   -sv SurfaceVolume [VolParam for sf surfaces]: Specify a surface volume
                   for surface alignment. See ConvertSurface -help for 
                   more info.

   -tlrc: Apply Talairach transform to surface.
                   See ConvertSurface -help for more info.

   -prefix prefix: Use prefix for output files. 
                   (default is prefix of inSurf)
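
   A sample command sketch (hypothetical spec and surface names):
      SurfaceMetrics -vol -conv -area \
                     -spec quick.spec -surf_A lh.smoothwm \
                     -prefix lh_qc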

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
            For SF and 1D formats, NAME is composed of two names:
            the coord file followed by the topo file.
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

       Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 
       Mon May 19 15:41:12 EDT 2003




AFNI program: Vecwarp
Usage: Vecwarp [options]
Transforms (warps) a list of 3-vectors into another list of 3-vectors
according to the options.  Error messages, warnings, and informational
messages are written to stderr.  If a fatal error occurs, the program
exits with status 1; otherwise, it exits with status 0.

OPTIONS:
 -apar aaa   = Use the AFNI dataset 'aaa' as the source of the
               transformation; this dataset must be in +acpc
               or +tlrc coordinates, and must contain the
               attributes WARP_TYPE and WARP_DATA which describe
               the forward transformation from +orig coordinates
               to the 'aaa' coordinate system.
             N.B.: The +orig version of this dataset must also be
                   readable, since it is also needed when translating
                   vectors between SureFit and AFNI coordinates.
                   Only the .HEAD files are actually used.

 -matvec mmm = Read an affine transformation matrix-vector from file
               'mmm', which must be in the format
                   u11 u12 u13 v1
                   u21 u22 u23 v2
                   u31 u32 u33 v3
               where each 'uij' and 'vi' is a number.  The forward
               transformation is defined as
                   [ xout ]   [ u11 u12 u13 ] [ xin ]   [ v1 ]
                   [ yout ] = [ u21 u22 u23 ] [ yin ] + [ v2 ]
                   [ zout ]   [ u31 u32 u33 ] [ zin ]   [ v3 ]

 Exactly one of -apar or -matvec must be used to specify the
 transformation.
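
 For example, a matvec file containing
     1 0 0 5
     0 1 0 0
     0 0 1 0
 defines a forward transformation that shifts x by +5 mm; with a
 hypothetical file name, it could be applied as:
     Vecwarp -matvec shift.mat -input fred.orig.coord > fred.shift.coord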

 -forward    = -forward means to apply the forward transformation;
   *OR*        -backward means to apply the backward transformation
 -backward     * For example, if the transformation is specified by
                  '-apar fred+tlrc', then the forward transformation
                  is from +orig to +tlrc coordinates, and the backward
                  transformation is from +tlrc to +orig coordinates.
               * If the transformation is specified by -matvec, then
                  the matrix-vector read in defines the forward
                  transform as above, and the backward transformation
                  is defined as the inverse.
               * If neither -forward nor -backward is given, then
                  -forward is the default.

 -input iii  = Read input 3-vectors from file 'iii' (from stdin if
               'iii' is '-' or the -input option is missing).  Input
               data may be in one of the following ASCII formats:

               * SureFit .coord files:
                   BeginHeader
                   lines of text ...
                   EndHeader
                   count
                   int x y z
                   int x y z
                   et cetera...
                 In this case, everything up to and including the
                 count is simply passed through to the output.  Each
                 (x,y,z) triple is transformed, and output with the
                 int label that precedes it.  Lines that cannot be
                 scanned as 1 int and 3 floats are treated as comments
                  and are passed through to the output unchanged.
             N.B.-1: For those using SureFit surfaces created after
                      the SureFit/Caret merger (post 2005), you need
                     to use the flag -new_surefit. Talk to Donna about
                     this!
             N.B.-2: SureFit coordinates are
                   x = distance Right    of Left-most      dataset corner
                   y = distance Anterior to Posterior-most dataset corner
                   z = distance Superior to Inferior-most  dataset corner
                 For example, if the transformation is specified by
                   -forward -apar fred+tlrc
                 then the input (x,y,z) are relative to fred+orig and the
                 output (x,y,z) are relative to fred+tlrc.  If instead
                   -backward -apar fred+tlrc
                 is used, then the input (x,y,z) are relative to fred+tlrc
                 and the output (x,y,z) are relative to fred+orig.
                 For this to work properly, not only fred+tlrc must be
                 readable by Vecwarp, but fred+orig must be as well.
                 If the transformation is specified by -matvec, then
                 the matrix-vector transformation is applied to the
                 (x,y,z) vectors directly, with no coordinate shifting.

               * AFNI .1D files with 3 columns
                   x y z
                   x y z
                   et cetera...
                 In this case, each (x,y,z) triple is transformed and
                 written to the output.  Lines that cannot be scanned
                 as 3 floats are treated as comments and are passed
                 through to the output unchanged.
               N.B.: AFNI (x,y,z) coordinates are in DICOM order:
                   -x = Right     +x = Left
                   -y = Anterior  +y = Posterior
                   -z = Inferior  +z = Superior

 -output ooo = Write the output to file 'ooo' (to stdout if 'ooo'
               is '-', or if the -output option is missing).  If the
               file already exists, it will not be overwritten unless
               the -force option is also used.

 -force      = If the output file already exists, -force can be
               used to overwrite it.  If you want to use -force,
               it must come before -output on the command line.

EXAMPLES:

  Vecwarp -apar fred+tlrc -input fred.orig.coord > fred.tlrc.coord

This transforms the vectors defined in original coordinates to
Talairach coordinates, using the transformation previously defined
by AFNI markers.

  Vecwarp -apar fred+tlrc -input fred.tlrc.coord -backward > fred.test.coord

This does the reverse transformation; fred.test.coord should differ from
fred.orig.coord only by roundoff error.

Author: RWCox - October 2001

++ Compile date = Mar 13 2009




AFNI program: Xphace
Usage: Xphace im1 [im2]
Interactive image mergerizing via FFTs.
Image files are in PGM or JPEG format.



AFNI program: abut
ABUT:  put noncontiguous FMRI slices together [for to3d]

method: put zero-valued slices in the gaps, then
        replicate images to simulate thinner slices

Usage:
   abut [-dzin thickness] [-dzout thickness] [-root name]
        [-linear | -blocky] [-verbose] [-skip n+gap] ... images ...

   -dzin   the thickness value in mm;  if not given,
             taken to be 1.0 (in which case, the output
             thickness and gap sizes are simply relative
             to the slice thickness, rather than absolute)

   -dzout  the output slice thickness, usually smaller than
             the input thickness;  if not given, the program
             will compute a value (the smaller the ratio
             dzout/dzin is, the more slices will be output)

   -root   'name' is the root (or prefix) for the output
             slice filename;  for example, '-root fred.'
             will result in files fred.0001, fred.0002, ...

   -linear if present, this flag indicates that subdivided slices
             will be linearly interpolated rather than simply
             replicated -- this will make the results smoother
             in the through-slice direction (if dzout < dzin)

   -blocky similar to -linear, but uses AFNI's 'blocky' interpolation
             when possible to put out intermediate slices.
             Both interpolation options only apply when dzout < dzin
             and when an output slice has a non-gappy neighbor.

   -skip   'n+gap' indicates that a gap is to be inserted
             between input slices #n and #n+1, where n=1,2,...;
             for example, -skip 6+5.5 means put a gap of 5.5 mm
             between slices 6 and 7.

   More than one -skip option is allowed.  They must all occur
   before the list of input image filenames.
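
    For example (with hypothetical image files): if the input slices
    are 1.0 mm thick with a 2.5 mm gap between slices 4 and 5, and
    0.5 mm output slices are desired with linear interpolation:

       abut -dzin 1.0 -dzout 0.5 -skip 4+2.5 -linear -root fred. im.*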



AFNI program: adwarp
++ adwarp: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: R. W. Cox and B. D. Ward
Usage: adwarp [options]
Resamples a 'data parent' dataset to the grid defined by an
'anat parent' dataset.  The anat parent dataset must contain
in its .HEAD file the coordinate transformation (warp) needed
to bring the data parent dataset to the output grid.  This
program provides a batch implementation of the interactive
AFNI 'Write' buttons, one dataset at a time.

  Example: adwarp -apar anat+tlrc -dpar func+orig

  This will create dataset func+tlrc (.HEAD and .BRIK).

Options (so to speak):
----------------------
-apar aset  = Set the anat parent dataset to 'aset'.  This
                is a nonoptional option (must be present).

-dpar dset  = Set the data parent dataset to 'dset'.  This
                is a nonoptional option (must be present).
              Note: dset may contain a sub-brick selector,
              e.g.,  -dpar 'dset+orig[2,5,7]'             

-prefix ppp = Set the prefix for the output dataset to 'ppp'.
                The default is the prefix of 'dset'.

-dxyz ddd   = Set the grid spacing in the output dataset to
                'ddd' mm.  The default is 1 mm.

-verbose    = Print out progress reports.
-force      = Write out result even if it means deleting
                an existing dataset.  The default is not
                to overwrite.

-resam rrr  = Set resampling mode to 'rrr' for all sub-bricks
                     --- OR ---                              
-thr   rrr  = Set resampling mode to 'rrr' for threshold sub-bricks
-func  rrr  = Set resampling mode to 'rrr' for functional sub-bricks

The resampling mode 'rrr' must be one of the following:
                 NN = Nearest Neighbor
                 Li = Linear Interpolation
                 Cu = Cubic Interpolation
                 Bk = Blocky Interpolation

NOTE:  The default resampling mode is Li for all sub-bricks. 
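For example (with illustrative dataset names), to resample onto a
2 mm grid using cubic interpolation for functional sub-bricks and
nearest-neighbor for threshold sub-bricks:

  adwarp -apar anat+tlrc -dpar func+orig -dxyz 2 -func Cu -thr NN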

++ Compile date = Mar 13 2009




AFNI program: afni
GPL AFNI: Analysis of Functional NeuroImages, by RW Cox (rwcox@nih.gov)
This is Version AFNI_2008_07_18_1710
[[Precompiled binary linux_gcc32: Mar 13 2009]]

 ** This software was designed to be used only for research purposes. **
 ** Clinical uses are not recommended, and have never been evaluated. **
 ** This software comes with no warranties of any kind whatsoever,    **
 ** and may not be useful for anything.  Use it at your own risk!     **
 ** If these terms are not acceptable, you aren't allowed to use AFNI.**
 ** See 'Define Datamode->Misc->License Info' for more details.       **

 **** If you DO find AFNI useful, please cite this paper:
    RW Cox. AFNI: Software for analysis and visualization of
    functional magnetic resonance neuroimages.
    Computers and Biomedical Research, 29:162-173, 1996.

 **** If you find SUMA useful, citing this paper also would be nice:
    ZS Saad, RC Reynolds, B Argall, S Japee, RW Cox.
    2004 2nd IEEE International Symposium on Biomedical Imaging:
    Macro to Nano 2, 1510-1513, 2004.

----------------------------------------------------------------
USAGE 1: read in sessions of 3D datasets (created by to3d, etc.)
----------------------------------------------------------------
   afni [options] [session_directory ...]

   -purge       Conserve memory by purging data to disk.
                  [Use this if you run out of memory when running AFNI.]
                  [This will slow the code down, so use only if needed.]
   -posfunc     Set up the color 'pbar' to use only positive function values.
   -R           Recursively search each session_directory for more session
                  subdirectories.
       WARNING: This will descend the entire filesystem hierarchy from
                  each session_directory given on the command line.  On a
                  large disk, this may take a long time.  To limit the
                  recursion to 5 levels (for example), use -R5.
   -ignore N    Tells the program to 'ignore' the first N points in
                  time series for graphs and FIM calculations.
   -im1 N       Tells the program to use image N as the first one for
                  graphs and FIM calculations (same as '-ignore N-1')
   -tlrc_small  These options set whether to use the 'small' or 'big'
   -tlrc_big      Talairach brick size.  The compiled-in default for
                  the program is now 'big', unlike AFNI 1.0x.
   -no1D        Tells AFNI not to read *.1D timeseries files from
                  the dataset directories.  The *.1D files in the
                  directories listed in the AFNI_TSPATH environment
                  variable will still be read (if this variable is
                  not set, then './' will be scanned for *.1D files.)

   -noqual      Tells AFNI not to enforce the 'quality' checks when
                  making the transformations to +acpc and +tlrc.
   -unique      Tells the program to create a unique set of colors
                  for each AFNI controller window.  This allows
                  different datasets to be viewed with different
                  grayscales or colorscales.  Note that -unique
                  will only work on displays that support 12 bit
                  PseudoColor (e.g., SGI workstations) or TrueColor.
   -orient code Tells afni the orientation in which to display
                  x-y-z coordinates (upper left of control window).
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                   R = right-to-left         L = left-to-right
                   A = anterior-to-posterior P = posterior-to-anterior
                   I = inferior-to-superior  S = superior-to-inferior
                  The default code is RAI ==> DICOM order.  This can
                  be set with the environment variable AFNI_ORIENT.
                  As a special case, using the code 'flipped' is
                  equivalent to 'LPI' (this is for Steve Rao).
   -noplugins   Tells the program not to load plugins.
                  (Plugins can also be disabled by setting the
                   environment variable AFNI_NOPLUGINS.)
   -yesplugouts Tells the program to listen for plugouts.
                  (Plugouts can also be enabled by setting the
                   environment variable AFNI_YESPLUGOUTS.)
   -YESplugouts Makes the plugout code print out lots of messages
                  (useful for debugging a new plugout).
   -noplugouts  Tells the program NOT to listen for plugouts.
                  (This option is available to override
                   the AFNI_YESPLUGOUTS environment variable.)
   -skip_afnirc Tells the program NOT to read the file .afnirc
                  in the home directory.  See README.setup for
                  details on the use of .afnirc for initialization.
   -layout fn   Tells AFNI to read the initial windows layout from
                  file 'fn'.  If this option is not given, then
                  environment variable AFNI_LAYOUT_FILE is used.
                  If neither is present, then AFNI will do whatever
                  it feels like.

   -niml        If present, turns on listening for NIML-formatted
                  data from SUMA.  Can also be turned on by setting
                  environment variable AFNI_NIML_START to YES.
   -np port     If present, sets the NIML socket port number to 'port'.
                  This must be an integer between 1024 and 65535,
                  and must be the same as the '-np port' number given
                  to SUMA.  [default = 53211]

   -com ccc     This option lets you specify 'command strings' to
                  drive AFNI after the program startup is completed.
                  Legal command strings are described in the file
                  README.driver.  More than one '-com' option can
                  be used, and the commands will be executed in
                  the order they are given on the command line.
            N.B.: Most commands to AFNI contain spaces, so the 'ccc'
                  command strings will need to be enclosed in quotes.
   -comsep 'c'  Use character 'c' as a separator for commands.
                  In this way, you can put multiple commands in
                  a single '-com' option.  Default separator is ';'.
            N.B.: The command separator CANNOT be alphabetic or
                  numeric (a..z, A..Z, 0..9) or whitespace or a quote!
            N.B.: -comsep should come BEFORE any -com option that
                  uses a non-semicolon separator!
   Example: -com 'OPEN_WINDOW axialimage; SAVE_JPEG axialimage zork; QUIT'
   N.B.: You can also put startup commands (one per line) in
         the file '~/.afni.startup_script'.  For example,
            OPEN_WINDOW axialimage
         to always open the axial image window on startup.

 * If no session_directories are given, then the program will use
    the current working directory (i.e., './').
 * The maximum number of sessions is now set to  80.
 * The maximum number of datasets per session is 4096.
 * To change these maximums, you must edit file '3ddata.h' and then
    recompile this program.

-----------------------------------------------------
USAGE 2: read in images for 'quick and dirty' viewing
-----------------------------------------------------
(Most advanced features of AFNI will be disabled.)

   afni -im [options] im1 im2 im3 ...

   -im          Flag to read in images instead of 3D datasets
                   (Talairach and functional stuff won't work)
   -dy yratio   Tells afni the downscreen pixel size is 'yratio' times
                  the across-screen (x) pixel dimension (default=1.0)
   -dz zratio   Tells afni the slice thickness is 'zratio' times
                  the x pixel dimension (default=1.0)
   -orient code Tells afni the orientation of the input images.
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                   R = right-to-left         L = left-to-right
                   A = anterior-to-posterior P = posterior-to-anterior
                   I = inferior-to-superior  S = superior-to-inferior
                  (the default code is ASL ==> sagittal images).
                  Note that this use of '-orient' is different from
                  the use when viewing datasets.
   -resize      Tells afni that all images should be resized to fit
                  the size of the first one, if they don't already fit
                  (by default, images must all 'fit' or afni will stop)
   -datum type  Tells afni to convert input images into the type given:
                  byte, short, float, complex are the legal types.
 The image files (im1 ...) are the same formats as accepted by to3d.

 New image display options (alternatives to -im) [19 Oct 1999]:
   -tim         These options tell AFNI to arrange the input images
   -tim:nt      into an internal time-dependent dataset.  Suppose that
   -zim:nz      there are N input 2D slices on the command line.
              * -tim alone means these are N points in time (1 slice).
              * -tim:nt means there are nt points in time (nt is
                  an integer > 1), so there are N/nt slices in space,
                  and the images on the command line are input in
                  time order first (like -time:tz in to3d).
              * -zim:nz means there are nz slices in space (nz is
                  an integer > 1), so there are N/nz points in time,
                  and the images on the command line are input in
                  slice order first (like -time:zt in to3d).

 N.B.: You may wish to use the -ignore option to set the number of
        initial points to ignore in the time series graph if you use
        -tim or -zim, since there is no way to change this from
        within an AFNI run (the FIM menus are disabled).
 N.B.: The program 'aiv' (AFNI image viewer) can also be used to
        look at images.
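
  For example (an illustrative command): if 120 images are given on
   the command line, '-tim:20' treats them as 6 slices of 20 time
   points each, input in time order first:
      afni -tim:20 im.*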

-------------------------------------------------------
USAGE 3: read in datasets specified on the command line
-------------------------------------------------------

  afni -dset [options] dname1 dname2 ...

where 'dname1' is the name of a dataset, etc.  With this option, only
the chosen datasets are read in, and they are all put in the same
'session'.  Follower datasets are not created.

INPUT DATASET NAMES
-------------------
 An input dataset is specified using one of these forms:
    'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
 You can also add a sub-brick selection list after the end of the
 dataset name.  This allows only a subset of the sub-bricks to be
 read in (by default, all of a dataset's sub-bricks are input).
 A sub-brick selection list looks like one of the following forms:
   fred+orig[5]                     ==> use only sub-brick #5
   fred+orig[5,9,17]                ==> use #5, #9, and #17
   fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
   fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
 Sub-brick indexes start at 0.  You can use the character '$'
 to indicate the last sub-brick in a dataset; for example, you
 can select every third sub-brick by using the selection list
   fred+orig[0..$(3)]

 N.B.: The sub-bricks are read in the order specified, which may
 not be the order in the original dataset.  For example, using
   fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be input into memory
 in an interleaved fashion.  Using
   fred+orig[$..0]
 will reverse the order of the sub-bricks.

 N.B.: You may also use the syntax <a..b> after the name of an input
 dataset to restrict the range of values read in to the numerical
 values in a..b, inclusive.  For example,
    fred+orig[5..7]<100..200>
 creates a 3 sub-brick dataset in which values less than 100 or
 greater than 200 in the original are set to zero.
 If you use the <> sub-range selection without the [] sub-brick
 selection, it is the same as if you had put [0..$] in front of
 the sub-range selection.
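 For example, the following two specifications are equivalent:
    fred+orig<100..200>
    fred+orig[0..$]<100..200>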

 N.B.: Datasets using sub-brick/sub-range selectors are treated as:
  - 3D+time if the dataset is 3D+time and more than 1 brick is chosen
  - otherwise, as bucket datasets (-abuc or -fbuc)
    (in particular, fico, fitt, etc datasets are converted to fbuc!)

 N.B.: The characters '$ ( ) [ ] < >'  are special to the shell,
 so you will have to escape them.  This is most easily done by
 putting the entire dataset plus selection list inside forward
 single quotes, as in 'fred+orig[5..7,9]', or double quotes "x".
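
 For example, to read such a dataset into AFNI from the command line:
    afni -dset 'fred+orig[5..$(2)]<100..200>'
 (without the quotes, the shell would try to interpret the selector
 characters itself).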

CALCULATED DATASETS
-------------------
 Datasets may also be specified as runtime-generated results from
 program 3dcalc.  This type of dataset specifier is enclosed in
 quotes, and starts with the string '3dcalc(':
    '3dcalc( opt opt ... opt )'
 where each 'opt' is an option to program 3dcalc; this program
 is run to generate a dataset in the directory given by environment
 variable TMPDIR (default=/tmp).  This dataset is then read into
 memory, locked in place, and deleted from disk.  For example
    afni -dset '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'
 will let you look at the average of datasets r1+orig and r2+orig.
 N.B.: using this dataset input method will use lots of memory!
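 N.B.: if the default TMPDIR location is too small for the temporary
 dataset, you can point TMPDIR elsewhere before starting AFNI; in tcsh
 (the directory name here is just illustrative):
    setenv TMPDIR /bigdisk/tmp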

-------------------------------
GENERAL OPTIONS (for any usage)
-------------------------------

   -q           Tells afni to be 'quiet' on startup
   -Dname=val   Sets environment variable 'name' to 'val' inside AFNI;
                   will supersede any value set in .afnirc.  (An example
                   follows this option list.)
   -gamma gg    Tells afni that the gamma correction factor for the
                  monitor is 'gg' (default gg is 1.0; greater than
                  1.0 makes the image contrast larger -- this may
                  also be adjusted interactively)
   -install     Tells afni to install a new X11 Colormap.  This only
                  means something for PseudoColor displays.  Also, it
                   usually causes the notorious 'technicolor' effect.
   -ncolors nn  Tells afni to use 'nn' gray levels for the image
                  displays (default is 80)
   -xtwarns     Tells afni to show any Xt warning messages that may
                  occur; the default is to suppress these messages.
   -XTWARNS     Trigger a debug trace when an Xt warning happens.
   -tbar name   Uses 'name' instead of 'AFNI' in window titlebars.
   -flipim and  The '-flipim' option tells afni to display images in the
   -noflipim      'flipped' radiology convention (left on the right).
                  The '-noflipim' option tells afni to display left on
                  the left, as neuroscientists generally prefer.  This
                  latter mode can also be set by the Unix environment
                  variable 'AFNI_LEFT_IS_LEFT'.  The '-flipim' mode is
                  the default.
   -trace       Turns routine call tracing on, for debugging purposes.
   -TRACE       Turns even more verbose tracing on, for more debugging.
   -nomall      Disables use of the mcw_malloc() library routines.
   -motif_ver   Show the applied motif version string.
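
For example, one might turn on the left-is-left image display (cf.
'-flipim' above) via the environment variable mentioned there (the
dataset name here is just illustrative):

   afni -DAFNI_LEFT_IS_LEFT=YES -dset anat+orig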

N.B.: Many of these options, as well as the initial color set up,
      can be controlled by appropriate X11 resources.  See the
      files AFNI.Xdefaults and README.environment for instructions
      and examples.  For more help on all AFNI programs, see
        http://afni.nimh.nih.gov/afni/doc/program_help/index.html

--------------------------------------
Educational and Informational Material
--------------------------------------
The presentations used in our AFNI teaching classes at the NIH can
all be found at
 http://afni.nimh.nih.gov/pub/dist/edu/latest/      (PowerPoint directories)
 http://afni.nimh.nih.gov/pub/dist/edu/latest/afni_handouts/ (PDF directory)
And for the interactive AFNI program in particular, see
 http://afni.nimh.nih.gov/pub/dist/edu/latest/afni01_intro/afni01_intro.pdf
 http://afni.nimh.nih.gov/pub/dist/edu/latest/afni03_interactive/afni03_interactive.pdf
For help with AFNI problems, and to keep up with AFNI news, please use the
AFNI Message Board:
 http://afni.nimh.nih.gov/afni/community/board/
If an AFNI program crashes, please include the EXACT error messages it outputs
in your message board posting, as well as any other information needed to
reproduce the problem.  Just saying 'program X crashed, what's the issue?'
is not helpful!

For some fun, see this image:
 http://afni.nimh.nih.gov/pub/dist/doc/program_help/images/afni_splashes.gif


-----------------------------------------
REFERENCES and some light bedtime reading
-----------------------------------------
The following papers describe some of the components of the AFNI package.

RW Cox.  AFNI: Software for analysis and visualization of functional
  magnetic resonance neuroimages.  Computers and Biomedical Research,
  29: 162-173, 1996.

  * The first AFNI paper, and the one I prefer you cite if you want to
    refer to the AFNI package as a whole.
 ** http://afni.nimh.nih.gov/sscc/rwcox/papers/CBM_1996.pdf

RW Cox, A Jesmanowicz, and JS Hyde.  Real-time functional magnetic
  resonance imaging.  Magnetic Resonance in Medicine, 33: 230-236, 1995.

  * The first paper on realtime FMRI; describes the algorithm used in
    3dfim+, the interactive FIM calculations, and in the realtime plugin.
  * http://afni.nimh.nih.gov/sscc/rwcox/papers/Realtime_FMRI.pdf

RW Cox and JS Hyde.  Software tools for analysis and visualization of
  FMRI Data.  NMR in Biomedicine, 10: 171-178, 1997.

  * A second paper about AFNI and design issues for FMRI software tools.

RW Cox and A Jesmanowicz.  Real-time 3D image registration for
  functional MRI.  Magnetic Resonance in Medicine, 42: 1014-1018, 1999.

  * Describes the algorithm used for image registration in 3dvolreg
    and in the realtime plugin.
  * I think the first paper to demonstrate realtime MRI volume image
    registration running on a standard workstation (not a supercomputer).
  * http://afni.nimh.nih.gov/sscc/rwcox/papers/RealtimeRegistration.pdf

ZS Saad, KM Ropella, RW Cox, and EA DeYoe.  Analysis and use of FMRI
  response delays.  Human Brain Mapping, 13: 74-93, 2001.

  * Describes the algorithm used in 3ddelay (cf. '3ddelay -help').
  * http://afni.nimh.nih.gov/sscc/rwcox/papers/Delays2001.pdf

ZS Saad, G Chen, RC Reynolds, PP Christidis, KR Hammett, PSF Bellgowan,
  and RW Cox.  FIAC Analysis According to AFNI and SUMA.
  Human Brain Mapping, 27: 417-424, 2006.

  * Describes how we used AFNI to analyze the FIAC contest data.
  * http://dx.doi.org/10.1002/hbm.20247
  * http://afni.nimh.nih.gov/sscc/rwcox/papers/FIAC_AFNI_2006.pdf

ZS Saad, DR Glen, G Chen, MS Beauchamp, R Desai, RW Cox.
  A new method for improving functional-to-structural MRI alignment
  using local Pearson correlation.  NeuroImage 44: 839-848, 2009.

  * Describes the algorithm used in 3dAllineate (and thence in
    align_epi_anat.py) for EPI-to-structural volume image registration.
  * http://dx.doi.org/10.1016/j.neuroimage.2008.09.037
  * http://afni.nimh.nih.gov/sscc/rwcox/papers/LocalPearson2009.pdf

POSTERS on varied subjects from the AFNI development group can be found at
  * http://afni.nimh.nih.gov/sscc/posters

++ Compile date = Mar 13 2009




AFNI program: afni_history
afni_history:           show AFNI updates per user, dates or levels

This program is meant to display a log of updates to AFNI code, the
website, educational material, etc.  Users can specify a level of
importance, the author, program or how recent the changes are.

The levels of importance go from 1 to 5, with meanings:
       1 - users would not care
       2 - of little importance, though some users might care
       3 - fairly important
       4 - a big change or new program
       5 - IMPORTANT: we expect users to know

-----------------------------------------------------------------

common examples:

  0. get help

     a. afni_history -help

  1. display all of the history, possibly subject to recent days/entries

     a. afni_history
     b. afni_history -past_days 5
     c. afni_history -past_months 6
     d. afni_history -past_entries 1

  2. select a specific type, level or minimum level

     a. afni_history -level 2
     b. afni_history -min_level 3 -type BUG_FIX
     c. afni_history -type 1 -min_level 3 -past_years 1

  3. select a specific author or program

     a. afni_history -author rickr
     b. afni_history -program afni_proc.py

  4. select level 3+ suma updates from ziad over the past year

     a. afni_history -author ziad -min_level 3 -program suma

  5. generate a web-page, maybe from the past year at a minimum level

     a. afni_history -html -reverse > afni_hist_all.html
     b. afni_history -html -reverse -min_level 2  > afni_hist_level2.html
     c. afni_history -html -reverse -min_level 3  > afni_hist_level3.html
     d. afni_history -html -reverse -min_level 4  > afni_hist_level4.html

-----------------------------------------------------------------

informational options: 

  -help                    : show this help
  -hist                    : show this program's history
  -list_authors            : show the list of valid authors
  -list_types              : show the list of valid change types
  -ver                     : show this program's version


output restriction options: 

  -author AUTHOR           : restrict output to the given AUTHOR
  -level LEVEL             : restrict output to the given LEVEL
  -min_level LEVEL         : restrict output to at least level LEVEL
  -program PROGRAM         : restrict output to the given PROGRAM

  -past_entries ENTRIES    : restrict output to final ENTRIES entries
  -past_days DAYS          : restrict output to the past DAYS days
  -past_months MONTHS      : restrict output to the past MONTHS months
  -past_years YEARS        : restrict output to the past YEARS years

  -type TYPE               : restrict output to the given TYPE
                             (TYPE = 0..5, or strings 'NEW_PROG', etc.)
                             e.g.  -type NEW_ENV
                             e.g.  -type BUG_FIX

general options: 

  -html                    : add html formatting
  -reverse                 : reverse the sorting order
                             (sort is by date, author, level, program)
  -verb LEVEL              : request verbose output
                             (LEVEL is from 0-6)


                                           Author: Rick Reynolds
                                           Thanks to: Ziad, Bob




AFNI program: afni_proc.py

    ===========================================================================
    afni_proc.py        - generate a tcsh script for an AFNI process stream

    This python script can generate a processing script via a command-line
    interface, with an optional question/answer session (-ask_me), or by a tk
    GUI (eventually).

    The user should provide at least the input datasets (-dsets) and stimulus
    files (-regress_stim_*), in order to create an output script.  See the
    'DEFAULTS' section for a description of the default options for each block.

    The output script, when executed, will create a results directory, copy
    input files into it, and perform all processing there.  So the user can
    delete the results directory and re-run the script at their whim.

    Note that the user need not actually run the output script.  The user
    should feel free to modify the script for their own evil purposes, before
    running it.

    The text interface can be accessed via the -ask_me option.  It invokes a
    question & answer session, during which this program sets user options on
    the fly.  The user may elect to enter some of the options on the command
    line, even if using -ask_me.  See "-ask_me EXAMPLES", below.

    --------------------------------------------------
    TIMING FILE NOTE:

    One issue that the user must be sure of is the timing of the stimulus
    files (whether -regress_stim_files or -regress_stim_times is used).

    The 'tcat' step will remove the number of pre-steady-state TRs that the
    user specifies (defaulting to 0).  The stimulus files, provided by the
    user, must match datasets that have had such TRs removed (i.e. the stim
    files should start _after_ steady state has been reached).
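
    For example, if 2 pre-steady-state TRs are removed and TR = 2.0s,
    stimulus times measured from the original start of each run must be
    shifted back by 4 seconds.  For a simple one-column timing file, one
    way to do that shift (file names here are just illustrative) is:

        1deval -a stim_onsets.1D -expr 'a-4' > stim_onsets_shifted.1D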

    --------------------------------------------------
    MASKING NOTE:

    The 'default' operation of afni_proc.py is to apply a union of 3dAutomask
    datasets to the EPI data.

                ** This is no longer recommended. **

    --> Recommended: keep the 'mask' block, but apply '-regress_no_mask'.

    It seems much better not to mask the regression data in the single-subject
    analysis at all, send _all_ of the results to group space, and apply an
    anatomically-based mask there.  That could be computed from the @auto_tlrc
    reference dataset or from the average of skull-stripped subject anatomies.

    Since subjects have varying degrees of signal dropout in valid brain areas
    of the EPI data, the resulting intersection mask that would be required in
    group space may exclude edge regions that people may be interested in.

    Also, it is helpful to see if much 'activation' appears outside the brain.
    This could be due to scanner or interpolation artifacts, and is useful to
    note, rather than to simply mask out and never see.

    Rather than letting 3dAutomask decide which brain areas should not be 
    considered valid, create a mask based on the anatomy _after_ the results
    have been warped to a standard group space.  Then perhaps dilate the mask
    by one voxel.  Example #11 from '3dcalc -help' shows how one might dilate.
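
    For reference, a dilation command along the lines of that example
    might look like this (dataset names are illustrative), using the
    3dcalc differential subscripts a+i, a-i, etc.:

        3dcalc -a mask+tlrc -b a+i -c a-i -d a+j -e a-j -f a+k -g a-k \
               -expr 'amongst(1,a,b,c,d,e,f,g)' -prefix mask_dilated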

    ---

    However, since a mask dataset is necessary when computing blur estimates
    from the epi and errts datasets, the 'mask' block should be left in the
    analysis stream.  To refrain from applying it in the 'scale' and 'regress'
    blocks, add the '-regress_no_mask' option.

    ---

    Note that if no mask were applied in the 'scaling' step, large percent
    changes could result.  Because large values would be a detriment to the
    numerical resolution of the scaled short data, the default is to truncate
    scaled values at 200 (percent), which should not occur in the brain.

    See also -blocks, -regress_no_mask, -regress_est_blur_epits and
    -regress_est_blur_errts.

    --------------------------------------------------
    NOTE on having runs of different lengths:

    In the case that the EPI datasets are not all of the same length, here
    are some issues that may come up, listed by relevant option:

        -volreg_align_to        If aligning to "last", afni_proc.py might get
                                an inaccurate index for the volreg -base.

        -regress_polort         If this option is not used, then the degree of
                                polynomial used for the baseline will come from
                                the first run.

        -regress_est_blur_epits This may fail, as afni_proc.py may have trouble
                                teasing the different runs apart from the errts
                                dataset.

        -regress_use_stim_files This may fail, as make_stim_times.py is not
                                currently prepared to handle runs of different
                                lengths.

    --------------------------------------------------
    PROCESSING STEPS (of the output script):

    The output script will go through the following steps, unless the user
    specifies otherwise.

    automatic steps (the tcsh script will always perform these):

        setup       : check subject arg, set run list, create output dir, and
                      copy stim files
        tcat        : copy input datasets and remove unwanted initial TRs

    default steps (the user may skip these, or alter their order):

        tshift      : slice timing alignment on volumes (default is -tzero 0)
        volreg      : volume registration (default is to third volume)
        blur        : blur each volume (default is 4mm fwhm)
        mask        : create a 'brain' mask from the EPI data (dilate 1 voxel)
        scale       : scale each run mean to 100, for each voxel (max of 200)
        regress     : regression analysis (default is GAM, peak 1, with motion
                      params)

    optional steps (the default is _not_ to apply these blocks):

        despike     : truncate spikes in each voxel's time series
        empty       : placeholder for some user command (using 3dTcat as a sample)

    --------------------------------------------------
    EXAMPLES (options can be provided in any order):

        1. Minimum use, provide datasets and stim files (or stim_times files).
           Note that a dataset suffix (e.g. HEAD) must be used with wildcards,
           so that datasets are not applied twice.  In this case, a stim_file
           with many columns is given, allowing the script to change it to
           stim_times files.

                afni_proc.py -dsets epiRT*.HEAD              \
                             -regress_stim_files stims.1D

           or without any wildcard, the .HEAD suffix is not needed:

                afni_proc.py -dsets epiRT_r1+orig epiRT_r2+orig epiRT_r3+orig \
                             -regress_stim_files stims.1D

     ** The following examples can be run from the AFNI_data2 directory, and
        are examples of how one might process the data for subject ED.

        Because the stimuli are on a 1-second grid, while the EPI data is on a
        2-second grid (TR = 2.0), we ran make_stim_times.py to generate the
        stim_times files (which are now distributed in AFNI_data2) as follows:

            make_stim_times.py -prefix stim_times -tr 1.0 -nruns 10 -nt 272 \
                   -files misc_files/all_stims.1D

        If your AFNI_data2 directory does not have misc_files/stim_times.*,
        then you can run the make_stim_times.py command from AFNI_data2.


        2. This example shows basic usage, with the default GAM regressor.
           We specify the subject ID, removal of the first 2 TRs of
           each run (before steady state), and volume alignment to the
           start of the runs (via '-volreg_align_to first').

           The script name will default to proc.ED, based on -subj_id.

                afni_proc.py -dsets ED/ED_r??+orig.HEAD      \
                             -subj_id ED                     \
                             -tcat_remove_first_trs 2        \
                             -volreg_align_to first          \
                             -regress_stim_times misc_files/stim_times.*.1D

        3. Similar to #2, but add labels for the 4 stim types, and apply TENT
           as the basis function to get 14 seconds of response, on a 2-second
           TR grid.  Also, copy the anat dataset(s) to the results directory,
           and align volumes to the third TR, instead of the first.

                afni_proc.py -dsets ED/ED_r??+orig.HEAD                      \
                             -subj_id ED.8                                   \
                             -copy_anat ED/EDspgr                            \
                             -tcat_remove_first_trs 2                        \
                             -volreg_align_to third                          \
                             -regress_stim_times misc_files/stim_times.*.1D  \
                             -regress_stim_labels ToolMovie HumanMovie       \
                                                  ToolPoint HumanPoint       \
                             -regress_basis 'TENT(0,14,8)'

        4. This is the current AFNI_data2 class example.

           Similar to #3, but append a single -regress_opts_3dD option to
           include contrasts.  The intention is to create a script very much
           like analyze_ht05.  Note that the contrast files have been renamed
           from contrast*.1D to glt*.txt, though the contents have not changed.

           afni_proc.py -dsets ED/ED_r??+orig.HEAD                         \
                  -subj_id ED.8.glt                                        \
                  -copy_anat ED/EDspgr                                     \
                  -tcat_remove_first_trs 2                                 \
                  -volreg_align_to third                                   \
                  -regress_stim_times misc_files/stim_times.*.1D           \
                  -regress_stim_labels ToolMovie HumanMovie                \
                                       ToolPoint HumanPoint                \
                  -regress_basis 'TENT(0,14,8)'                            \
                  -regress_opts_3dD                                        \
                      -gltsym ../misc_files/glt1.txt -glt_label 1 FullF    \
                      -gltsym ../misc_files/glt2.txt -glt_label 2 HvsT     \
                      -gltsym ../misc_files/glt3.txt -glt_label 3 MvsP     \
                      -gltsym ../misc_files/glt4.txt -glt_label 4 HMvsHP   \
                      -gltsym ../misc_files/glt5.txt -glt_label 5 TMvsTP   \
                      -gltsym ../misc_files/glt6.txt -glt_label 6 HPvsTP   \
                      -gltsym ../misc_files/glt7.txt -glt_label 7 HMvsTM

        5. Similar to #4, but replace some glt files with SYM, and request
           to run @auto_tlrc.

           Also, compute estimates of the smoothness in both the EPI (all_runs)
           and errts (via -regress_est_blur_*).

           afni_proc.py -dsets ED/ED_r??+orig.HEAD                           \
              -subj_id ED.8.gltsym                                           \
              -copy_anat ED/EDspgr                                           \
              -tlrc_anat                                                     \
              -tcat_remove_first_trs 2                                       \
              -volreg_align_to third                                         \
              -regress_stim_times misc_files/stim_times.*.1D                 \
              -regress_stim_labels ToolMovie HumanMovie                      \
                                   ToolPoint HumanPoint                      \
              -regress_basis 'TENT(0,14,8)'                                  \
              -regress_opts_3dD                                              \
                -gltsym 'SYM: -ToolMovie +HumanMovie -ToolPoint +HumanPoint' \
                -glt_label 1 HvsT                                            \
                -gltsym 'SYM: +HumanMovie -HumanPoint'                       \
                -glt_label 2 HMvsHP                                          \
              -regress_est_blur_epits                                        \
              -regress_est_blur_errts

        6. Similar to #3, but find the response for the TENT functions on a
           1-second grid, such as how the data is processed in the class
           script, s1.analyze_ht05.  This is similar to using '-stim_nptr 2',
           and requires the addition of 3dDeconvolve option '-TR_times 1.0' to  
           see the -iresp output on a 1.0 second grid.

                afni_proc.py -dsets ED/ED_r??+orig.HEAD                      \
                             -subj_id ED.15                                  \
                             -copy_anat ED/EDspgr                            \
                             -tcat_remove_first_trs 2                        \
                             -volreg_align_to third                          \
                             -regress_stim_times misc_files/stim_times.*.1D  \
                             -regress_stim_labels ToolMovie HumanMovie       \
                                                  ToolPoint HumanPoint       \
                             -regress_basis 'TENT(0,14,15)'                  \
                             -regress_opts_3dD -TR_times 1.0

        7. Similar to #2, but add the despike block, and skip the tshift and
           mask blocks (so the others must be specified).  The user wants to
           apply a block that afni_proc.py does not deal with, putting it after
           the 'despike' block.  So 'empty' is given after 'despike'.

           Also, apply a 4 second BLOCK response function, prevent the output
           of a fit time series dataset, run @auto_tlrc at the end, and specify
           an output script name.

                afni_proc.py -dsets ED/ED_r??+orig.HEAD                   \
                         -blocks despike empty volreg blur scale regress  \
                         -script process_ED.b4                            \
                         -subj_id ED.b4                                   \
                         -copy_anat ED/EDspgr                             \
                         -tlrc_anat                                       \
                         -tcat_remove_first_trs 2                         \
                         -volreg_align_to third                           \
                         -regress_stim_times misc_files/stim_times.*.1D   \
                         -regress_basis 'BLOCK(4,1)'                      \
                         -regress_no_fitts

    --------------------------------------------------
    -ask_me EXAMPLES:

        a1. Apply -ask_me in the most basic form, with no other options.

                afni_proc.py -ask_me

        a2. Supply input datasets.

                afni_proc.py -ask_me -dsets ED/ED_r*.HEAD

        a3. Same as a2, but supply the datasets in expanded form.
            No suffix (.HEAD) is needed when wildcards are not used.

                afni_proc.py -ask_me                          \
                     -dsets ED/ED_r01+orig ED/ED_r02+orig     \
                            ED/ED_r03+orig ED/ED_r04+orig     \
                            ED/ED_r05+orig ED/ED_r06+orig     \
                            ED/ED_r07+orig ED/ED_r08+orig     \
                            ED/ED_r09+orig ED/ED_r10+orig

        a4. Supply datasets, stim_times files and labels.

                afni_proc.py -ask_me                                    \
                        -dsets ED/ED_r*.HEAD                            \
                        -regress_stim_times misc_files/stim_times.*.1D  \
                        -regress_stim_labels ToolMovie HumanMovie       \
                                             ToolPoint HumanPoint

    --------------------------------------------------
    DEFAULTS: basic defaults for each block (not all defaults)

        setup:    - use 'SUBJ' for the subject id
                        (option: -subj_id SUBJ)
                  - create a tcsh script called 'proc_subj'
                        (option: -script proc_subj)
                  - use results directory 'SUBJ.results'
                        (option: -out_dir SUBJ.results)

        tcat:     - do not remove any of the first TRs

        empty:    - do nothing (just copy the data using 3dTcat)

        despike:  - NOTE: by default, this block is _not_ used
                  - use no extra options (so automask is default)

        tshift:   - align slices to the beginning of the TR
                  - use quintic interpolation for time series resampling
                        (option: -tshift_interp -quintic)

        volreg:   - align to third volume of first run, -zpad 1
                        (option: -volreg_align_to third)
                        (option: -volreg_zpad 1)
                  - use cubic interpolation for volume resampling
                        (option: -volreg_interp -cubic)

        blur:     - blur data using a 4 mm FWHM filter
                        (option: -blur_filter -1blur_fwhm)
                        (option: -blur_size 4)

        mask:     - apply union of masks from 3dAutomask on each run

        scale:    - scale each voxel to mean of 100, clip values at 200

        regress:  - use GAM regressor for each stim
                        (option: -regress_basis)
                  - compute the baseline polynomial degree, based on run length
                        (e.g. option: -regress_polort 2)
                  - output fit time series
                  - output ideal curves for GAM/BLOCK regressors
                  - output iresp curves for non-GAM/non-BLOCK regressors

    --------------------------------------------------
    OPTIONS: (information options, general options, block options)
             (block options are ordered by block)

        ------------ information options ------------

        -help                   : show this help
        -hist                   : show the module history
        -show_valid_opts        : show all valid options (brief format)
        -ver                    : show the version number

        ------------ general execution and setup options ------------

        -ask_me                 : ask the user about the basic options to apply

            When this option is used, the program will ask the user how they
            wish to set the basic options.  The intention is to give the user
            a feel for what options to apply (without using -ask_me).

        -bash                   : show example execution command in bash form

            After the script file is created, this program suggests how to run
            it (piping stdout/stderr through 'tee').  If the user is running
            the bash shell, this option will suggest the 'bash' form of a
            command to execute the newly created script.

            example of tcsh form for execution:

                tcsh -x proc.ED.8.glt |& tee output.proc.ED.8.glt

            example of bash form for execution:

                tcsh -x proc.ED.8.glt 2>&1 | tee output.proc.ED.8.glt

            Please see "man bash" or "man tee" for more information.

        -blocks BLOCK1 ...      : specify the processing blocks to apply

                e.g. -blocks volreg blur scale regress
                e.g. -blocks despike tshift volreg blur scale regress
                default: tshift volreg blur mask scale regress

            The user may apply this option to specify which processing blocks
            are to be included in the output script.  The order of the blocks
            may be varied, and blocks may be skipped.

            See also '-do_block' (e.g. '-do_block despike').

        -copy_anat ANAT         : copy the ANAT dataset to the results dir

                e.g. -copy_anat Elvis/mprage+orig

            This will apply 3dcopy to copy the anatomical dataset(s) to the
            results directory.  Note that if a +view is not given, 3dcopy will
            attempt to copy +acpc and +tlrc datasets, also.

            See also '3dcopy -help'.

        -copy_files file1 ...   : copy file1, etc. into the results directory

                e.g. -copy_files glt_AvsB.txt glt_BvsC.1D glt_eat_cheese.txt
                e.g. -copy_files contrasts/glt_*.txt

            This option allows the user to copy some list of files into the
            results directory.  This would happen before the tcat block, so
            such files may be used for other commands in the script (such as
            contrast files in 3dDeconvolve, via -regress_opts_3dD).

        -do_block BLOCK_NAME ...: add extra blocks in their default positions

                e.g. -do_block despike

            Currently, the 'despike' block is the only block not applied by
            default (in the processing script).  Any block not included in
            the default list can be added via this option.

            The default position for 'despike' is between 'tcat' and 'tshift'.

            This option should not be used with '-blocks'.

            See also '-blocks'.

        -dsets dset1 dset2 ...  : (REQUIRED) specify EPI run datasets

                e.g. -dsets Elvis_run1+orig Elvis_run2+orig Elvis_run3+orig
                e.g. -dsets Elvis_run*.HEAD

            The user must specify the list of EPI run datasets to analyze.
            When the runs are processed, they will be written to start with
            run 1, regardless of whether the input runs were just 6, 7 and 21.
        
            Note that when using a wildcard it is essential for the EPI
            datasets to be alphabetical, as that is how the shell will list
            them on the command line.  For instance, epi_run1+orig through
            epi_run11+orig is not alphabetical.  If they were specified via
            wildcard their order would end up as run1 run10 run11 run2 ...

            Note also that when using a wildcard it is essential to specify
            the datasets suffix, so that the shell doesn't put both the .BRIK
            and .HEAD filenames on the command line (which would make it twice
            as many runs of data).
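
            For example (following the Elvis example above):

                -dsets Elvis_run*        (BAD: may match both .HEAD and .BRIK)
                -dsets Elvis_run*.HEAD   (good: each run appears exactly once)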

        -keep_rm_files          : do not have script delete rm.* files at end

                e.g. -keep_rm_files

            The output script may generate temporary files in a block, which
            would be given names with prefix 'rm.'.  By default, those files
            are deleted at the end of the script.  This option blocks that
            deletion.

        -move_preproc_files     : move preprocessing files to preproc.data dir

            At the end of the output script, create a 'preproc.data' directory,
            and move most of the files there (dfile, outcount, pb*, rm*).

            See also -remove_preproc_files.

        -no_proc_command        : do not print afni_proc.py command in script

                e.g. -no_proc_command

            If this option is applied, the command used to generate the output
            script will not be stored at the end of the script.

        -out_dir DIR            : specify the output directory for the script

                e.g. -out_dir ED_results
                default: SUBJ.results

            The AFNI processing script will create this directory and perform
            all processing in it.

        -remove_preproc_files   : delete pre-processed data

            At the end of the output script, delete the intermediate data (to
            save disk space).  Delete dfile*, outcount*, pb* and rm*.

            See also -move_preproc_files.

        -script SCRIPT_NAME     : specify the name of the resulting script

                e.g. -script ED.process.script
                default: proc_subj

            The output of this program is a script file.  This option can be
            used to specify the name of that file.

            See also -scr_overwrite, -subj_id.

        -scr_overwrite          : overwrite any existing script

                e.g. -scr_overwrite

            If the output script file already exists, it will be overwritten
            only if the user applies this option.

            See also -script.

        -subj_id SUBJECT_ID     : specify the subject ID for the script

                e.g. -subj_id elvis
                default: SUBJ

            The subject ID is used in dataset names and in the output directory
            name (unless -out_dir is used).  This option allows the user to
            apply an appropriate naming convention.

        -tlrc_anat              : run @auto_tlrc on '-copy_anat' dataset

                e.g. -tlrc_anat

            After the regression block, run @auto_tlrc on the anatomical
            dataset provided by '-copy_anat'.  By default, warp the anat to
            align with TT_N27+tlrc, unless the '-tlrc_base' option is given.

            The -copy_anat option specifies which anatomy to transform.

            Please see '@auto_tlrc -help' for more information.
            See also -copy_anat, -tlrc_base, -tlrc_no_ss.

        -tlrc_base BASE_DSET    : run "@auto_tlrc -base BASE_DSET"

                e.g. -tlrc_base TT_icbm452+tlrc
                default: -tlrc_base TT_N27+tlrc

            This option is used to supply an alternate -base dataset for
            @auto_tlrc.  Otherwise, TT_N27+tlrc will be used.

            Note that the default operation of @auto_tlrc is to "skull strip"
            the input dataset.  If this is not appropriate, consider also the
            '-tlrc_no_ss' option.

            Please see '@auto_tlrc -help' for more information.
            See also -tlrc_anat, -tlrc_no_ss.

        -tlrc_no_ss             : add the -no_ss option to @auto_tlrc

                e.g. -tlrc_no_ss

            This option is used to tell @auto_tlrc not to perform the skull
            strip operation.

            Please see '@auto_tlrc -help' for more information.

        -tlrc_rmode RMODE       : apply RMODE resampling in @auto_tlrc

                e.g. -tlrc_rmode NN

            This option is used to apply '-rmode RMODE' in @auto_tlrc.

            Please see '@auto_tlrc -help' for more information.

        -tlrc_suffix SUFFIX     : apply SUFFIX to result of @auto_tlrc

                e.g. -tlrc_suffix auto_tlrc

            This option is used to apply '-suffix SUFFIX' in @auto_tlrc.

            Please see '@auto_tlrc -help' for more information.

        -verb LEVEL             : specify the verbosity of this script

                e.g. -verb 2
                default: 1

            Print out extra information during execution.

        ------------ block options ------------

        These options pertain to individual processing blocks.  Each option
        starts with the block name.

        -tcat_remove_first_trs NUM : specify how many TRs to remove from runs

                e.g. -tcat_remove_first_trs 3
                default: 0

            Since it takes several seconds for the magnetization to reach a
            steady state (at the beginning of each run), the initial TRs of
            each run may have values that are significantly greater than the
            later ones.  This option is used to specify how many TRs to
            remove from the beginning of every run.
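
            For example, with '-tcat_remove_first_trs 2', the tcat block
            would contain a command of this form (dataset names follow the
            earlier ED examples, and are just illustrative):

                3dTcat -prefix pb00.ED.r01.tcat ED/ED_r01+orig'[2..$]'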

        -despike_opts_3dDes OPTS... : specify additional options for 3dDespike

                e.g. -despike_opts_3dDes -nomask -ignore 2

            By default, 3dDespike is used with only -prefix.  Any other options
            must be applied via -despike_opts_3dDes.

            Note that the despike block is not applied by default.  To apply
            despike in the processing script, use either '-do_block despike'
            or '-blocks ... despike ...'.

            Please see '3dDespike -help' for more information.
            See also '-do_block', '-blocks'.

        -tshift_align_to TSHIFT_OP : specify 3dTshift alignment option

                e.g. -tshift_align_to -slice 14
                default: -tzero 0

            By default, each time series is aligned to the beginning of the
            TR.  This option allows the user to change the alignment, and
            applies the option parameters directly to the 3dTshift command
            in the output script.

            It is likely that the user will use either '-slice SLICE_NUM' or
            '-tzero ZERO_TIME'.

            Note that when aligning to an offset other than the beginning of
            the TR while applying the -regress_stim_files option, it may be
            necessary to also apply -regress_stim_times_offset, to shift the
            stimulus timing to later within each TR.

            Please see '3dTshift -help' for more information.
            See also '-regress_stim_times_offset'.
            
        -tshift_interp METHOD   : specify the interpolation method for tshift

                e.g. -tshift_interp -Fourier
                e.g. -tshift_interp -cubic
                default: -quintic

            Please see '3dTshift -help' for more information.

        -tshift_opts_ts OPTS ... : specify extra options for 3dTshift

                e.g. -tshift_opts_ts -tpattern alt+z

            This option allows the user to add extra options to the 3dTshift
            command.  Note that only one -tshift_opts_ts should be applied,
            which may be used for multiple 3dTshift options.

            Please see '3dTshift -help' for more information.

        -volreg_align_to POSN   : specify the base position for volume reg

                e.g. -volreg_align_to last
                default: third

            This option takes 'first', 'third' or 'last' as a parameter.
            It specifies whether the EPI volumes are registered to the first
            or third volume (of the first run) or the last volume (of the last
            run).  Choose 'first' or 'third' when the anatomy was acquired
            before the EPI data, and 'last' when the anatomy was acquired
            after the EPI data.

            The default of 'third' was chosen to go a little farther into the
            steady state data.

            Note that this is done after removing any volumes in the initial
            tcat operation.

            Please see '3dvolreg -help' for more information.
            See also -tcat_remove_first_trs, -volreg_base_ind and
            -volreg_base_dset.

        -volreg_base_dset DSET  : specify dset/sub-brick for volreg base

                e.g. -volreg_base_dset /users/rickr/subj10/vreg_base+orig'[4]'

            This option allows the user to specify an external dataset for the
            volreg base.  The user should apply sub-brick selection if the
            dataset has more than one volume.

            Since this volume is (currently) not being copied to the results
            directory, consider specifying it with a full pathname.

        -volreg_base_ind RUN SUB : specify volume/brick indices for base

                e.g. -volreg_base_ind 10 123
                default: 0 0

            This option allows the user to specify exactly which dataset and
            sub-brick to use as the base registration image.  Note that the
            SUB index applies AFTER the removal of pre-steady state images.

          * The RUN number is 1-based, matching the run list in the output
            shell script.  The SUB index is 0-based, matching the sub-brick of
            EPI time series #RUN.  Yes, one is 1-based, the other is 0-based.
            Life is hard.

            The user can apply only one of the -volreg_align_to and
            -volreg_base_ind options.

            See also -volreg_align_to, -tcat_remove_first_trs and
            -volreg_base_dset.

        -volreg_interp METHOD   : specify the interpolation method for volreg

                e.g. -volreg_interp -quintic
                e.g. -volreg_interp -Fourier
                default: -cubic

            Please see '3dvolreg -help' for more information.

        -volreg_opts_vr OPTS ... : specify extra options for 3dvolreg

                e.g. -volreg_opts_vr -noclip -nomaxdisp

            This option allows the user to add extra options to the 3dvolreg
            command.  Note that only one -volreg_opts_vr should be applied,
            which may be used for multiple 3dvolreg options.

            Please see '3dvolreg -help' for more information.

        -volreg_zpad N_SLICES   : specify number of slices for -zpad

                e.g. -volreg_zpad 4
                default: -volreg_zpad 1

            This option allows the user to specify the number of slices applied
            via the -zpad option to 3dvolreg.

        -blur_filter FILTER     : specify 3dmerge filter option

                e.g. -blur_filter -1blur_rms
                default: -1blur_fwhm

            This option allows the user to specify the filter option from
            3dmerge.  Note that only the filter option is set here, not the
            filter size.  The two parts were separated so that users might
            generally worry only about the filter size.

            Please see '3dmerge -help' for more information.
            See also -blur_size.

        -blur_size SIZE_MM      : specify the size, in millimeters

                e.g. -blur_size 6.0
                default: 4

            This option allows the user to specify the size of the blur used
            by 3dmerge.  It is applied as the 'bmm' parameter in the filter
            option (such as -1blur_fwhm).

            Please see '3dmerge -help' for more information.
            See also -blur_filter.

        -blur_opts_merge OPTS ... : specify extra options for 3dmerge

                e.g. -blur_opts_merge -2clip -20 50

            This option allows the user to add extra options to the 3dmerge
            command.  Note that only one -blur_opts_merge should be applied,
            which may be used for multiple 3dmerge options.

            Please see '3dmerge -help' for more information.

        -mask_type TYPE         : specify 'union' or 'intersection' mask type

                e.g. -mask_type intersection
                default: union

            This option is used to specify whether the mask applied to the
            analysis is the union of masks from each run, or the intersection.
            The only valid values for TYPE are 'union' and 'intersection'.

            This is not how to specify that no mask is applied at all; that is
            done by excluding the mask block via the '-blocks' option.

            Please see '3dAutomask -help', '3dMean -help' or '3dcalc -help'.
            See also -mask_dilate, -blocks.

        -mask_dilate NUM_VOXELS : specify the number of times to dilate the mask

                e.g. -mask_dilate 3
                default: 1

            By default, the masks generated from the EPI data are dilated by
            1 step (voxel), via the -dilate option in 3dAutomask.  With this
            option, the user may specify the dilation.  Valid integers must
            be at least zero.

            Please see '3dAutomask -help' for more information.
            See also -mask_type.

        -scale_max_val MAX      : specify the maximum value for scaled data

                e.g. -scale_max_val 1000
                default: 200

            The scale step multiplies the time series for each voxel by a
            scalar so that the mean for that particular run is 100 (allowing
            interpretation of EPI values as a percentage of the mean).
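
            As a rough sketch of what the scale block does for one run
            (dataset names are illustrative; the generated script may
            differ in detail):

                3dTstat -prefix rm.mean_r01 epi_r01+orig
                3dcalc -a epi_r01+orig -b rm.mean_r01+orig \
                       -expr 'min(200, a/b*100)' -prefix scaled_r01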

            Values of 200 represent a 100% change above the mean, and so can
            probably be considered garbage (or the voxel can be considered
            non-brain).  The output values are limited so as not to sacrifice
            the precision of the values of short datasets.  Note that in a
            short (2-byte integer) dataset, a large range of values means
            bits of accuracy are lost for the representation.

            No max will be applied if MAX is <= 100.

            Please see 'DATASET TYPES' in the output of '3dcalc -help'.
            See also -scale_no_max.

        -scale_no_max           : do not apply a limit to the scaled values

            The default limit for scaled data is 200.  Use of this option will
            remove any limit from being applied.

            See also -scale_max_val.

        -regress_basis BASIS    : specify the regression basis function

                e.g. -regress_basis 'BLOCK(4,1)'
                e.g. -regress_basis 'BLOCK(5)'
                e.g. -regress_basis 'TENT(0,14,8)'
                default: GAM

            This option is used to set the basis function used by 3dDeconvolve
            in the regression step.  This basis function will be applied to
            all user-supplied regressors (please let me know if there is need
            to apply different basis functions to different regressors).
        
            Please see '3dDeconvolve -help' for more information, or the link:
                http://afni.nimh.nih.gov/afni/doc/misc/3dDeconvolveSummer2004
            See also -regress_basis_normall, -regress_stim_times.

        -regress_basis_normall NORM : specify the magnitude of basis functions

                e.g. -regress_basis_normall 1.0

            This option is used to set the '-basis_normall' parameter in
            3dDeconvolve.  It specifies the height of each basis function.

            For the example basis functions, -basis_normall is not recommended.

            Please see '3dDeconvolve -help' for more information.
            See also -regress_basis.

        -regress_est_blur_epits      : estimate the smoothness of the EPI data

            This option specifies to run 3dFWHMx on each of the EPI datasets
            used for regression, the results of which are averaged.  These blur
            values are saved to the file blur_est.$subj.1D, along with any
            similar output from errts.

            These blur estimates may be input to AlphaSim, for any multiple
            testing correction done for this subject.  If AlphaSim is run at
            the group level, it is reasonable to average these estimates
            across all subjects (assuming they were scanned with the same
            protocol and at the same scanner).

            The mask block is required for this operation (without which the
            estimates are not reliable).  If masking is not desired for the
            regression, use the option '-regress_no_mask'.

            Please see '3dFWHMx -help' for more information.
            See also -regress_est_blur_errts, -regress_no_mask.

        -regress_est_blur_errts      : estimate the smoothness of the errts

            This option specifies to run 3dFWHMx on the errts dataset, output
            from the regression (by 3dDeconvolve).

            These blur estimates may be input to AlphaSim, for any multiple
            testing correction done for this subject.  If AlphaSim is run at
            the group level, it is reasonable to average these estimates
            across all subjects (assuming they were scanned with the same
            protocol and at the same scanner).

            Note that the errts blur estimates should be not only slightly
            more accurate than the epits blur estimates, but they should be
            slightly smaller, too (which is beneficial).

            The mask block is required for this operation (without which the
            estimates are not reliable).  If masking is not desired for the
            regression, use the option '-regress_no_mask'.

            Please see '3dFWHMx -help' for more information.
            See also -regress_est_blur_epits, -regress_no_mask.

        -regress_errts_prefix PREFIX : specify a prefix for the -errts option

                e.g. -regress_errts_prefix errts

            This option is used to add a -errts option to 3dDeconvolve.  As
            with -regress_fitts_prefix, only the PREFIX is specified, to which
            the subject ID will be added.

            Please see '3dDeconvolve -help' for more information.
            See also -regress_fitts_prefix.

        -regress_fitts_prefix PREFIX : specify a prefix for the -fitts option

                e.g. -regress_fitts_prefix model_fit
                default: fitts

            By default, the 3dDeconvolve command in the script will be given
            a '-fitts fitts' option.  This option allows the user to change
            the prefix applied in the output script.

            The -regress_no_fitts option can be used to eliminate use of -fitts.

            Please see '3dDeconvolve -help' for more information.
            See also -regress_no_fitts.

        -regress_iresp_prefix PREFIX : specify a prefix for the -iresp option

                e.g. -regress_iresp_prefix model_fit
                default: iresp

            This option allows the user to change the -iresp prefix applied in
            the 3dDeconvolve command of the output script.  

            By default, the 3dDeconvolve command in the script will be given a
            set of '-iresp iresp' options, one per stimulus type, unless the
            regression basis function is GAM.  In the case of GAM, the response
            form is assumed to be known, so there is no need for -iresp.

            The stimulus label will be appended to this prefix so that a sample
            3dDeconvolve option might look like one of these two examples:

                -iresp 7 iresp_stim07
                -iresp 7 model_fit_donuts

            The -regress_no_iresp option can be used to eliminate use of -iresp.

            Please see '3dDeconvolve -help' for more information.
            See also -regress_no_iresp, -regress_basis.

        -regress_make_ideal_sum IDEAL.1D : create IDEAL.1D file from regressors

                e.g. -regress_make_ideal_sum ideal_all.1D

            If the -regress_basis function is a single parameter function
            (either GAM or some form of BLOCK), then this option can be
            applied to create an ideal response curve which is the sum of
            the individual stimulus response curves.

            Use of this option will add a 3dTstat command to sum the regressor
            (of interest) columns of the 1D X-matrix, output by 3dDeconvolve.

            This is similar to the default behavior of creating ideal_STIM.1D
            files for each stimulus label, STIM.

            Please see '3dDeconvolve -help' and '3dTstat -help'.
            See also -regress_basis, -regress_no_ideals.

        -regress_motion_file FILE.1D  : use FILE.1D for motion parameters

                e.g. -regress_motion_file motion.1D

            Particularly if the user performs motion correction outside of
            afni_proc.py, they may wish to specify a motion parameter file
            other than dfile.rall.1D (the default generated in the volreg
            block).

            If the motion parameter file is in an external directory, the
            user should copy it via the -copy_files option.

            See also -copy_files.

        -regress_no_fitts       : do not supply -fitts to 3dDeconvolve

                e.g. -regress_no_fitts

            This option prevents the program from adding a -fitts option to
            the 3dDeconvolve command in the output script.

            See also -regress_fitts_prefix.

        -regress_no_ideals      : do not generate ideal response curves

                e.g. -regress_no_ideals

            By default, if the GAM or BLOCK basis function is used, ideal
            response curve files are generated for each stimulus type (from
            the output X matrix using '3dDeconvolve -x1D').  The names of the
            ideal response function files look like 'ideal_LABEL.1D', for each
            stimulus label, LABEL.

            This option is used to suppress generation of those files.

            See also -regress_basis, -regress_stim_labels.

        -regress_no_iresp       : do not supply -iresp to 3dDeconvolve

                e.g. -regress_no_iresp

            This option prevents the program from adding a set of -iresp
            options to the 3dDeconvolve command in the output script.

            By default -iresp will be used unless the basis function is GAM.

            See also -regress_iresp_prefix, -regress_basis.

        -regress_no_mask        : do not apply the mask in regression

            This option prevents the program from applying the mask dataset
            in the regression step.  It also prevents the program from applying
            the mask in the scaling step.

            If the user does not want to apply a mask in the regression
            analysis, but wants the full_mask dataset for other reasons
            (such as computing blur estimates), this option is needed.

            See also -regress_est_blur_epits, -regress_est_blur_errts.

        -regress_no_motion      : do not apply motion params in 3dDeconvolve

                e.g. -regress_no_motion

            This option prevents the program from adding the registration
            parameters (from volreg) to the 3dDeconvolve command.

        -regress_opts_3dD OPTS ...   : specify extra options for 3dDeconvolve

                e.g. -regress_opts_3dD -gltsym ../contr/contrast1.txt  \
                                       -glt_label 1 FACEvsDONUT        \
                                       -xjpeg Xmat

            This option allows the user to add extra options to the 3dDeconvolve
            command.  Note that only one -regress_opts_3dD should be applied,
            which may be used for multiple 3dDeconvolve options.

            Please see '3dDeconvolve -help' for more information, or the link:
                http://afni.nimh.nih.gov/afni/doc/misc/3dDeconvolveSummer2004

        -regress_polort DEGREE  : specify the polynomial degree of baseline

                e.g. -regress_polort 2
                default: 1 + floor(run_length / 150.0)

            3dDeconvolve models the baseline for each run separately, using
            Legendre polynomials (by default).  This option specifies the
            degree of polynomial.  Note that this will create (DEGREE+1) * NRUNS
            baseline regressors (one polynomial of each degree from 0 to
            DEGREE, per run).

            The default is computed from the length of a run, in seconds, as
            shown above.  For example, if each run were 320 seconds, then the
            default polort would be 3 (cubic).

            Please see '3dDeconvolve -help' for more information.

        -regress_RONI IND1 ...  : specify a list of regressors of no interest

                e.g. -regress_RONI 1 17 22

            Use this option to flag regressors as ones of no interest, meaning
            they are applied to the baseline (for full-F) and the corresponding
            beta weights are not output (by default at least).

            The indices in the list should match those given to 3dDeconvolve.
            They start at 1, counting first the main regressors and then any
            extra regressors (given via -regress_extra_stim_files).  Note that
            these do not apply to motion regressors.

            The user is encouraged to check the 3dDeconvolve command in the
            processing script, to be sure they are applied correctly.

        -regress_stim_labels LAB1 ...   : specify labels for stimulus types

                e.g. -regress_stim_labels houses faces donuts
                default: stim01 stim02 stim03 ...

            This option is used to apply a label to each stimulus type.  The
            number of labels should equal the number of files used in the
            -regress_stim_times option, or the total number of columns in the
            files used in the -regress_stim_files option.

            These labels will be applied as '-stim_label' in 3dDeconvolve.

            Please see '3dDeconvolve -help' for more information.
            See also -regress_stim_times, -regress_stim_files.

        -regress_stim_times FILE1 ... : specify files used for -stim_times

                e.g. -regress_stim_times ED_stim_times*.1D
                e.g. -regress_stim_times times_A.1D times_B.1D times_C.1D

            3dDeconvolve will be run using '-stim_times'.  This option is
            used to specify the stimulus timing files to be applied, one
            file per stimulus type.  The order of the files given on the 
            command line will be the order given to 3dDeconvolve.  Each of
            these timing files will be given along with the basis function
            specified by '-regress_basis'.

            The user must specify either -regress_stim_times or 
            -regress_stim_files if regression is performed, but not both.
            Note that the form of the files is one row per run.  If there is at
            most one stimulus per run, please add a trailing '*'.

            Labels may be specified using the -regress_stim_labels option.

            The two examples below are for a 3-run experiment.  In the second
            example, there is only one stimulus in the entire experiment,
            occurring in run #2.

                e.g.            0  12.4  27.3  29
                                *
                                30 40 50

                e.g.            *
                                20 *
                                *

            Please see '3dDeconvolve -help' for more information, or the link:
                http://afni.nimh.nih.gov/afni/doc/misc/3dDeconvolveSummer2004
            See also -regress_stim_files, -regress_stim_labels, -regress_basis,
                     -regress_basis_normall, -regress_polort.

        -regress_stim_files FILE1 ... : specify TR-locked stim files

                e.g. -regress_stim_files ED_stim_file*.1D
                e.g. -regress_stim_files stim_A.1D stim_B.1D stim_C.1D

            Without the -regress_use_stim_files option, 3dDeconvolve will be
            run using '-stim_times', not '-stim_file'.  The user can still
            specify the 3dDeconvolve -stim_file files here, but they would
            then be converted to -stim_times files using the script
            make_stim_times.py.

            It might be more educational for the user to run make_stim_times.py
            outside afni_proc.py (such as was done before example 2, above), or
            to create the timing files directly.

            Each given file can be for multiple stimulus classes, where one
            column is for one stim class, and each row represents a TR.  So
            each file should have NUM_RUNS * NUM_TRS rows.

            The stim_times files will be labeled stim_times.NN.1D, where NN
            is the stimulus index.

            Note that if the stimuli were presented at a fixed time after
            the beginning of a TR, the user should consider the option,
            -regress_stim_times_offset, to apply that offset.

            ---

            If the -regress_use_stim_files option is provided, 3dDeconvolve
            will be run using each stim_file as a regressor.  The order of the
            regressors should match the order of any labels, provided via the
            -regress_stim_labels option.

            Please see '3dDeconvolve -help' for more information, or the link:
                http://afni.nimh.nih.gov/afni/doc/misc/3dDeconvolveSummer2004
            See also -regress_stim_times, -regress_stim_labels, -regress_basis,
                     -regress_basis_normall, -regress_polort,
                     -regress_stim_times_offset, -regress_use_stim_files.

        -regress_extra_stim_files FILE1 ... : specify extra stim files

                e.g. -regress_extra_stim_files resp.1D cardiac.1D
                e.g. -regress_extra_stim_files regs_of_no_int_*.1D

            Use this option to specify extra files to be applied with the
            -stim_file option in 3dDeconvolve (as opposed to the more usual
            -stim_times).  These files will not be converted to stim_times.

            Corresponding labels can be given with -regress_extra_stim_labels.

            See also -regress_extra_stim_labels, -regress_RONI.

        -regress_extra_stim_labels LAB1 ... : specify extra stim file labels

                e.g. -regress_extra_stim_labels resp cardiac

            If -regress_extra_stim_files is given, the user may want to specify
            labels for those extra stimulus files.  This option provides that
            mechanism.  If this option is not given, default labels will be
            assigned (like stim17, for example).

            Note that the number of entries in this list should match the
            number of extra stim files.

            See also -regress_extra_stim_files.

        -regress_stim_times_offset OFFSET : add OFFSET to -stim_times files

                e.g. -regress_stim_times_offset 1.25
                default: 0

            If the -regress_stim_files option is used (so the script converts
            -stim_files to -stim_times before 3dDeconvolve), the user may want
            to add an offset to the times in the output timing files.

            For example, if -tshift_align_to is applied, and the user chooses
            to align volumes to the middle of the TR, it would be appropriate
            to add TR/2 to the times of the stim_times files.

            This OFFSET will be applied to the make_stim_times.py command in
            the output script.

            Please see 'make_stim_times.py -help' for more information.
            See also -regress_stim_files, -regress_use_stim_files,
                     -tshift_align_to.
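
            As a concrete (hypothetical) case: with a TR of 2.5 seconds,
            and volumes time-shifted to the middle of the TR, one would
            add half of the TR:

                -regress_stim_times_offset 1.25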

        -regress_use_stim_files : use -stim_file in regression, not -stim_times

            The default operation of afni_proc.py is to convert TR-locked files
            for the 3dDeconvolve -stim_file option to timing files for the
            3dDeconvolve -stim_times option.

            If the -regress_use_stim_files option is provided, then no such
            conversion will take place.  This assumes the -regress_stim_files
            option is applied to provide such -stim_file files.

            This option has been renamed from '-regress_no_stim_times'.

            Please see '3dDeconvolve -help' for more information.
            See also -regress_stim_files, -regress_stim_times, 
                     -regress_stim_labels.

    - R Reynolds  Dec, 2006                             thanks to Z Saad
    ===========================================================================




AFNI program: afni_vcheck
Usage: afni_vcheck
 Prints out the AFNI version with which it was compiled,
 and checks across the Web for the latest version available.
N.B.: Doing the check across the Web means that your
      computer's access to our server will be logged there.
      If you don't want this, don't use this program!



AFNI program: aiv
Usage: aiv [-v] [-q] [-title WORD] [-p xxxx] image ...
AFNI Image Viewer program.
Shows the 2D images on the command line in an AFNI-like image viewer.
Can also read images in NIML '<MRI_IMAGE...>' format from a TCP/IP socket.
Image file formats are those supported by to3d:
 * various MRI formats (e.g., DICOM, GEMS I.xxx)
 * raw PPM or PGM
 * JPEG (if djpeg is in the path)
 * GIF, TIFF, BMP, and PNG (if netpbm is in the path)

The '-v' option will make aiv print out the image filenames
as it reads them - this can be a useful progress meter if
the program starts up slowly.

The '-q' option tells the program to be very quiet.

The '-title WORD' option gives the viewer window the title WORD.
The default is the name of the image file, if only one is
specified on the command line.  If multiple images are read in,
the default window title is 'Images'.
The '-p xxxx' option will make aiv listen to TCP/IP port 'xxxx'
for incoming images in the NIML '<MRI_IMAGE...>' format.  The
port number must be between 1024 and 65535, inclusive.  For
conversion to NIML '<MRI_IMAGE...>' format, see program im2niml.

Normally, at least one image must be given on the command line.
If the '-p xxxx' option is used, then you don't have to input
any images this way; however, since the program requires at least
one image to start up, a crude 'X' will be displayed.  When the
first image arrives via the socket, the 'X' image will be replaced.
Subsequent images arriving by socket will be added to the sequence.
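
For example (with hypothetical image filenames):

   aiv -v -title Faces face001.jpg face002.jpg
   aiv -p 4444 &

The first command displays two JPEG images in a viewer window titled
'Faces'; the second starts a viewer that waits for images to arrive
on TCP/IP port 4444.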

-----------------------------------------------------------------
Sample program fragment, for sending images from one program
into a copy of aiv (which that program also starts up):

#include "mrilib.h"
NI_stream ns; MRI_IMAGE *im; float *far; int nx,ny;
system("aiv -p 4444 &");                               /* start aiv */
ns = NI_stream_open( "tcp:localhost:4444" , "w" ); /* connect to it */
while(1){
  /** ......... create 2D nx X ny data into the far array .........**/
  im = mri_new_vol_empty( nx , ny , 1 , MRI_float );  /* fake image */
  mri_fix_data_pointer( far , im );                  /* attach data */
  NI_element *nel = mri_to_niml(im);     /* convert to NIML element */
  NI_write_element( ns , nel , NI_BINARY_MODE );     /* send to aiv */
  NI_free_element(nel); mri_clear_data_pointer(im); mri_free(im);
}
NI_stream_writestring( ns , "<ni_do ni_verb='QUIT'>" ) ;
NI_stream_close( ns ) ;  /* do this, or the above, if done with aiv */

-- Author: RW Cox

++ Compile date = Mar 13 2009




AFNI program: align_epi_anat.py
#++ align_epi_anat version: 1.17

    ===========================================================================
    align_epi_anat.py     - align EPI to anatomical datasets or vice versa
    
    This Python script computes the alignment between an EPI and an
    anatomical structural dataset and applies the resulting transformation
    to one or the other to bring them into alignment.  The transforms are
    computed using a cost function tailored for this purpose, and the
    script combines multiple transformations, thereby minimizing the
    amount of interpolation applied to the data.
    
    Basic Usage:
      align_epi_anat.py -anat anat+orig -epi epi+orig -epi_base 5
    
    The user must provide EPI and anatomical datasets and specify the EPI
    sub-brick to use as a base in the alignment.  

    Internally, the script always aligns the anatomical to the EPI dataset,
    and the resulting transformation is saved to a 1D file. 
    As a user option, the inverse of this transformation may be applied to the
    EPI dataset in order to align it to the anatomical data instead.

    This program generates several kinds of output in the form of datasets
    and transformation matrices which can be applied to other datasets if
    needed. Time-series volume registration, oblique data transformations and
    talairach transformations will be combined as needed.
    
    Depending upon selected options, the script's output contains the following:
        Datasets:
          ANAT_al+orig: A version of the anatomy that is aligned to the EPI
          EPI_al+orig: A version of the EPI dataset aligned to the anatomy
          EPI_al+tlrc: A version of the EPI dataset aligned to a standard
                       template
        These transformations include slice timing correction and
          time-series registration by default.

        Transformation matrices:
          ANAT_al_mat.aff12.1D: matrix to align anatomy to the EPI
          EPI_al_mat.aff12.1D:  matrix to align EPI to anatomy 
                                   (inverse of above)
          EPI_vr_al_mat.aff12.1D: matrix to volume register EPI
          EPI_reg_al_mat.aff12.1D: matrix to volume register and align epi
                                      to anatomy (combination of the two
                                      previous matrices)

        Motion parameters from optional volume registration:
          EPI_reg_al_motion.1D: motion parameters from EPI time-series 
                                registration
          
    where the uppercase "ANAT" and "EPI" are replaced by the names of the
    input datasets, and the suffix can be changed from "_al" as a user
    option.
          
        You can use these transformation matrices to align other datasets:
         3dAllineate -cubic -1Dmatrix_apply epi_r1_al_mat.aff12.1D  \
                     -prefix epi_alman epi_r2+orig

             
    The goodness of the alignment should always be assessed. On the face of it,
    most of 3dAllineate's cost functions, and those of registration programs
    from other packages, will produce a plausible alignment but it may not be
    the best. You need to examine the results carefully if alignment quality is
    crucial for your analysis.

    In the absence of a gold standard, and given the low contrast of EPI data,
    it is difficult to judge alignment quality by just looking at the two
    volumes. This is the case even when toggling quickly between one volume
    and the next (turning the overlay off and using the 'u' key in the slice
    window).
    To aid with the assessment of alignment, you can use the -AddEdge option or
    call the @AddEdge script directly. See the help for @AddEdge for more
    information on that script.

    The default options assume the epi and anat datasets start off fairly close,
    as is normally the case when the epi dataset precedes or follows an 
    anatomical dataset acquisition. If the two datasets are acquired in separate
    sessions, or accurate coordinate data is not available in the dataset header
    (as sometimes occurs for oblique data), various options allow for larger
    movement including "-cmass cmass", "-big_move" and "-giant_move". Each of
    these options is described below. If datasets do not share the same space
    at all, it may be necessary to use the @Align_Centers script first.
    
    Although this script has been developed primarily for aligning anatomical T1
    data with EPI BOLD data, it has also been applied successfully to align data
    of similar modalities, including T1-SPGR to T1-SPGR, T1-FLAIR to T1-SPGR,
    EPI to EPI, T1-SPGR at 7T to T1-SPGR at 3T, EPI-rat1 to EPI-rat2, ....
    For this kind of alignment, the default cost function, the localized
    Pearson Correlation (lpc), is not appropriate.
    Other cost functions like lpa or nmi have been seen to work well using 
    "-cost lpa".
        
    ---------------------------------------------
    REQUIRED OPTIONS:
    
    -epi dset   : name of EPI dataset
    -anat dset  : name of structural dataset
    -epi_base   : the epi base used in alignment 
                     (0/mean/median/max/subbrick#)

    MAJOR OPTIONS:
    -help       : this help message

    -anat2epi   : align anatomical to EPI dataset (default)
    -epi2anat   : align EPI to anatomical dataset
                  

    -suffix ssss: append the suffix to the original anat/epi dataset to use
                     in the resulting dataset names (default is "_al")
     
    -child_epi dset1 dset2 ... : specify other EPI datasets to align.
        Time series volume registration will be done to the same
        base as the main parent EPI dataset. 

    -child_anat dset1 dset2 ... : specify other anatomical datasets to align.
        The same transformation that is computed for the parent anatomical
        dataset is applied to each of the child datasets. This only makes
        sense for anat2epi transformations. Skullstripping is not done for
        the child anatomical dataset.

    -AddEdge    : run @AddEdge script to create composite edge images of
                  the base epi or anat dataset, the pre-aligned dataset and 
                  the aligned dataset. Datasets are placed in a separate
                  directory named AddEdge. The @AddEdge can then be used
                  without options to drive AFNI to show the epi and anat
                  datasets with the edges enhanced. For the -anat2epi case
                  (the default), the anat edges are shown in purple, and the
                  epi edges are shown in cyan (light blue). For the -epi2anat
                  case, the anat edges are shown in cyan, and the epi edges
                  are purple. For both cases, overlapping edges are shown in
                  dark purple.

    -big_move   : indicates that large displacement is needed to align the
                  two volumes. This option is off by default.
    -giant_move : even larger movement required - uses cmass, two passes and
                  very large angles and shifts. May miss finding the solution
                  in the vastness of space, so use with caution

    -partial_coverage: indicates that the EPI dataset covers only a part of 
                  the brain. Alignment will try to guess which direction should
                  not be shifted. If EPI slices are known to be in a specific
                  orientation, use one of these other partial_xxxx options.
    -partial_axial
    -partial_coronal 
    -partial_sagittal

    -keep_rm_files : keep all temporary files (default is to remove them)
    -prep_only  : do preprocessing steps only
    -verb nn    : provide verbose messages during processing (default is 0)
    -anat_has_skull yes/no: Anat is assumed to have skull ([yes]/no)
    -epi_strip  :  method to mask brain in EPI data 
                   ([3dSkullStrip]/3dAutomask/None)
    -volreg_method : method to do time series volume registration of EPI data 
                   ([3dvolreg],3dWarpDrive). 3dvolreg is for 6-parameter
                   (rigid-body) registration; 3dWarpDrive is for
                   12-parameter (affine) registration.

    A template registered anatomical dataset such as a talairach-transformed
       dataset may be additionally specified so that output data are
       in template space. The advantage of specifying this transform here is
       that all transformations are applied simultaneously, thereby minimizing 
       data interpolation.
       
    -tlrc_apar ANAT+tlrc : structural dataset that has been aligned to
                  a master template such as a tlrc dataset. If this option
                  is supplied, then an epi+tlrc dataset will be created.


    Other options:
    -ex_mode       : execute mode (echo/dry_run/quiet/[script]). "dry_run" can
                     be used to show the commands that would be executed 
                     without actually running them. 
                     "echo" shows the commands as they are executed.
                     "quiet" doesn't display commands at all.
                     "script" is like echo but doesn't show stdout, stderr 
                     header lines and "cd" lines.
                     "dry_run" can be used to generate scripts which can be
                     further customized beyond what may be available through
                     the options of this program.
    -Allineate_opts '-ssss  -sss' : options to use with 3dAllineate. Default
                     options are 
                     "-weight_frac 1.0 -maxrot 6 -maxshf 10 -VERB -warp aff "
    -volreg        : do volume registration on EPI dataset before alignment
                     ([on]/off)
    -volreg_opts   : options to use with 3dvolreg
    -volreg_base   : the epi base used in time series volume registration.
                     The default is to use the same base as the epi_base.
                     If another subbrick or base type is used, an additional
                     transformation will be computed between the volume
                     registration and the epi_base
                     (0/mean/median/max/subbrick#)

    -tshift        : do time shifting of EPI dataset before alignment ([on]/off)
    -tshift_opts   : options to use with 3dTshift
                     The script will determine if slice timing correction is
                     necessary unless tshift is set to off.

    -deoblique     : deoblique datasets before alignment ([on]/off)
    -deoblique_opts: options to use with 3dWarp deobliquing
                     The script will try to determine if either EPI or anat data
                     is oblique and do the initial transformation to align anat
                     to epi data using the oblique transformation matrices
                     in the dataset headers.
    
    -master_epi    : master grid resolution for aligned epi output
    -master_tlrc   : master grid resolution for epi+tlrc output
    -master_anat   : master grid resolution for aligned anatomical data output
                     (SOURCE/BASE/MIN_DXYZ/dsetname/n.nn)
                     Each of the 'master' options can be set to SOURCE,BASE,
                     a specific master dataset, MIN_DXYZ or a specified cubic 
                     voxel size in mm. 
                     
                     MIN_DXYZ uses the smallest voxel dimension as the basis
                     for cubic output voxel resolution within the bounding box
                     of the BASE dataset.
                     
                     SOURCE and BASE are used as in 3dAllineate help.
                     
                     The default value for master_epi and master_anat is
                     SOURCE; that is, the output resolution and coordinates
                     will be the same as the input's. This is appropriate
                     for small movements.
                   
                     For cases where either dataset is oblique (and larger
                     rotations can occur), the default becomes MIN_DXYZ.
                     
                     The default value for master_tlrc is MIN_DXYZ.

    Other obscure and experimental options that should only be handled with 
       care, lest they get out, are visible with -full_help or -option_help.

    Examples:
      # align anat to sub-brick 5 of epi+orig. In addition, do slice timing
      # correction on epi+orig and register all sub-bricks to sub-brick 5
      # (Sample data files are in AFNI_data4/sb23 in sample class data)

      align_epi_anat.py -anat sb23_mpra+orig -epi epi_r03+orig     \
                        -epi_base 5
      
      # Instead of aligning the anatomy to an epi, transform the epi
      # to match the anatomy. Transform other epi run datasets to be
      # in alignment with the first epi datasets and with the anatomical
      # reference dataset. Note that all epi sub-bricks from all runs
      # are transformed only once in the process combining volume
      # registration and alignment to the anatomical dataset in a single
      # transformation matrix

      align_epi_anat.py -anat sb23_mpra+orig -epi epi_r03+orig      \
                        -epi_base 5 -child_epi epi_r??+orig.HEAD    \
                        -epi2anat -suffix al2anat
      
      # Bells and whistles:
      # - create talairach transformed epi datasets (still one transform)
      # - do not execute, just show the commands that would be executed.
      #   These commands can be saved in a script or modified.
      # + a bunch of other options to tickle your mind
      # The talairach transformation requires auto-talairaching 
      # the anatomical dataset first

      @auto_tlrc -base ~/abin/TT_N27+tlrc -input sb23_mpra+orig
      align_epi_anat.py -anat sb23_mpra+orig -epi epi_r03+orig      \
                        -epi_base 6 -child_epi epi_r??+orig.HEAD    \
                        -ex_mode dry_run -epi2anat -suffix _altest  \
                        -tlrc_apar sb23_mpra_at+tlrc
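
      # A further hypothetical example: datasets that start far apart,
      # with composite edge images created to check the final alignment

      align_epi_anat.py -anat anat+orig -epi epi+orig -epi_base 0   \
                        -big_move -AddEdge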


    Our HBM 2008 abstract describing the alignment tools is available here:
      http://afni.nimh.nih.gov/sscc/rwcox/abstracts






AFNI program: byteorder
Usage: byteorder
Prints out a string indicating the byte order of the CPU on
which the program is running.  For this computer, we have:

CPU byte order = LSB_FIRST



AFNI program: cat_matvec
Usage: cat_matvec [-MATRIX | -ONELINE] matvec_spec matvec_spec ...

Catenates 3D rotation+shift matrix+vector transformations.
Each matvec_spec is of the form

  mfile [-opkey]

'mfile' specifies the matrix, and can take 4(ish) forms:

=== FORM 1 ===
mfile is the name of an ASCII file with 12 numbers arranged
in 3 lines:
      u11 u12 u13 v1
      u21 u22 u23 v2
      u31 u32 u33 v3
where each 'uij' and 'vi' is a number.  The 3x3 matrix [uij]
is the matrix of the transform, and the 3-vector [vi] is the
shift.  The transform is [xnew] = [uij]*[xold] + [vi].

=== FORM 1a === [added 24 Jul 2007]
mfile is the name of an ASCII file with multiple rows, each
containing 12 numbers in the order
  u11 u12 u13 v1 u21 u22 u23 v2 u31 u32 u33 v3
The filename must end in the characters '.aff12.1D', as output
by the '-1Dmatrix_save' option in 3dAllineate and 3dvolreg.
Each row of this file is treated as a separate matrix, and
multiple matrices will be computed.
** N.B.: At most ONE input matrix can be in this format! **

=== FORM 2 ===
mfile is of the form 'dataset::attribute', where 'dataset'
is the name of an AFNI dataset, and 'attribute' is the name
of an attribute in the dataset's header that contains a
matrix+vector.  Examples:
 'fred+orig::VOLREG_MATVEC_000000'        = fred+orig from 3dvolreg
 'fred+acpc::WARP_DATA'                   = fred+acpc warped in AFNI
 'fred+orig::WARPDRIVE_MATVEC_FOR_000000' = fred+orig from 3dWarpDrive
 'fred+orig::ROTATE_MATVEC_000000'        = fred+orig from 3drotate
 For matrices to turn voxel coordinates to dicom:
 'fred+orig::IJK_TO_CARD_DICOM'   
 'fred+orig::IJK_TO_DICOM_REAL'        

Note that both of VOLREG_MATVEC_ and ROTATE_MATVEC_ are usually
accompanied with VOLREG_CENTER_OLD and VOLREG_CENTER_BASE or
ROTATE_CENTER_OLD and ROTATE_CENTER_BASE attributes.
These center attributes are automatically taken into account in
cat_matvec's output.

=== FORM 3 ===
mfile is of the form
 'MATRIX(u11,u12,u13,v1,u21,u22,u23,v2,u31,u32,u33,v3)'
directly giving all 12 numbers on the command line.  You will
need the 'forward single quotes' around this argument.

=== FORM 4 ===
mfile is of the form
 '-rotate xI yR zA'
where 'x', 'y', and 'z' are angles in degrees, specifying rotations
about the I, R, and A axes respectively.  The letters 'I', 'R', 'A'
specify the axes, and can be altered as in program 3drotate.
(The 'quotes' are mandatory here because the argument contains spaces.)


=== COMPUTATIONS ===
If [U] [v] are the matrix/vector for the first mfile, and
   [A] [b] are the matrix/vector for the second mfile, then
the catenated transformation is
  matrix = [A][U]   vector = [A][v] + [b]
That is, the second mfile transformation follows the first.
** Thus, the order of matrix multiplication is exactly the  **
** opposite of the order of the inputs on the command line! **
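
For example, if 'vr.1D' holds a volume registration transform and
'al.1D' holds an anatomical alignment transform (hypothetical
filenames), the single transform that applies the registration FIRST
and the alignment SECOND is obtained by listing them in that order:

  cat_matvec vr.1D al.1D > vr_then_al.1D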

The optional 'opkey' (operation key) following each mfile
starts with a '-', and then is a set of letters telling how
to treat the input.  The opkeys currently defined are:

  -I = invert the transformation:
                     -1              -1
       [xold] = [uij]  [xnew] - [uij]  [vi]

  -P = Do a polar decomposition on the 3x3 matrix part 
       of the mfile. This would result in an orthogonal
       matrix (rotation only, no scaling) Q that is closest,
       in the Frobenius distance sense, to the input matrix A.
    Note: if A = R * S * E, where R, S and E are the Rotation,
       Scale, and shEar matrices, respectively, Q does not
       necessarily equal R because of interaction: each of R,
       S and E affects most of the columns in matrix A.

  -IP = -I followed by -P

  -S = square root of the matrix
    Note: Not all matrices have square roots!
       The square root of a matrix will do 'half' the transformation.
       One application: 3dLRflip + 3dAllineate to register a volume
       to its mirror image, then apply half the transformation to
       bring it into vertical alignment.

The transformation resulting from catenating the transformations
is written to stdout in the same 3x4 ASCII file format.  This can
be used as input to '3drotate -matvec_dicom' (provided [uij] is a
proper orthogonal matrix), or to '3dWarp -matvec_xxx'.

N.B.: If only 9 numbers can be read from an mfile, then those
      values form the [uij] matrix, and the vector is set to zero.
N.B.: The '-MATRIX' option indicates that the resulting matrix will
      be written to stdout in the 'MATRIX(...)' format (FORM 3).
      This feature could be used, with clever scripting, to input
      a matrix directly on the command line to program 3dWarp.
N.B.: The '-ONELINE' option indicates that the resulting matrix
      will simply be written as 12 numbers on one line.
N.B.: If form 1a (.aff12.1D) is used to compute multiple matrices,
      then the output matrices are written to stdout, one matrix
      per line.



AFNI program: ccalc
Usage: ccalc [-form FORM] [-eval 'expr']
Usage mode 1: Interactive numerical calculator
    Interactive numerical calculator, using the 
    same expression syntax as 3dcalc. 
    No command line parameters are permitted in
    usage 1 mode.
Usage mode 2: Command line expression calculator
    Evaluate an expression specified on command
    line, return answer and quit.
    Optional parameters: (must come first)
    -form FORM: Format output in a nice form
                Choose from:
                double: Macho numbers (default).
                nice: Metrosexual output.
                int (or rint): Rounded to nearest integer.
                cint: Rounded up.
                fint: Rounded down.
                %n.mf: custom format string, used as in printf.
                   The format string can contain %%, \n, and other
                   regular characters.
                   See man fprintf and man printf for details.
                You can also replace:
                   -form int    with    -i
                   -form nice   with    -n
                   -form double with    -d
                   -form fint   with    -f
                   -form cint   with    -c
    Mandatory parameter: (must come last on command line)
    -eval EXPR: EXPR is the expression to evaluate.
                Example: ccalc -eval '3 + 5 * sin(22)' 
                     or: ccalc -eval 3 +5 '*' 'sin(22)'
                You cannot use variables in EXPR
                as you can with 3dcalc.
    Example with formatting:
        ccalc -form '********\n%6.4f%%\n********' -eval '100*328/457'
    gives:
        ********
        71.7724%
        ********
    Try also:
        ccalc -i 3.6
        ccalc -f 3.6
        ccalc -c 3.6
        ccalc -form '%3.5d' 3.3
        ccalc -form '**%5d**' 3.3
        ccalc -form '**%-5d**' 3.3

    SECRET: You don't need to use -eval if you are 
            not using any other options. I hate typing
            it for quick command line calculations. 
            But that feature might be removed in the
            future, so always use -eval when you are 
            using this program in your scripts.



AFNI program: cdf
Usage 1: cdf [-v] -t2p statname t params
Usage 2: cdf [-v] -p2t statname p params
Usage 3: cdf [-v] -t2z statname t params

This program does various conversions using the cumulative distribution
function (cdf) of certain canonical probability functions.  The optional
'-v' flag requests verbose output -- this is mostly for debugging purposes.
Use this option if you get results you don't understand!

Usage 1: Converts a statistic 't' to a tail probability.
Usage 2: Converts a tail probability 'p' to a statistic.
Usage 3: Converts a statistic 't' to a N(0,1) value (or z-score)
         that has the same tail probability.

The parameter 'statname' refers to the type of distribution to be used.
The numbers in the params list are the auxiliary parameters for the
particular distribution.  The following table shows the available
distribution functions and their parameters:

   statname  Description  PARAMETERS
   --------  -----------  ----------------------------------------
       fico  Cor          SAMPLES  FIT-PARAMETERS  ORT-PARAMETERS
       fitt  Ttest        DEGREES-of-FREEDOM
       fift  Ftest        NUMERATOR and DENOMINATOR DEGREES-of-FREEDOM
       fizt  Ztest        N/A
       fict  ChiSq        DEGREES-of-FREEDOM
       fibt  Beta         A (numerator) and B (denominator)
       fibn  Binom        NUMBER-of-TRIALS and PROBABILITY-per-TRIAL
       figt  Gamma        SHAPE and SCALE
       fipt  Poisson      MEAN

EXAMPLES:
 Goal:    find p-value for t-statistic of 5.5 with 30 degrees of freedom
 COMMAND: cdf -t2p fitt 5.5 30
 OUTPUT:  p = 5.67857e-06

 Goal:    find F(8,200) threshold that gives a p-value of 0.001
 COMMAND: cdf -p2t fift 0.001 8 200
 OUTPUT:  t = 3.4343

The same functionality is also available in 3dcalc, 1deval, and
ccalc, using functions such as 'fift_t2p(t,a,b)'.  In particular,
if you are scripting, ccalc is probably better to use than cdf,
since the output of
  ccalc -expr 'fitt_t2p(3,20)'
is the string '0.007076', while the output of
  cdf -t2p fitt 3 20
is the string 'p = 0.0070759'.
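
A hypothetical Usage 3 example, converting the t-statistic from the
first example to the z-score with the same tail probability (output
not shown here):
 COMMAND: cdf -t2z fitt 5.5 30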




AFNI program: cjpeg
usage: /var/www/html/pub/dist/bin/linux_gcc32/cjpeg [switches] [inputfile]
Switches (names may be abbreviated):
  -quality N     Compression quality (0..100; 5-95 is useful range)
  -grayscale     Create monochrome JPEG file
  -optimize      Optimize Huffman table (smaller file, but slow compression)
  -progressive   Create progressive JPEG file
  -targa         Input file is Targa format (usually not needed)
Switches for advanced users:
  -dct int       Use integer DCT method (default)
  -dct fast      Use fast integer DCT (less accurate)
  -dct float     Use floating-point DCT method
  -restart N     Set restart interval in rows, or in blocks with B
  -smooth N      Smooth dithered input (N=1..100 is strength)
  -maxmemory N   Maximum memory to use (in kbytes)
  -outfile name  Specify name for output file
  -verbose  or  -debug   Emit debug output
Switches for wizards:
  -baseline      Force baseline quantization tables
  -qtables file  Use quantization tables given in file
  -qslots N[,...]    Set component quantization tables
  -sample HxV[,...]  Set component sampling factors
  -scans file    Create multi-scan JPEG per script file



AFNI program: count
Usage: count [options] bot top [step]

* Produces many numbered copies of the root and/or suffix,
    counting from 'bot' to 'top' with stride 'step'.
* If 'bot' > 'top', counts backwards with stride '-step'.
* If step is of the form 'R#', then '#' random counts are produced
    in the range 'bot..top' (inclusive).
* If step is of the form 'S', then a random sequence of unique integers
    in the range 'bot..top' (inclusive) is output.
    A number after S ('S#') indicates the number of unique integers
    to output. If # exceeds the number of unique values, the shuffled
    sequence will simply repeat itself. (N.B.: 'S' is for 'Shuffle'.)
* 'bot' and 'top' must not be negative; step must be positive (defaults to 1).

Options:
  -seed        seed number for random number generator (for S and R above)
  -sseed       seed string for random number generator (for S and R above)
  -column      writes output, one number per line (with root and suffix, if any)
  -digits n    prints numbers with 'n' digits [default=4]
  -root rrr    prints string 'rrr' before the number [default=empty]
  -sep s       prints single character 's' between the numbers [default=blank]
                 [normally you would not use '-sep' with '-column']
  -suffix sss  prints string 'sss' after the number [default=empty]
  -scale fff   multiplies each number by the factor 'fff';
                 if this option is used, -digits is ignored and
                 the floating point format '%g' is used for output.
                 ('fff' can be a floating point number.)
  -comma       put commas between the outputs, instead of spaces
                 (same as '-sep ,')
  -skipnmodm n m   skip numbers that are equal to n modulo m
                     (e.g., -skipnmodm 15 16 would skip 15, 31, 47, ...)
                     not valid with the random number sequence options
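
For example, the random step forms (with a hypothetical seed):
  count -seed 31416 1 20 R5     [5 random values in the range 1..20]
  count 1 20 S5                 [5 unique values from 1..20, shuffled]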

The main application of this program is for use in C shell programming:
  foreach fred ( `count 1 20` )
     mv wilma.${fred} barney.${fred}
  end
The backward quote operator in the foreach statement executes the
count program, captures its output, and puts it on the command line.
The loop body renames the files wilma.0001 through wilma.0020 to
barney.0001 through barney.0020.  Read the man page for csh to get more
information.  In
particular, the csh built-in command '@' can be useful.



AFNI program: dicom_hdr
Usage: dicom_hdr [options] fname [...]
Prints information from the DICOM file 'fname' to stdout.

OPTIONS:
 -hex     = Include hexadecimal printout for integer values.
 -noname  = Don't include element names in the printout.
 -sexinfo = Dump Siemens EXtra INFO text (0029 1020), if present
             (can be VERY lengthy).
 -mulfram = Dump multi-frame information, if present
             (1 line per frame, plus an XML-style header/footer)
             [-mulfram also implies -noname]
 -v n     = Dump n words of binary data also.
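
For example, to print the header of one file, or of many (hypothetical
filenames):

  dicom_hdr I.001
  dicom_hdr -noname -hex I.*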

Based on program dcm_dump_file from the RSNA, developed at
the Mallinckrodt Institute of Radiology.  See the source
code file mri_dicom_hdr.c for their Copyright and license.

SOME SAMPLE OUTPUT LINES:

0028 0010      2 [1234   ] //              IMG Rows// 512
0028 0011      2 [1244   ] //           IMG Columns// 512
0028 0030     18 [1254   ] //     IMG Pixel Spacing//0.488281\0.488281
0028 0100      2 [1280   ] //    IMG Bits Allocated// 16
0028 0101      2 [1290   ] //       IMG Bits Stored// 12
0028 0102      2 [1300   ] //          IMG High Bit// 11

* The first 2 numbers on each line are the DICOM group and element tags,
   in hexadecimal.
* The next number is the number of data bytes, in decimal.
* The next number [in brackets] is the offset in the file of the data,
   in decimal.  This is where the data bytes start, and does not include
   the tag, Value Representation, etc.
* If -noname is NOT given, then the string in the '// ... //' region is
   the standard DICOM dictionary name for this data element.  If this string
   is blank, then this element isn't in the dictionary (e.g., is a private
   tag, or an addition to DICOM that I don't know about, ...).
* The value after the last '//' is the value of the data in the element.
* In the example above, we have a 512x512 image with 0.488281 mm pixels,
   with 12 bits (stored in 16 bits) per pixel.
* For vastly more detail on the DICOM standard, you can start with the
   documents at ftp://afni.nimh.nih.gov/dicom/ (1000+ pages of PDF).



AFNI program: dicom_to_raw
Usage: dicom_to_raw fname ...
Reads images from DICOM file 'fname' and writes them to raw
file(s) 'fname.raw.0001' etc.



AFNI program: djpeg
usage: /var/www/html/pub/dist/bin/linux_gcc32/djpeg [switches] [inputfile]
Switches (names may be abbreviated):
  -colors N      Reduce image to no more than N colors
  -fast          Fast, low-quality processing
  -grayscale     Force grayscale output
  -scale M/N     Scale output image by fraction M/N, eg, 1/8
  -bmp           Select BMP output format (Windows style)
  -gif           Select GIF output format
  -os2           Select BMP output format (OS/2 style)
  -pnm           Select PBMPLUS (PPM/PGM) output format (default)
  -targa         Select Targa output format
Switches for advanced users:
  -dct int       Use integer DCT method (default)
  -dct fast      Use fast integer DCT (less accurate)
  -dct float     Use floating-point DCT method
  -dither fs     Use F-S dithering (default)
  -dither none   Don't use dithering in quantization
  -dither ordered  Use ordered dither (medium speed, quality)
  -map FILE      Map to colors used in named image file
  -nosmooth      Don't use high-quality upsampling
  -onepass       Use 1-pass quantization (fast, low quality)
  -maxmemory N   Maximum memory to use (in kbytes)
  -outfile name  Specify name for output file
  -verbose  or  -debug   Emit debug output



AFNI program: ent16
Usage: ent16 [-%nn]
Computes an estimate of the entropy of stdin.
If the flag '-%75' is given (e.g.), then the
  exit status is 1 only if the input could be
  compressed at least 75%, otherwise the exit
  status is 0.  Legal values of 'nn' are 1..99.
In any case, the entropy and compression estimates
  are printed to stdout, even if no '-%nn' flag is
  given.

METHOD: entropy is estimated by building a histogram
        of all 16 bit words in the input, then summing
        over -p[i]*log2(p[i]), i=0..65535.  The compression
        estimate seems to work pretty well for gzip -1
        in most cases of binary image data.
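
A minimal C sketch of this method (not the actual ent16 source; it
reads stdin and prints only the entropy, ignoring a trailing odd byte):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      static double h[65536];            /* histogram of 16-bit words */
      double n = 0.0 , ent = 0.0 ;
      int c1 , c2 ;

      while( (c1=getchar()) != EOF && (c2=getchar()) != EOF ){
          h[ (c1 << 8) | c2 ]++ ; n++ ;  /* count each 16-bit word */
      }
      if( n == 0.0 ) return 0 ;

      for( int i=0 ; i < 65536 ; i++ )   /* sum of -p[i]*log2(p[i]) */
          if( h[i] > 0.0 ) ent -= (h[i]/n) * log2(h[i]/n) ;

      printf("entropy = %.4f bits per 16-bit word\n", ent) ;
      return 0 ;
  }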

SAMPLE USAGE (csh syntax):
  ent16 -%75 < fred+orig.BRIK
  if( $status == 1 ) gzip -1v fred+orig.BRIK



AFNI program: fdrval
Usage: fdrval [options] dset sub val [val ...]

Reads FDR curve data from the header of dset for sub-brick
#sub and computes the q-value when the sub-brick statistical
threshold is set to val.

OPTIONS
-------
 -pval   = also output the p-value (on the same line, after q)
 -ponly  = don't output q-values, just p-values
 -qonly  = don't output p-values, just q-values [the default]

NOTES
-----
* Output for each 'val' is written to stdout.
* If the q-value can't be computed, then 1.0 is output.
* Example:
    fdrval Fred_REML+orig 0 `count -scale 0.1 10 20` | 1dplot -stdin
  Uses the 'count' program to input a sequence of values, and then
  pipes into the 1dplot program to make a graph of F vs. q.
* See the link below for information on how AFNI computes FDR curves:
    http://afni.nimh.nih.gov/pub/dist/doc/misc/FDR/FDR_Jan2008.pdf
* Also see the output of '3dFDR -help'

-- A quick hack by RWCox -- 15 Oct 2008 -- PG Wodehouse's birthday!

++ Compile date = Mar 13 2009




AFNI program: file_tool

/var/www/html/pub/dist/bin/linux_gcc32/file_tool - display or modify sections of a file

    This program can be used to display or edit data in arbitrary
    files.  If no '-mod_data' option is provided (with DATA), it
    is assumed the user wishes only to display the specified data
    (using both '-offset' and '-length', or using '-ge_XXX').

  usage: /var/www/html/pub/dist/bin/linux_gcc32/file_tool [options] -infiles file1 file2 ...

  examples:

   ----- help examples -----

   1. get detailed help:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -help

   2. get descriptions of GE struct elements:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -help_ge

   ----- GEMS 4.x and 5.x display examples -----

   1. display GE header and extras info for file I.100:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge_all -infiles I.100

   2. display GEMS 4.x series and image headers for file I.100:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge4_all -infiles I.100

   3. display run numbers for every 100th I-file in this directory

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge_uv17 -infiles I.?42
      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge_run  -infiles I.?42

   ----- general value display examples -----

   1. display the 32 characters located 100 bytes into each file:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -offset 100 -length 32 -infiles file1 file2

   2. display the 8 4-byte reals located 100 bytes into each file:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -disp_real4 -offset 100 -length 32 -infiles file1 file2

   3. display 8 2-byte hex integers, 100 bytes into each file:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -disp_hex2 -offset 100 -length 16 -infiles file1 file2

   ----- ANALYZE file checking examples -----

   1. define the field contents of an ANALYZE header

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -def_ana_hdr

   2. display the field contents of an ANALYZE file

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -disp_ana_hdr -infiles dset.hdr

   3. display field differences between 2 ANALYZE headers

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -diff_ana_hdrs -infiles dset1.hdr dset2.hdr

   4. display field differences between 2 ANALYZE headers (in HEX)

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -diff_ana_hdrs -hex -infiles dset1.hdr dset2.hdr

   5. modify some fields of an ANALYZE file

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_ana_hdr -prefix new.hdr -mod_field smin 0   \
         -mod_field descrip 'test ANALYZE file'           \
         -mod_field pixdim '0 2.1 3.1 4 0 0 0 0 0'        \
         -infiles old.hdr

   ----- script file checking examples -----

   1. in each file, check whether it is a UNIX file type

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -show_file_type -infiles my_scripts_*.txt

   2. in each file, look for spaces after trailing backslashes '\'

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -show_bad_backslash -infiles my_scripts_*.txt

   ----- character modification examples -----

   1. in each file, change the 8 characters at 2515 to 'hi there':

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data "hi there" -offset 2515 -length 8 -infiles I.*

   2. in each file, change the 21 characters at 2515 to all 'x's
      (and print out extra debug info)

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -debug 1 -mod_data x -mod_type val -offset 2515 \
                -length 21 -infiles I.*

   ----- raw number modification examples -----

  1. in each file, change the 3 short integers starting at position
     2508 to '2 -419 17'

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data '2 -419 17' -mod_type sint2 -offset 2508 \
                -length 6 -infiles I.*

  2. in each file, change the 3 binary floats starting at position
     2508 to '-83.4 2 17' (and set the next 8 bytes to zero by
     setting the length to 20, instead of just 12).

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data '-83.4 2 17' -mod_type float4 -offset 2508 \
                -length 20 -infiles I.*

  3. in each file, change the 3 binary floats starting at position
     2508 to '-83.4 2 17', and apply byte swapping

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data '-83.4 2 17' -mod_type float4 -offset 2508 \
                -length 12 -swap_bytes -infiles I.*

  notes:

    o  Use of '-infiles' is required.
    o  Use of '-length' or a GE information option is required.
    o  As of this version, only modification with text is supported.
       Editing binary data is coming soon to a workstation near you.

  special options:

    -help              : show this help information
                       : e.g. -help

    -version           : show version information
                       : e.g. -version

    -hist              : show the program's modification history

    -debug LEVEL       : print extra info along the way
                       : e.g. -debug 1
                       : default is 0, max is 2

  required 'options':

    -infiles f1 f2 ... : specify input files to print from or modify
                       : e.g. -infiles file1
                       : e.g. -infiles I.*

          Note that '-infiles' should be the final option.  This is
          to allow the user an arbitrary number of input files.

  GE info options:

      -ge_all          : display GE header and extras info
      -ge_header       : display GE header info
      -ge_extras       : display extra GE image info
      -ge_uv17         : display the value of uv17 (the run #)
      -ge_run          : (same as -ge_uv17)
      -ge_off          : display file offsets for various fields

  GEMS 4.x info options:

      -ge4_all         : display GEMS 4.x series and image headers
      -ge4_image       : display GEMS 4.x image header
      -ge4_series      : display GEMS 4.x series header
      -ge4_study       : display GEMS 4.x study header

  ANALYZE info options:

      -def_ana_hdr     : display the definition of an ANALYZE header
      -diff_ana_hdrs   : display field differences between 2 headers
      -disp_ana_hdr    : display ANALYZE headers
      -hex             : display field values in hexadecimal
      -mod_ana_hdr     : modify ANALYZE headers
      -mod_field       : specify a field and value(s) to modify

      -prefix          : specify an output filename
      -overwrite       : specify to overwrite the input file(s)

  script file options:

      -show_bad_backslash : show lines with whitespace after '\'

          This is meant to find problems in script files where the
          script programmer has spaces or tabs after a final '\'
          on the line.  That would break the line continuation.

      -show_file_type  : print file type of UNIX, Mac or DOS

          Shell scripts need to be UNIX type files.  This option
          will inform the programmer if there are end of line
          characters that define an alternate file type.

  raw ascii options:

    -length LENGTH     : specify the number of bytes to print/modify
                       : e.g. -length 17

          This includes numbers after the conversion to binary.  So
          if -mod_data is '2 -63 186', and -mod_type is 'sint2' (or
          signed shorts), then 6 bytes will be written (2 bytes for
          each of 3 short integers).

       ** Note that if the -length argument is MORE than what is
          needed to write the numbers out, the remainder of the length
          bytes will be written with zeros.  If '17' is given for
          the length, and 3 short integers are given as data, there 
          will be 11 bytes of 0 written after the 6 bytes of data.

    -mod_data DATA     : specify a string to change the data to
                       : e.g. -mod_data hello
                       : e.g. -mod_data '2 -17.4 649'
                       : e.g. -mod_data "change to this string"

          This is the data that will be written into the modified
          file.  If the -mod_type is 'str' or 'char', then the
          output data will be those characters.  If the -mod_type
          is any other (i.e. a binary numerical format), then the
          output will be the -mod_data, converted from numerical
          text to binary.

       ** Note that a list of numbers must be contained in quotes,
          so that it will be processed as a single parameter.

    -mod_type TYPE     : specify the data type to write to the file
                       : e.g. -mod_type string
                       : e.g. -mod_type sint2
                       : e.g. -mod_type float4
                       : default is 'str'

        TYPE can be one of:

          str       : perform a string substitution
          char, val : perform a (repeated?) character substitution
          uint1     : single byte unsigned int   (binary write)
          sint1     : single byte   signed int   (binary write)
          uint2     : two    byte unsigned int   (binary write)
          sint2     : two    byte   signed int   (binary write)
          uint4     : four   byte unsigned int   (binary write)
          sint4     : four   byte   signed int   (binary write)
          float4    : four   byte floating point (binary write)
          float8    : eight  byte floating point (binary write)

          If 'str' is used, which is the default action, the data is
          replaced by the contents of the string DATA (from the
          '-mod_data' option).

          If 'char' is used, then LENGTH bytes are replaced by the
          first character of DATA, repeated LENGTH times.

          For any of the others, the list of numbers found in the
          -mod_data option will be written in the supplied binary
          format.  LENGTH must be large enough to accommodate this
          list.  If LENGTH is larger, the output will be padded
          with zeros, to fill to the requested length.

    -offset OFFSET     : use this offset into each file
                       : e.g. -offset 100
                       : default is 0

          This is the offset into each file for the data to be
          read or modified.

    -quiet             : do not output header information

  numeric options:

    -disp_hex          : display bytes in hex
    -disp_hex1         : display bytes in hex
    -disp_hex2         : display 2-byte integers in hex
    -disp_hex4         : display 4-byte integers in hex

    -disp_int2         : display 2-byte integers
    -disp_int4         : display 4-byte integers

    -disp_real4        : display 4-byte real numbers

    -swap_bytes        : use byte-swapping on numbers

          If this option is used, then byte swapping is done on any
          multi-byte numbers read from or written to the file.

  - R Reynolds, version: 3.8 (June 19, 2008), compiled: Mar 13 2009




AFNI program: float_scan
Usage: float_scan [options] input_filename
Scans the input file of IEEE floating point numbers for
illegal values: infinities and not-a-number (NaN) values.

Options:
  -fix     = Writes a copy of the input file to stdout (which
               should be redirected using '>'), replacing
               illegal values with 0.  If this option is not
               used, the program just prints out a report.
  -v       = Verbose mode: print out index of each illegal value.
  -skip n  = Skip the first n floating point locations
               (i.e., the first 4*n bytes) in the file

N.B.: This program does NOT work on compressed files, nor does it
      work on byte-swapped files (e.g., files transferred between
      Sun/SGI/HP and Intel platforms), nor does it work on images
      stored in the 'flim' format!

The program 'exit status' is 1 if any illegal values were
found in the input file.  If no errors were found, then
the exit status is 0. You can check the exit status by
using the shell variable $status.  A C-shell example:
   float_scan fff
   if ( $status == 1 ) then
      float_scan -fix fff > Elvis.Aaron.Presley
      rm -f fff
      mv Elvis.Aaron.Presley fff
   endif



AFNI program: from3d
++ from3d: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: B. Douglas Ward
Usage:   from3d [options] -input fname -prefix rname
Purpose: Extract 2D image files from a 3D AFNI dataset.
Options:
-v             Print out verbose information during the run.
-nsize         Adjust size of 2D data file to be NxN, by padding
                 with zeros, where N is a power of 2.
-raw           Write images in 'raw' format (just the data bytes)
                 N.B.: there will be no header information saying
                       what the image dimensions are - you'll have
                       to get that information from the x and y
                       axis information output by 3dinfo.
-float         Write images as floats, no matter what they are in
                 the dataset itself.
-zfirst num    Set 'num' = number of first z slice to be extracted.
                 (default = 1)
-zlast num     Set 'num' = number of last z slice to be extracted.
                 (default = largest)
-tfirst num    Set 'num' = number of first time slice to be extracted.
                 (default = 1)
-tlast num     Set 'num' = number of last time slice to be extracted.
                 (default = largest)
-input fname   Read 3D dataset from file 'fname'.
                 'fname' may include a sub-brick selector list.
-prefix rname  Write 2D images using prefix 'rname'.

               (-input and -prefix are non-optional options: they)
               (must be present or the program will not execute. )

N.B.: * Image data is extracted directly from the dataset bricks.
         If a brick has a floating point scaling factor, it will NOT
         be applied.
      * Images are extracted parallel to the xy-plane of the dataset
         orientation (which can be determined by program 3dinfo).
         This is the order in which the images were input to the
         dataset originally, via to3d.
      * If either of these conditions is unacceptable, you can also
         try to use the Save:bkg function from an AFNI image window.
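
Example (dataset and prefix names are illustrative):
   from3d -v -zfirst 5 -zlast 5 -input func+orig -prefix func.slice
extracts slice 5 at every time point, writing 2D images whose names
start with 'func.slice'.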



AFNI program: ftosh
ftosh: convert float images to shorts, by RW Cox
Usage: ftosh [options] image_files ...

 where the image_files are in the same format to3d accepts
 and where the options are

  -prefix pname:  The output files will be named in the format
  -suffix sname:  'pname.index.sname' where 'pname' and 'sname'
  -start  si:     are strings given by the first 2 options.
  -step   ss:     'index' is a number, given by 'si+(i-1)*ss'
                  for the i-th output file, for i=1,2,...
              *** Default pname = 'sh'
              *** Default sname = nothing at all
              *** Default si    = 1
              *** Default ss    = 1

  -nsize:         Enforce the 'normal size' option, to make
                  the output images 64x64, 128x128, or 256x256.

  -scale sval:    'sval' and 'bval' are numeric values; if
  -base  bval:    sval is given, then the output images are
  -top   tval:    formed by scaling the inputs by the formula
                  'output = sval*(input-bval)'.
              *** Default sval is determined by finding
                  V = largest abs(input-bval) in all the input
                  images and then sval = tval / V.
              *** Default tval is 32000; note that tval is only
                  used if sval is not given on the command line.
              *** Default bval is 0.
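
              *** Example: with the default bval=0 and tval=32000, if
                  the largest abs(input-bval) over all inputs is 8.0,
                  then sval = 32000/8.0 = 4000, and an input value of
                  2.5 becomes 4000*(2.5-0) = 10000 in the output.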



AFNI program: ge_header
Usage: ge_header [-verb] file ...
Prints out information from the GE image header of each file.
Options:
 -verb: Print out some probably useless extra stuff.



AFNI program: gen_epi_review.py

===========================================================================
gen_epi_review.py:

    This program will generate an AFNI processing script that can be used
    to review EPI data (possibly called @review_epi_data).

    The @review_epi_data script is meant to provide an easy way to quickly
    review the (preferably un-altered) EPI data.  It runs afni and then a
    looping set of drive_afni commands.

    Note that there should not be another instance of 'afni' running on
    the system when the script is run, as 'drive_afni' will communicate
    with only the first invoked 'afni' program.

    The simplest usage requires just the -dsets option, along with the
    necessary pieces of the gen_epi_review.py command.

--------------------------------------------------
examples:

    These examples assume the EPI dataset names produced as a result of
    the afni_proc.py processing script proc.sb23.blk, produced by the
    command in AFNI_data4/s1.afni_proc.block, provided with the class data.

    Yes, that means running the s1.afni_proc.block (tcsh) script to call
    the afni_proc.py (python) script to produce the proc.sb23.blk (tcsh)
    script, which calls the gen_epi_review.py (python) script to produce
    the @review_epi_data (tcsh) script, which can be run to review your EPI 
    data.  Ahhhhhhh...  :)

    Note that when using wildcards, the datasets must exist in the current
    directory.  But when using the {1,2,..} format, the files do not yet
    need to exist.  So command #2 could be run anywhere and still create the
    same script, no data needed.

    1. simple usage, just providing datasets (and general options)

        gen_epi_review.py -dsets pb00.sb23.blk.r??.tcat+orig.HEAD

    2. expand 5 runs with shell notation, rather than wildcards, and
       specify an alternate script name

        gen_epi_review.py -dsets pb00.sb23.blk.r{1,2,3,4,5}.tcat        \
                -script @review_epi_5runs

    3. choose to see all three image windows

        gen_epi_review.py -dsets pb00.sb23.blk.r*.tcat+orig.HEAD        \
                -windows sagittal axial coronal                         \
                -script @review_epi_windows

    4. specify the graph size and position (can do the same for image windows)

        gen_epi_review.py -dsets pb00.sb23.blk.r*.tcat+orig.HEAD        \
                -gr_size 600 450 -gr_xoff 100 -gr_yoff 200              \
                -script @review_epi_posn

----------------------------------------------------------------------
OPTIONS:
----------------------------------------------------------------------
informational arguments:

    -help                       : display this help
    -hist                       : display the modification history
    -show_valid_opts            : display all valid options (short format)
    -ver                        : display the version number

----------------------------------------
required argument:

    -dsets dset1 dset2 ...      : specify input datasets for processing

        e.g. -dsets epi_r*+orig.HEAD

        This option is used to provide a list of datasets to be processed
        in the resulting script.

----------------------------------------
optional arguments:

    -script SCRIPT_NAME         : specify the name of the generated script

        e.g. -script review.epi.subj23

        By default, the script name will be '@' followed by the name used
        for the '-generate' option.  So when using '-generate review_epi_data',
        the default script name will be '@review_epi_data'.

        This '-script' option can be used to override the default.

    -verb LEVEL                 : specify a verbosity level

        e.g. -verb 3

        Use this option to print extra information to the screen

    -windows WIN1 WIN2 ...      : specify the image windows to open

        e.g. -windows sagittal axial

        By default, the script will open 2 image windows (sagittal and axial).
        This option can be used to specify exactly which windows get opened,
        and in which order.

        Acceptable window names are: sagittal, axial, coronal

----------------------------------------
geometry arguments (optional):

    -im_size dimX dimY          : set image dimensions, in pixels

        e.g. -im_size 300 300

        Use this option to alter the size of the image windows.  This
        option takes 2 parameters, the pixels in the X and Y directions.

    -im_xoff XOFFSET            : set the X-offset for the image, in pixels

        e.g. -im_xoff 420

        Use this option to alter the placement of images along the x-axis.
        Note that the x-axis is across the screen, from left to right.

    -im_yoff YOFFSET            : set the Y-offset for the image, in pixels

        e.g. -im_yoff 400

        Use this option to alter the placement of images along the y-axis.
        Note that the y-axis is down the screen, from top to bottom.

    -gr_size dimX dimY          : set graph dimensions, in pixels

        e.g. -gr_size 400 300

        Use this option to alter the size of the graph window.  This option
        takes 2 parameters, the pixels in the X and Y directions.

    -gr_xoff XOFFSET            : set the X-offset for the graph, in pixels

        e.g. -gr_xoff 0

        Use this option to alter the placement of the graph along the x-axis.
        Note that the x-axis is across the screen, from left to right.

    -gr_yoff YOFFSET            : set the Y-offset for the graph, in pixels

        e.g. -gr_yoff 400

        Use this option to alter the placement of the graph along the y-axis.
        Note that the y-axis is down the screen, from top to bottom.


- R Reynolds  June 27, 2008
===========================================================================




AFNI program: gifti_tool
------------------------------------------------------------
gifti_tool  - create, display, modify or compare GIFTI datasets

  general examples:

    1. read in a GIFTI dataset (set verbose level?  show GIFTI dataset?)

         gifti_tool -infile dset.gii
         gifti_tool -infile dset.gii -verb 3
         gifti_tool -infile dset.gii -show_gifti

    2. copy a GIFTI dataset

      a. create a simple copy, and check for differences

         gifti_tool -infile dset.gii -write_gifti copy.gii
         diff dset.gii copy.gii

      b. copy only 3 DataArray indices: 4, 0, 5

         gifti_tool -infile time_series.gii -write_gifti ts3.gii  \
                    -read_DAs 4 0 5
               OR

         gifti_tool -infile time_series.gii'[4,0,5]'  \
                    -write_gifti ts3.gii

    3. write datasets in other formats

      a. FreeSurfer-style .asc surface dataset

         gifti_tool -infile pial.gii -write_asc pial.asc

      b. .1D time series surface dataset

         gifti_tool -infile time_series.gii -write_1D ts.1D

    4. create a new gifti dataset from nothing, where

      a. - the dataset has 3 DataArray elements
         - the data will be of type 'short' (NIFTI_TYPE_INT16)
         - the intent codes will reflect a t-test
         - the data will be 2-dimensional (per DataArray), 5 by 2 shorts
         - memory will be allocated for the data (a modification option)
         - the result will be written to created.gii

         gifti_tool -new_dset                                \
                    -new_numDA 3 -new_dtype NIFTI_TYPE_INT16 \
                    -new_intent NIFTI_INTENT_TTEST           \
                    -new_ndim 2 -new_dims 5 2 0 0 0 0        \
                    -mod_add_data -write_gifti created.gii

      b. - the dataset has 12 DataArray elements (40 floats each)
         - the data is partitioned over 2 files (so 6*40 floats in each)

           ** Note: since dataset creation does not add data (without
                    -mod_add_data), this operation will not create or
                    try to overwrite the external datafiles.

         gifti_tool -new_dset -new_numDA 12                   \
                    -new_ndim 1 -new_dims 40 0 0 0 0 0        \
                    -set_extern_filelist ext1.bin ext2.bin    \
                    -write_gifti points_to_extern.gii

    5. modify a gifti dataset

      a. apply various modifications at the GIFTI level and to all DAs

         - set the Version attribute at the GIFTI level
         - set 'Date' as GIFTI MetaData, with value of today's date
         - set 'Description' as GIFTI MetaData, with some value
         - set all DA Intent attributes to be an F-test
         - set 'Name' as an attribute of all DAs, with some value
         - read created.gii, and write to first_mod.gii

         gifti_tool -mod_gim_atr Version 1.0                       \
                    -mod_gim_meta Date "`date`"                    \
                    -mod_gim_meta Description 'modified surface'   \
                    -mod_DA_atr Intent NIFTI_INTENT_FTEST          \
                    -mod_DA_meta Name 'same name for all DAs'      \
                    -infile created.gii -write_gifti first_mod.gii

      b. modify the 'Name' attribute in DA index #42 only

         gifti_tool -mod_DA_meta Name 'data from pickle #42'       \
                    -mod_DAs 42                                    \
                    -infile stats.gii -write_gifti mod_stats.gii

      c. set the data to point to a single external data file, without
         overwriting the external file on write (so use -no_data), 
         and where the DataArrays will point to sequential partitions
         of the file

         gifti_tool -infiles created.gii -no_data          \
                    -set_extern_filelist ex_data.bin       \
                    -write_gifti extern.gii

    6. compare 2 gifti datasets
       (compare GIFTI structures, compare data, and report all diffs)

         gifti_tool -compare_gifti -compare_data -compare_verb 3 \
                    -infiles created.gii first_mod.gii

    7. copy MetaData from one dataset to another
       (any old Value will be replaced if the Name already exists)

         - copy every (ALL) MetaData element at the GIFTI level
         - copy MetaData named 'Label' per DataArray element
         - only apply DataArray copies to indices 0, 3 and 6
         - first input file is the source, second is the destination
         - write the modified 'destination.gii' dataset to meta_copy.gii

         gifti_tool -copy_gifti_meta ALL                   \
                    -copy_DA_meta Label                    \
                    -DA_index_list 0 3 6                   \
                    -infiles source.gii destination.gii    \
                    -write_gifti meta_copy.gii

----------------------------------------------------------------------

  (all warranties are void in Montana, and after 4 pm)

----------------------------------------------------------------------
  informational options:

     -help             : display this help
     -hist             : display the modification history of gifti_tool
     -ver              : display the gifti_tool version
     -gifti_hist       : display the modification history of gifticlib
     -gifti_ver        : display gifticlib version
     -gifti_dtd_url    : display the gifti DTD URL
     -gifti_zlib       : display whether the zlib is linked in library

  ----------------------------------------
  general/input options

     -b64_check   TYPE : set method for checking base64 errors

           e.g. -b64_check COUNT

           This option sets the preference for how to deal with errors
           in Base64 encoded data (whether compressed or not).  The
           default is SKIPnCOUNT, which skips any illegal characters,
           and reports a count of the number found.

               TYPE = NONE       : no checks - assume all is well
               TYPE = DETECT     : report whether errors were found
               TYPE = COUNT      : count the number of bad chars
               TYPE = SKIP       : ignore any bad characters
               TYPE = SKIPnCOUNT : ignore but count bad characters

           This default adds perhaps 10% to the reading time.

     -buf_size    SIZE : set the buffer size (given to expat library)

           e.g. -buf_size 1024

     -DA_index_list I0 I1 ... : specify a list of DataArray indices

           e.g. -DA_index_list 0
           e.g. -DA_index_list 0 17 19

           This option is used to specify a list of DataArray indices
           for use via some other option (such as -copy_DA_meta).

           Each DataArray element corresponding to one of the given
           indices will have the appropriate action applied, such as
           copying a given MetaData element from the source dataset
           to the destination dataset.

           Note that this differs from -read_DAs, which specifies which
           DataArray elements to even read in.  Both options could be
           used in the same command, such as if one wanted to copy the
           'Name' MetaData from index 17 of a source dataset into the
           MetaData of the first DataArray in a dataset with only two
           DataArray elements.

           e.g. gifti_tool -infiles source.gii dest.gii        \
                           -write_gifti new_dest.gii           \
                           -copy_DA_meta Name                  \
                           -read_DAs 17 17                     \
                           -DA_index_list 0

           Note that -DA_index_list applies to the indices _after_ the
           datasets are read in.

     -gifti_test       : test whether each gifti dataset is valid

           This performs a consistency check on each input GIFTI
           dataset.  Lists and dimensions must be consistent.

     -infile     INPUT : specify one or more GIFTI datasets as input

           e.g. -input pial.gii
           e.g. -input run1.gii run2.gii
           e.g. -input MAKE_IM                 (create a new image)
           e.g. -input run1.gii'[3,4,5]'       (read DAs 3,4,5    )
           e.g. -input run1.gii'[0..16(2)]'    (read evens from 0 to 16)
           e.g. -input run1.gii'[4..$]'        (read all but 0..3)

           There are 2 special ways to specify input.  One is via the
           name 'MAKE_IM'.  That 'input' filename tells gifti_tool to
           create a new dataset, applying any '-new_*' options to it.

               (refer to options: -new_*)

           The other special way is to specify which DataArray elements
           should be read in, using AFNI-style syntax within '[]'.  The
           quotes prevent the shell from interpreting the brackets.

           DataArray indices are zero-based.

           The list of DAs can be comma-delimited, and can use '..' or
           '-' to specify a range, and a value in parentheses to be used
           as a step.  The '$' character means the last index (numDA-1).

     -no_data          : do not read in data

           This option means not to read in the Data element in any
           DataArray, akin to reading only the header.

     -no_updates       : do not allow the library to modify metadata

           By default, the library may update some metadata fields, such
           as 'gifticlib-version'.  The -no_updates option will prevent
           that operation.

     -read_DAs s0 ...  : read DataArray list indices s0,... from input

           e.g. -read_DAs 0 4 3 3 8
           e.g. -input run1.gii -read_DAs 0 2 4 6 8
           e.g. -input run1.gii'[0..8(2)]'              (same effect)

           Specify a list of DataArray indices to read.  This is a
           simplified form of using brackets '[]' with -input names.

     -show_gifti       : show final gifti image

           Display all of the dataset information on the screen (sans
           data).  This includes meta data and all DataArray elements.

     -verb        VERB : set verbose level   (default: 1)

           e.g. -verb 2

           Print extra information to the screen.  The VERB level can
           be from 0 to 8, currently.

           Level 0 is considered 'quiet' mode, and should only report
           serious errors.  Level 1 is the default.

  ----------------------------------------
  output options

     -encoding    TYPE : set the data encoding for any output file

           e.g. -encoding BASE64GZIP

               TYPE = ASCII      : ASCII encoding
               TYPE = BASE64     : base64 binary
               TYPE = BASE64GZIP : base64 compressed binary

           This operation can also be performed via -mod_DA_atr:
           e.g. -mod_DA_atr Encoding BASE64GZIP

     -write_1D    DSET : write out data to AFNI style 1D file

           e.g. -write_1D stats.1D

           Currently, all DAs need to be of the same datatype.  This
           restriction could be lifted if there is interest.

     -write_asc   DSET : write out geometry to FreeSurfer style ASC file

           e.g. -write_asc pial.asc

           To write a surface file in FreeSurfer asc format, it must
           contain DataArray elements of intent NIFTI_INTENT_POINTSET
           and NIFTI_INTENT_TRIANGLE.  The POINTSET data is written as
           node coordinates and the TRIANGLE data as triangles (node
           index triplets).

     -write_gifti DSET : write out dataset as gifti image

           e.g. -write_gifti new.pial.gii

     -zlevel     LEVEL : set compression level (-1 or 0..9)

           This option sets the compression level used by zlib.  Some
           LEVEL values are noteworthy:

              -1   : specify to use the default of zlib (currently 6)
               0   : no compression (but still needs a few extra bytes)
               1   : fastest but weakest compression
               6   : default (good speed/compression trade-off)
               9   : slowest but strongest compression

  ----------------------------------------
  modification options

     These modification options will affect every DataArray element
     specified by the -mod_DAs option.  If the option is not used,
     then ALL DataArray elements will be affected.

     -mod_add_data     : add data to empty DataArray elements

           Allocate data in every DataArray element.  Datasets can be
           created without any stored data.  This will allocate data
           and fill it with zeros of the given type.

     -mod_DA_atr  NAME VALUE : set the NAME=VALUE attribute pair

           e.g. -mod_DA_atr Intent NIFTI_INTENT_ZSCORE

           This option will set the DataArray attribute corresponding
           to NAME to the value, VALUE.  Attribute name=value pairs are
           specified in the gifti DTD (see -gifti_dtd_url).

           One NAME=VALUE pair can be specified per -mod_DA_atr
           option.  Multiple -mod_DA_atr options can be used.

     -mod_DA_meta NAME VALUE : set the NAME=VALUE pair in DA's MetaData

           e.g. -mod_DA_meta Description 'the best dataset, ever'

           Add a MetaData entry to each DataArray element for this
           NAME and VALUE.  If 'NAME' already exists, the old value
           is replaced by VALUE.

     -mod_DAs i0 i1 ...      : specify the set of DataArrays to modify

           e.g. -mod_DAs 0 4 5

           Specify the list of DataArray elements to modify.  All the
           -mod_* options apply to this list of DataArray indices.  If
           no -mod_DAs option is used, the operations apply to ALL
           DataArray elements.

           Note that the indices are zero-based, 0 .. numDA-1.

     -mod_gim_atr  NAME VALUE : set the GIFTI NAME=VALUE attribute pair

           e.g. -mod_gim_atr Version 3.141592

           Set the GIFTI element attribute corresponding to NAME to the
           value, VALUE.

           Given that numDA is computed and version will rarely change,
           this option will probably not feel much love.

     -mod_gim_meta NAME VALUE : add this pair to the GIFTI MetaData

           e.g. -mod_gim_meta date "`date`"

           Add a MetaData entry to each DataArray element for this
           NAME and VALUE pair.  If NAME exists, VALUE will replace
           the old value.

     -mod_to_float            : change all DataArray data to float

           Convert all DataArray elements of all datasets to datatype
           NIFTI_TYPE_FLOAT32 (4-byte floats).  If the data does not
           actually exist, only the attribute will be set.  Otherwise
           all of the data will be converted.  There are some types
           for which this operation may not be appropriate.

  ----------------------------------------

  creation (new dataset) options

     -new_dset         : create a new GIFTI dataset
     -new_numDA  NUMDA : new dataset will have NUMDA DataArray elements
                         e.g. -new_numDA 3
     -new_intent INTENT: DA elements will have intent INTENT
                         e.g. -new_intent NIFTI_INTENT_FTEST
     -new_dtype   TYPE : set datatype to TYPE
                         e.g. -new_dtype NIFTI_TYPE_FLOAT32
     -new_ndim NUMDIMS : set Dimensionality to NUMDIMS (see -new_dims)
     -new_dims D0...D5 : set dims[] to these 6 values
                         e.g. -new_ndim 2 -new_dims 7 2 0 0 0 0
     -new_data         : allocate space for data in created dataset

  ----------------------------------------
  comparison options

     -compare_gifti           : specifies to compare two GIFTI datasets

           This compares all elements of the two GIFTI structures.
            The attributes, LabelTables, and MetaData are compared, and then
           each of the included DataArray elements.  All sub-structures
           of the DataArrays are compared, except for the actual 'data',
           which requires the '-compare_data' flag.

           There must be exactly 2 input datasets to use this option.
            See example #6 for sample usage.

     -compare_data            : flag to request comparison of the data

           Data comparison is done per DataArray element.

           Comparing data is a separate operation from comparing GIFTI.
           Neither implies the other.

     -compare_verb LEVEL      : set the verbose level of comparisons

           Data comparison is done per DataArray element.  Setting the
           verb level will have the following effect:

           0 : quiet, only return whether there was a difference
           1 : show whether there was a difference
           2 : show whether there was a difference per DataArray
           3 : show all differences

  ----------------------------------------
  MetaData copy options

     -copy_gifti_meta MD_NAME      : copy MetaData with name MD_NAME

           e.g. -copy_gifti_meta AFNI_History

           Copy the MetaData with the given name from the first input
           dataset to the second (last).  This applies to MetaData at
           the GIFTI level (not in the DataArray elements).

     -copy_DA_meta MD_NAME         : copy MetaData with name MD_NAME

           e.g. -copy_DA_meta intent_p1

           Copy the MetaData with the given name from the first input
           dataset to the second (last).  This applies to MetaData at
           DataArray level.

           This will apply to all DataArray elements, unless the
           -DA_index_list option is used to specify a zero-based
           index list.

           see also -DA_index_list

------------------------------------------------------------
see the GIfTI community web site at:

           http://www.nitrc.org/projects/gifti

R Reynolds, National Institutes of Health
------------------------------------------------------------



AFNI program: gui_xmat.py
** python module not found: numpy
** python module not found: wx
** python module not found: matplotlib

     -- for details, consider xmat_tool -test_libs
     -- also, many computations do not require the GUI
        (e.g. 'xmat_tool -load_xmat X.xmat.1D -show_cormat_warnings')
   



AFNI program: im2niml
Usage: im2niml imagefile [imagefile ...]
Converts the input image(s) to a text-based NIML element
and writes the result to stdout.  Sample usage:
 aiv -p 4444 &
 im2niml zork.jpg | nicat tcp:localhost:4444
-- Author: RW Cox.



AFNI program: imand
Usage: imand [-thresh #] input_images ... output_image
* Only pixels nonzero in all input images
* (and above the threshold, if given) will be output.
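* For example (filenames illustrative):
    imand -thresh 100 mask1.im mask2.im overlap.im
  writes to overlap.im only those pixels that are nonzero in both
  inputs and above 100.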



AFNI program: imaver
Usage: imaver out_ave out_sig input_images ...
       (use - to skip output of out_ave and/or out_sig)
* Computes the mean and standard deviation, pixel-by-pixel,
   of a whole bunch of images.
* Writes output images in 'short int' format if inputs are
   short ints, otherwise output images are floating point.
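* For example (filenames illustrative), to get only the mean image:
    imaver mean.im - run1.im run2.im run3.im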



AFNI program: imcalc
Do arithmetic on 2D images, pixel-by-pixel.
Usage: imcalc options
where the options are:
  -datum type = Coerce the output data to be stored as the given type,
                  which may be byte, short, or float.
                  [default = datum of first input image on command line]
  -a dname    = Read image 'dname' and call the voxel values 'a'
                  in the expression.  'a' may be any letter from 'a' to 'z'.
               ** If some letter name is used in the expression, but not
                  present in one of the image options here, then that
                  variable is set to 0.
  -expr "expression"
                Apply the expression within quotes to the input images,
                  one voxel at a time, to produce the output image.
                  ("sqrt(a*b)" to compute the geometric mean, for example)
  -output name = Use 'name' for the output image filename.
                  [default='imcalc.out']

See the output of '3dcalc -help' for details on what kinds of expressions
are possible.  Note that complex-valued images cannot be processed (byte,
short, and float are OK).
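
For example, to compute the pixel-wise geometric mean of two images
(filenames are illustrative):
  imcalc -datum float -a im1 -b im2 -expr "sqrt(a*b)" -output geom.mean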



AFNI program: imcat
Usage: imcat [options] fname1 fname2 etc.
Puts a set of images into an image matrix (IM):
 a montage of NX by NY images.
 At least one input image is required (N >= 1).
 If need be, the default is to reuse images until the desired
 NX by NY size is achieved. 
 See options -zero_wrap and -image_wrap for more detail.
 
OPTIONS:
 ++ Options for editing, coloring input images:
  -scale_image SCALE_IMG: Multiply each image IM(i,j) in output
                          image matrix IM by the color or intensity
                          of the pixel (i,j) in SCALE_IMG.
  -scale_intensity: Instead of multiplying by the color of 
                          pixel (i,j), use its intensity 
                          (average color)
  -rgb_out: Force output to be in rgb, even if input is bytes.
            This option is turned on automatically in certain cases.
  -res_in RX RY: Set resolution of all input images to RX by RY pixels.
                 Default is to make all input have the same
                 resolution as the first image.
  -crop L R T B: Crop images by L (Left), R (Right), T (Top), B (Bottom)
                 pixels. Cropping is performed after any resolution
                 change, if one is requested.
 ++ Options for output:
  -zero_wrap: If there are not enough images to fill the matrix,
              blank images are used.
  -image_wrap: If there are not enough images to fill the matrix,
               images on the command line are reused (default)
  -prefix ppp = Prefix the output files with string 'ppp'
  -matrix NX NY: Specify number of images in each row and column 
                 of IM at the same time. 
  -nx NX: Number of images in each row (3 for example below)
  -ny NY: Number of images in each column (4 for example below)
      Example: If 12 images appearing on the command line
               are to be assembled into a 3x4 IM matrix they
               would appear in this order:
                 0  1  2
                 3  4  5
                 6  7  8
                 9  10 11
    NOTE: The program will try to guess the layout if neither NX
          nor NY is specified.
  -matrix_from_scale: Set NX and NY to be the same as the 
                      SCALE_IMG's dimensions. (needs -scale_image)
  -gap G: Put a line G pixels wide between images.
  -gap_col R G B: Set color of line to R G B values.
                  Values range between 0 and 255.

Example 0 (assuming afni is in ~/abin directory):
   Resizing an image:
   imcat -prefix big -res_in 1024 1024 \
         ~/abin/face_zzzsunbrain.jpg 
   imcat -prefix small -res_in 64 64 \
         ~/abin/face_zzzsunbrain.jpg 
   aiv small.ppm big.ppm 

Example 1:
   Stitching together images:
    (Can be used to make very high resolution SUMA images.
     Read about 'Ctrl+r' in SUMA's GUI help.)
   imcat -prefix cat -matrix 14 12 \
         ~/abin/face_*.jpg
   aiv cat.ppm

Example 2 (assuming afni is in ~/abin directory):
   imcat -prefix bigcat -scale_image ~/abin/face_rwcox.jpg \
         -matrix_from_scale -rgb_out -res_in 32 32 ~/abin/face_*.jpg 
   aiv   bigcat.ppm bigcat.ppm 
   Crop/Zoom in to see what was done. In practice, you want to use
   a faster image viewer to examine the result. Zooming on such
   a large image is not fast in aiv.
   Be careful with this toy. Images get real big, real quick.

You can look at the output image file with
  afni -im ppp.ppm  [then open the Sagittal image window]




AFNI program: imcutup
Usage: imcutup [options] nx ny fname1
Breaks up larger images into smaller image files of size
nx by ny pixels.  Intended as an aid to using image files
which have been catenated to make one big 2D image.
OPTIONS:
  -prefix ppp = Prefix the output files with string 'ppp'
  -xynum      = Number the output images in x-first, then y [default]
  -yxnum      = Number the output images in y-first, then x
  -x.ynum     = 2D numbering, x.y format
  -y.xnum     = 2D numbering, y.x format
For example:
  imcutup -prefix Fred 64 64 3D:-1:0:256:128:1:zork.im
will break up the big 256 by 128 image in file zork.im
into 8 images, each 64 by 64.  The output filenames would be
  -xynum  => Fred.001 Fred.002 Fred.003 Fred.004
             Fred.005 Fred.006 Fred.007 Fred.008

  -yxnum  => Fred.001 Fred.003 Fred.005 Fred.007
             Fred.002 Fred.004 Fred.006 Fred.008

  -x.ynum => Fred.001.001 Fred.002.001 Fred.003.001 Fred.004.001
             Fred.001.002 Fred.002.002 Fred.003.002 Fred.004.002

  -y.xnum => Fred.001.001 Fred.001.002 Fred.001.003 Fred.001.004
             Fred.002.001 Fred.002.002 Fred.002.003 Fred.002.004

You may want to look at the input image file with
  afni -im fname  [then open the Sagittal image window]
before deciding on what to do with the image file.

N.B.: the file specification 'fname' must result in a single
      input 2D image - multiple images can't be cut up in one
      call to this program.



AFNI program: imdump
Usage: imdump input_image
* Prints out nonzero pixels in an image;
* Results to stdout; redirect (with >) to save to a file;
* Format: x-index y-index value, one pixel per line.



AFNI program: immask
Usage: immask [-thresh #] [-mask mask_image] [-pos] input_image output_image
* Masks the input_image and produces the output_image;
* Use of -thresh # means all pixels with absolute value below # in
   input_image will be set to zero in the output_image
* Use of -mask mask_image means that only locations that are nonzero
   in the mask_image will be nonzero in the output_image
* Use of -pos means only positive pixels from input_image will be used
* At least one of -thresh, -mask, -pos must be used; more than one is OK.
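* For example (filenames illustrative):
    immask -thresh 50 -mask brain.mask -pos func.im func.masked.im
  zeroes every pixel of func.im whose absolute value is below 50,
  that is not positive, or that lies outside the nonzero part of
  brain.mask.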



AFNI program: imreg
Usage: imreg [options] base_image image_sequence ...
 * Registers each 2D image in 'image_sequence' to 'base_image'.
 * If 'base_image' = '+AVER', will compute the base image as
   the average of the images in 'image_sequence'.
 * If 'base_image' = '+count', will use the count-th image in the
   sequence as the base image.  Here, count is 1,2,3, ....

OUTPUT OPTIONS:
  -nowrite        Don't write outputs, just print progress reports.
  -prefix pname   The output files will be named in the format
  -suffix sname   'pname.index.sname' where 'pname' and 'sname'
  -start  si      are strings given by the first 2 options.
  -step   ss      'index' is a number, given by 'si+(i-1)*ss'
                  for the i-th output file, for i=1,2,...
                *** Default pname = 'reg.'
                *** Default sname = nothing at all
                *** Default si    = 1
                *** Default ss    = 1

  -flim           Write output in mrilib floating point format
                  (which can be converted to shorts using program ftosh).
                *** Default is to write images in format of first
                    input file in the image_sequence.
  -keepsize       Preserve the original image size on output.
                  Without this option (the default), output
                  images are padded to be square.

  -quiet          Don't write progress report messages.
  -debug          Write lots of debugging output!

  -dprefix dname  Write files 'dname'.dx, 'dname'.dy, 'dname'.phi
                    for use in time series analysis.

ALIGNMENT ALGORITHMS:
  -bilinear       Uses bilinear interpolation during the iterative
                    adjustment procedure, rather than the default
                    bicubic interpolation. NOT RECOMMENDED!
  -modes c f r    Uses interpolation modes 'c', 'f', and 'r' during
                    the coarse, fine, and registration phases of the
                    algorithm, respectively.  The modes can be selected
                    from 'bilinear', 'bicubic', and 'Fourier'.  The
                    default is '-modes bicubic bicubic bicubic'.
  -mlcF           Equivalent to '-modes bilinear bicubic Fourier'.

  -wtim filename  Uses the image in 'filename' as a weighting factor
                    for each voxel (the larger the value the more
                    importance is given to that voxel).

  -dfspace[:0]    Uses the 'iterated differential spatial' method to
                    align the images.  The optional :0 indicates to
                    skip the iteration of the method, and to use the
                    simpler linear differential spatial alignment method.
                     ACCURACY: displacements of at most a few pixels.
                *** This is the default method (without the :0).

  -cmass            Initialize the translation estimate by aligning
                    the centers of mass of the images.
              N.B.: The reported shifts from the registration algorithm
                    do NOT include the shifts due to this initial step.

The next two options are used to play with the -dfspace algorithm,
which has a 'coarse' fit phase and a 'fine' fit phase:

  -fine blur dxy dphi  Set the 3 'fine' fit parameters:
                         blur = FWHM of image blur prior to registration,
                                  in pixels [must be > 0];
                         dxy  = convergence tolerance for translations,
                                  in pixels;
                         dphi = convergence tolerance for rotations,
                                  in degrees.

  -nofine              Turn off the 'fine' fit algorithm. By default, the
                         algorithm is on, with parameters 1.0, 0.07, 0.21.
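
Example (filenames illustrative): register a sequence to its average,
initializing with the centers of mass and saving the motion parameters:

  imreg -cmass -prefix reg. -dprefix motion +AVER epi.0*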



AFNI program: imrotate
Usage: imrotate [-linear | -Fourier] dx dy phi input_image output_image
Shifts and rotates an image:
  dx pixels rightwards (not necessarily an integer)
  dy pixels downwards
  phi degrees clockwise
  -linear means to use bilinear interpolation (default is bicubic)
  -Fourier means to use Fourier interpolation
Values outside the input_image are taken to be zero.
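
For example (filenames illustrative), to shift an image 3.5 pixels
rightward and rotate it 30 degrees clockwise with Fourier interpolation:
  imrotate -Fourier 3.5 0 30 input.im rotated.im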



AFNI program: imstack
Usage: imstack [options] image_filenames ...
Stacks up a set of 2D images into one big file (a la MGH).
Options:
  -datum type   Converts the output data file to be 'type',
                  which is either 'short' or 'float'.
                  The default type is the type of the first image.
  -prefix name  Names the output files to be 'name'.b'type' and 'name'.hdr.
                  The default name is 'obi-wan-kenobi'.
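
For example (names illustrative):
  imstack -datum float -prefix mystack slice.*
produces mystack.bfloat and mystack.hdr.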



AFNI program: imstat
Calculation of statistics of one or more images.
Usage: imstat [-nolabel] [-pixstat prefix] [-quiet] image_file ...
  -nolabel        = don't write labels on each file's summary line
  -quiet          = don't print statistics for each file
  -pixstat prefix = if more than one image file is given, then
                     'prefix.mean' and 'prefix.sdev' will be written
                     as the pixel-wise statistics images of the whole
                     collection.  These images will be in the 'flim'
                     floating point format.  [This option only works
                     on 2D images!]
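
For example (filenames illustrative):
  imstat -pixstat group im.0*
prints a summary line per input file and writes group.mean and
group.sdev.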



AFNI program: imupsam
Usage: imupsam [-A] n input_image output_image

*** Consider using the newer imcat for resampling
    byte and rgb images.

* Upsamples the input 2D image by a factor of n and
    writes result into output_image; n must be an
    integer in the range 2..30.
* 7th order polynomial interpolation is used in each
    direction.
* Inputs can be complex, float, short, PGM, PPM, or JPG.
* If input_image is in color (PPM or JPG), output will
    be PPM unless output_image ends in '.jpg'.
* If output_image is '-', the result will be written
    to stdout (so you could pipe it into something else).
* The '-A' option means to write the result in ASCII
    format: all the numbers for the file are output,
    and nothing else (no header info).
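* For example (filenames illustrative), to upsample by a factor of 4:
    imupsam 4 small.pgm big.pgm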
* Author: RW Cox -- 16 April 1999.



AFNI program: inspec

Usage: inspec <-spec specfile> 
              [-detail d] [-prefix newspecname] [-h/-help]
Outputs information found in the specfile.
    -spec specfile: specfile to be read
    -prefix newspecname: rewrite spec file.
    -detail d: level of output detail default is 1.
               Available levels are 1, 2 and 3.
    -h or -help: This message here.
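
Example (specfile name illustrative):
   inspec -spec subj1.spec -detail 2
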
++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

      Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 
     Dec 2 03




AFNI program: lib_matplot.py
** python module not found: numpy
** python module not found: wx
** python module not found: matplotlib

     -- for details, consider xmat_tool -test_libs
     -- also, many computations do not require the GUI
        (e.g. 'xmat_tool -load_xmat X.xmat.1D -show_cormat_warnings')
   



AFNI program: lpc_align.py

    ===========================================================================
    This script is used to align an anatomical T1 to an epi (T2/T2*) volume.
    Alignment parameters are set presuming the two volumes to be in close
    alignment. 
    The script resamples the anat and epi volumes and strips their skulls. A
    3dAllineate command is then executed.
    
    Basic Usage:
      lpc_align.py -anat ANAT   -epi EPI   
    
    Extra Options:
      -big_move : indicates that large displacement is needed to align the
                  two volumes. This option is off by default.
      -partial_coverage: indicates that the EPI dataset covers only part of the
                         brain.
    
    The script outputs the following:
        ANAT_alepi: A version of the anatomy that is aligned to the epi 
        anat2epi.aff12.1D: A transformation matrix to align anatomy to the epi 
         You can use this transform or its inverse with programs such as
         3dAllineate -cubic -1Dmatrix_apply anat2epi.aff12.1D \
                     -prefix ANAT_alman ANAT
         To align the EPI to the anatomy, first get the inverse of 
         anat2epi.aff12.1D with:
            cat_matvec -ONELINE anat2epi.aff12.1D -I > epi2anat.aff12.1D
         then use 3dAllineate:
         3dAllineate -cubic -1Dmatrix_apply epi2anat.aff12.1D  \
                     -prefix EPI_alman EPI
        Also, since the input volumes are preprocessed before using 3dAllineate,
        the script outputs copies of the preprocessed volumes as they were used
        in 3dAllineate.
         _lpc.EPI : EPI volume for 3dAllineate's -base
         _lpc.ANAT: ANAT volume for 3dAllineate's -input
         _lpc.wt.EPI: weight volume for 3dAllineate's -weight
               
    The goodness of the alignment should always be assessed. On the face of it,
    most of 3dAllineate's cost functions, and those of registration programs
    from other packages, will produce a plausible alignment but it may not be
    the best. You need to examine the results carefully if alignment quality is
    crucial for your analysis.
    In the absence of a gold standard, and given the low contrast of EPI data,
    it is difficult to judge alignment quality by just looking at the two
    volumes. This is the case even when you toggle quickly between one volume
    and the next, turning the overlay off and using the 'u' key in the slice window.
    To aid with the assessment of alignment, you can use the script
    @AddEdge. For each pair of volumes (E, A), @AddEdge finds the edges eE in E, 
    and eA in A, and creates a new version of E with both sets of edges. The 
    edges eE are given a low value, edges eA a higher value and the highest 
    value at voxels where eE and eA overlap. Although not all edges are 
    relevant, one can from experience focus on edges that are relevant. 
    Here is a simple example, where one can judge the improvement of alignment.
    Say we have anat+orig, epi+orig and we ran: 
      lpc_align.py -anat anat+orig -epi epi+orig 
    The relevant output of lpc_align.py is 
      anat_alepi+orig, _lpc.anat+orig, _lpc.epi+orig :
    To judge the improvement of alignment, we run:
      @AddEdge _lpc.epi+orig _lpc.anat+orig anat_alepi+orig 
      where the first option is the epi as passed to 3dAllineate. I recommend
      you use _lpc.epi+orig and _lpc.anat+orig rather than epi+orig and
      anat+orig, because edge enhancement is much better without skulls. 
      Anatomical volumes of various alignments with the EPI can be listed next.
      Here we are only examining pre- and post-lpc_align.py alignment.
    @AddEdge will create new, edge-enhanced volumes with names starting with _ae. A
    new pair of volumes is created for each pair at input. Once done, @AddEdge
    proposes you run the following commands:
      afni -niml -yesplugouts &
      @AddEdge 
    With no options, @AddEdge will now cycle through the pairs of (E,A),
    displaying an edge-enhanced A in the background and E in the foreground
    (colored). Assuming your colorscale is 'Spectrum:red_to_blue', the edges
    from the EPI are blue, edges from anatomical are orange and overlapping
    edges red. The script will open two slice viewers, navigate around to see
    how the contours match up. Remember that edges will not correspond perfectly
    or everywhere. Edges sometimes model different structures in different
    volumes. After all, if edges matched that well, we'd use them in
    registration! Look at the volumes closely and in different modes to
    appreciate what is being displayed. Although AFNI is being driven by the
    script, it is still fully interactive.
    @AddEdge then awaits user input at the shell to show the next pair of
    volumes. All you need is to hit enter, or enter the number of the pair 
    you want to examine next. A .jpg of the images is saved as you switch from
    one pair to the next. Cycling between one pair and the next, helps you
    appreciate which alignment is best.
    
    This script is still in very beta mode so please don't disseminate it to
    younguns. DO send us all the feedback you have and of course, let us know if
    it fails. We'll probably ask that you send us some data to look into it
    ourselves.
    
    Our abstract describing the alignment tools is available here:
      http://afni.nimh.nih.gov/sscc/rwcox/abstracts/file.2008-02-21.4176173435   
    
    ===========================================================================      

A full list of options for lpc_align.py:

   -epi                
   -anat               
   -mask               
      default:            vent
   -keep_rm_files      
   -prep_only          
   -help               
   -verb               
   -align_centers      
      allowed:            yes, no
      default:            no
   -strip_anat_skull   
      allowed:            yes, no
      default:            yes
   -epi_strip          
      allowed:            3dSkullStrip, 3dAutomask, None
      default:            3dSkullStrip
   -pow_mask           
      default:            1.0
   -bin_mask           
      allowed:            yes, no
      default:            no
   -box_mask           
      allowed:            yes, no
      default:            no
   -ex_mode            
      use:                Command execution mode.
                          quiet: execute commands quietly
                          echo: echo commands executed
                          dry_run: only echo commands
                          
      allowed:            quiet, echo, dry_run
      default:            echo
   -big_move           
   -partial_coverage   
   -Allineate_opts     
      use:                Options passed to 3dAllineate.
      default:            -lpc -weight_frac 1.0 -VERB -warp aff -maxrot 6 -maxshf 10 -source_automask+4 
   -perc               
      default:            50
   -fresh              
   -suffix             
      default:            _alepi






AFNI program: make_random_timing.py

===========================================================================
Create random stimulus timing files.

    The object is to create a set of random stimulus timing files, suitable
    for use in 3dDeconvolve.  These times will not be TR-locked (unless the
    user requests it).  Stimulus presentation times will never overlap, though
    their responses can.

    This can easily be used to generate many sets of random timing files to
    test via "3dDeconvolve -nodata", in order to determine good timing, akin
    to what is done in HowTo #3 using RSFgen.  Note that the -save_3dd_cmd
    can be used to create a sample "3dDeconvolve -nodata" script.

    given:
        num_stim        - number of stimulus classes
        num_runs        - number of runs
        num_reps        - number of repetitions for each class (same each run)
        stim_dur        - length of time for each stimulus, in seconds
        run_time        - total amount of time, per run
        pre_stim_rest   - time before any first stimulus (same each run)
        post_stim_rest  - time after last stimulus (same each run)

    This program will create one timing file per stimulus class, num_runs lines
    long, with num_reps stimulus times per line.

    Time for rest will be run_time minus all stimulus time, and can be broken
    into pre_stim_rest, post_stim_rest and randomly distributed rest.  Consider
    the sum, assuming num_reps and stim_dur are constant (per run and stimulus
    class).

          num_stim * num_reps * stim_dur  (total stimulus duration for one run)
        + randomly distributed rest       (surrounding stimuli)
        + pre_stim_rest
        + post_stim_rest                  (note: account for response time)
        -----------
        = run_time

    Other controlling inputs include:

        across_runs - distribute num_reps across all runs, not per run
        min_rest    - time of rest to immediately follow each stimulus
                      (this is internally added to stim_dur)
        seed        - optional random number seed
        t_gran      - granularity of time, in seconds (default 0.1 seconds)
        tr_locked   - make all timing locked with the accompanying TR

    The internal method used is similar to that of RSFgen.  For a given run, a
    list of num_reps stimulus intervals for each stimulus class is generated
    (each interval is stim_dur seconds).  Appended to this is a list of rest
    intervals (each of length t_gran seconds).  This accounts for all time
    except for pre_stim_rest and post_stim_rest.

    This list (of numbers 0..num_stim, where 0 means rest) is then randomized.
    Timing comes from the result.

    Reading the list (still for a single run), times are accumulated, starting
    with pre_stim_rest seconds.  As the list is read, a 0 means add t_gran
    seconds to the current time.  A non-zero value means the given stimulus
    type occurred, so the current time goes into that stimulus file and the
    time is incremented by stim_dur seconds.

  * Note that stimulus times will never overlap, though response times can.

  * The following options can be specified as one value or as a list:

        -run_time       : time for each run, or a list of run times
        -stim_dur       : duration of all stimuli, or a list of every duration
        -num_reps       : nreps for all stimuli, or a list of nreps for each

    Note that varying these parameters can lead to unbalanced designs.  Use
    the list forms with caution.

    Currently, -pre_stim_rest and -post_stim_rest cannot vary over runs.

----------------------------------------
getting TR-locked timing

    If TR-locked timing is desired, it can be enforced with the -tr_locked
    option, along with which the user must specify "-tr TR".  The effect is
    to force stim_dur and t_gran to be equal to (or a multiple of) the TR.

    It is illegal to use both -tr_locked and -t_gran (since -tr is used to
    set t_gran).

----------------------------------------
distributing stimuli across all runs at once (via -across_runs)

    The main described use is where there is a fixed number of stimulus events
    in each run, and of each type.  The -num_reps option specifies that number
    (or those numbers).  For example, if -num_reps is 8 and -num_runs is 4,
    each stimulus class would have 8 repetitions in each of the 4 runs (for a
    total of 32 repetitions).

    That changes if -across_runs is applied.

    With the addition of the -across_runs option, the meaning of -num_reps
    changes to be the total number of repetitions for each class across all
    runs, and the randomization changes to occur across all runs.  So in the
    above example, with -num_reps equal to 8, 8 stimuli (of each class) will
    be distributed across 4 runs.  The average number of repetitions per run
    would be 2.

    In such a case, note that it would be possible for some runs not to have
    any stimuli of a certain type.

----------------------------------------------------------------------
examples:

    1. Create a timing file for a single stimulus class for a single run.
       The run will be 100 seconds long, with (at least) 10 seconds before
       the first stimulus.  The stimulus will occur 20 times, and each lasts
       1.5 seconds.

       The output will be written to 'stimesA_01.1D'.

            make_random_timing.py -num_stim 1 -num_runs 1 -run_time 100  \
                -stim_dur 1.5 -num_reps 20 -pre_stim_rest 10 -prefix stimesA

    2. A typical example.

       Make timing files for 3 stim classes over 4 runs of 200 seconds.  Every
       stimulus class will have 8 events per run, each lasting 3.5 seconds.
       Require 20 seconds of rest before the first stimulus in each run, as
       well as after the last.

       Also, add labels for the 3 stimulus classes: houses, faces, donuts.
       They will be appended to the respective filenames.  And finally, display
       timing statistics for the user.

       The output will be written to stimesB_01.houses.1D, etc.

            make_random_timing.py -num_stim 3 -num_runs 4 -run_time 200  \
                -stim_dur 3.5 -num_reps 8 -prefix stimesB                \
                -pre_stim_rest 20 -post_stim_rest 20                     \
                -stim_labels houses faces donuts                         \
                -show_timing_stats

       Consider adding the -save_3dd_cmd option.

    3. Distribute stimuli over all runs at once.

       Similar to #2, but distribute the 8 events per class over all 4 runs.
       In #2, each stim class has 8 events per run (so 32 total events).
       Here each stim class has a total of 8 events.  Just add -across_runs.

            make_random_timing.py -num_stim 3 -num_runs 4 -run_time 200  \
                -stim_dur 3.5 -num_reps 8 -prefix stimesC                \
                -pre_stim_rest 20 -post_stim_rest 20                     \
                -across_runs -stim_labels houses faces donuts

    4. TR-locked example.

       Similar to #2, but make the stimuli TR-locked.  Set the TR to 2.0
       seconds, along with the length of each stimulus event.  This adds
       options -tr_locked and -tr, and requires -stim_dur to be a multiple
       (or equal to) the TR.

            make_random_timing.py -num_stim 3 -num_runs 4 -run_time 200  \
                -stim_dur 2.0 -num_reps 8 -prefix stimesD                \
                -pre_stim_rest 20 -post_stim_rest 20 -tr_locked -tr 2.0

    5. Esoteric example.

       Similar to #2, but require an additional 0.7 seconds of rest after
       each stimulus (exactly the same as adding 0.7 to the stim_dur), set
       the granularity of random sequencing to 0.001 seconds, apply a random
       number seed of 31415, and set the verbose level to 2.

       Save a 3dDeconvolve -nodata command in @cmd.3dd.
       
            make_random_timing.py -num_stim 3 -num_runs 4 -run_time 200  \
                -stim_dur 3.5 -num_reps 8 -prefix stimesE                \
                -pre_stim_rest 20 -post_stim_rest 20                     \
                -min_rest 0.7 -t_gran 0.001 -seed 31415 -verb 2          \
                -show_timing_stats -save_3dd_cmd @cmd.3dd

    6. Example with varying number of events, durations and run times.

    ** Note that this does not make for a balanced design.

       Similar to #2, but require each stimulus class to have a different
       number of events.  Class #1 will have 8 reps per run, class #2 will
       have 10 reps per run and class #3 will have 15 reps per run.  The
       -num_reps option takes either 1 or -num_stim parameters.  Here, 3
       are supplied.

            make_random_timing.py -num_stim 3 -num_runs 4       \
                -run_time 200 190 185 225                       \
                -stim_dur 3.5 4.5 3 -num_reps 8 10 15           \
                -pre_stim_rest 20 -post_stim_rest 20            \
                -prefix stimesF

    7. Catch trials.

       If every time a main stimulus 'M' is presented it must follow another
       stimulus 'C', catch trials can be used to separate them.  If the TRs
       look like ...CM.CM.....CM...CMCM, it is hard to separate the response
       to M from the response to C.  When separate C stimuli are also given,
       the problem becomes simple: C..CM.CM...C.CM...CMCM.  Now C and M can
       be measured separately.

       In this example we have 4 8-second main classes (A1, A2, B1, B2) that
       always follow 2 types of 8-second catch classes (A and B).  The times
       of A1 are always 8 seconds after the times for A, for example.

       Main stimuli are presented 5 times per run, and catch trials are given
       separately an additional 4 times per run.  That means, for example, that
       stimulus A will occur 14 times per run (4 as 'catch', 5 preceding A1,
       5 preceding A2).  Each of 3 runs will last 9 minutes.

       Initially we will claim that A1..B2 each lasts 16 seconds.  Then each of
       those events will be broken into a 'catch' event at the beginning, 
       followed by a 'main' event after another 8 seconds.  Set the minimum
       time between any 2 events to be 1.5 seconds.

       Do this in 4 steps:

          a. Generate stimulus timing for 6 classes: A, B, A1, A2, B1, B2.
             Stim lengths will be 8, 8, and 16, 16, 16, 16 seconds, at first.
             Note that both the stimulus durations and frequencies will vary.

               make_random_timing.py -num_stim 6 -num_runs 3 -run_time 540  \
                   -stim_dur 8 8 16 16 16 16 -num_reps 4 4 5 5 5 5          \
                   -stim_labels A B A1 A2 B1 B2 -min_rest 1.5 -seed 54321   \
                   -prefix stimesG 

          b. Separate 'catch' trials from main events.  Catch trials for A will
             occur at the exact stim times of A1 and A2.  Therefore all of our
             times for A/A1/A2 are actually times for A (and similarly for B).
             Concatenate the timing files and save them.

                1dcat stimesG_??_A.1D stimesG_??_A?.1D > stimesG_A_all.1D
                1dcat stimesG_??_B.1D stimesG_??_B?.1D > stimesG_B_all.1D

             Perhaps consider sorting the stimulus times per run, since the
             1dcat command does not do that.  Use timing_tool.py.  The new
             'sorted' timing files would replace the 'all' timing files.

                timing_tool.py -timing stimesG_A_all.1D -sort  \
                               -write_timing stimesG_A_sorted.1D
                timing_tool.py -timing stimesG_B_all.1D -sort  \
                               -write_timing stimesG_B_sorted.1D

          c. To get stim times for the 'main' regressors we need to add 8
             seconds to every time.  Otherwise, the times will be identical to
             those in stimesG.a_03_A?.1D (and B).

             There are many ways to add 8 to the timing files.  In this case,
             just run the program again, with the same seed, but add an offset
             of 8 seconds to all times.  Then simply ignore the new files for
             A and B, while keeping those of A1, A2, B1 and B2.

             Also, save the 3dDeconvolve command to run with -nodata.

               make_random_timing.py -num_stim 6 -num_runs 3 -run_time 540  \
                   -stim_dur 8 8 16 16 16 16 -num_reps 4 4 5 5 5 5          \
                   -stim_labels A B A1 A2 B1 B2 -min_rest 1.5 -seed 54321   \
                   -offset 8.0 -save_3dd_cmd @cmd.3dd.G -prefix stimesG 

          d. Finally, fix the 3dDeconvolve command in @cmd.3dd.G.

             1. Use timing files stimesG_A_sorted.1D and stimesG_B_sorted.1D
                from step b, replacing stimesG_01_A.1D and stimesG_01_B.1D.

             2. Update the stimulus durations of A1, A2, B1 and B2 from 16
                seconds to the correct 8 seconds (the second half of the 16
                second intervals).

             This is necessary because the command in step (c) does not know
             about the updated A/B files from step (b).  The first half of each
             16 second A1/A2 stimulus is actually stimulus A, while the second
             half is really A1 or A2.  Similarly for B.
             
        
       The resulting files are kept (and applied in any 3dDeconvolve commands):

            stimesG_[AB]_sorted.1D : the (sorted) 'catch' regressors,
                                     14 stimuli per run (from step b)
            stimesG_*_[AB][12].1D  : the 4 main regressors (at 8 sec offsets)
                                     (from step c)

       --- end of (long) example #7 ---

----------------------------------------------------------------------
informational arguments:

    -help                       : display this help
    -hist                       : display the modification history
    -show_valid_opts            : display all valid options (short format)
    -ver                        : display the version number

----------------------------------------
required arguments:

    -num_runs  NRUNS            : set the number of runs

        e.g. -num_runs 4

        Use this option to specify the total number of runs.  Output timing
        files will have one row per run (for -local_times in 3dDeconvolve).

    -run_time  TIME             : set the total time, per run (in seconds)

        e.g. -run_time 180
        e.g. -run_time 180 150 150 180

        This option specifies the total amount of time per run, in seconds.
        This time includes all rest and stimulation.  This time is per run,
        even if -across_runs is used.

    -num_stim  NSTIM            : set the number of stimulus classes

        e.g. -num_stim 3

        This specifies the number of stimulus classes.  The program will
        create one output file per stimulus class.

    -num_reps  REPS             : set the number of repetitions (per class)

        e.g. -num_reps 8
        e.g. -num_reps 8 15 6

        This specifies the number of repetitions of each stimulus type, per run
        (unless -across_runs is used).  If one parameter is provided, every
        stimulus class will be given that number of repetitions per run (unless
        -across_runs is given, in which case each stimulus class will be given
        a total of that number of repetitions, across all runs).

        The user can also specify the number of repetitions for each of the
        stimulus classes separately, as a list.

            see also: -across_runs

    -prefix    PREFIX           : set the prefix for output filenames

        e.g. -prefix stim_times

                --> might create: stim_times_01.1D

        The option specifies the prefix for all output stimulus timing files.
        The files will have the form: PREFIX_INDEX[_LABEL].1D, where PREFIX
        is via this option, INDEX is 01, 02, ... through the number of stim
        classes, and LABEL is optionally provided via -stim_labels.

        Since the INDEX precedes any LABEL, output files will sort
        alphabetically in the order that the stimulus classes were given
        to this program, regardless of the labels.

            see also -stim_labels
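
        For example (an illustration, using a hypothetical prefix and
        labels), the command below would create the files stim_01_houses.1D,
        stim_02_faces.1D and stim_03_donuts.1D:

            make_random_timing.py -num_stim 3 -num_runs 2 -run_time 100 \
                -stim_dur 2 -num_reps 5 -prefix stim                    \
                -stim_labels houses faces donuts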

    -show_timing_stats          : show statistics from the timing

        e.g. -show_timing_stats

        If this option is set, the program will output statistical information
        regarding the stimulus timing, and on ISIs (inter-stimulus intervals)
        in particular.  One might want to be able to state what the min, mean,
        max and stdev of the ISI are.

    -stim_dur TIME              : set the duration for a single stimulus

        e.g. -stim_dur 3.5
        e.g. -stim_dur 3.5 1.0 4.2

        This specifies the length of time taken for a single stimulus, in
        seconds.  These stimulation intervals never overlap (with either rest
        or other stimulus intervals) in the output timing files.

        If a single TIME parameter is given, it applies to all of the stimulus
        classes.  Otherwise, the user can provide a list of durations, one per
        stimulus class.

----------------------------------------
optional arguments:

    -across_runs                : distribute stimuli across all runs at once

        e.g. -across_runs

        By default, each of -num_stim stimuli are randomly distributed within
        each run separately, per class.  But with the -across_runs option,
        these stimuli are distributed across all runs at once (so the number
        of repetitions per run will vary).

        For example, using -num_stim 2, -num_reps 24 and -num_runs 3, assuming
        -across_runs is _not_ used, there would be 24 repetitions of each stim
        class per run (for a total of 72 repetitions over 3 runs).  However, if
        -across_runs is applied, then there will be only the 24 repetitions
        over 3 runs, for an average of 8 per run (though there will probably
        not be exactly 8 in every run).
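
        As a concrete sketch of that example (stimes_AR is a hypothetical
        prefix):

            make_random_timing.py -num_stim 2 -num_runs 3 -run_time 180 \
                -stim_dur 2.0 -num_reps 24 -across_runs -prefix stimes_AR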

    -min_rest REST_TIME         : specify extra rest after each stimulus

        e.g. -min_rest 0.320

                --> would add 320 milliseconds of rest after each stimulus

        There is no difference between applying this option and instead
        adding the REST_TIME to that of each regressor.  It is merely another
        way to partition the stimulus time period.

        For example, if each stimulus lasts 1.5 seconds, but it is required
        that at least 0.5 seconds separates each stimulus pair, then there
        are 2 equivalent ways to express this:

            A: -stim_dur 2.0
            B: -stim_dur 1.5 -min_rest 0.5

        These have the same effect, but perhaps the user wants to keep the
        terms logically separate.

        However, the program simply adds min_rest to each stimulus length.

    -offset OFFSET              : specify an offset to add to every stim time

        e.g. -offset 4.5

        Use this option to offset every stimulus time by OFFSET seconds.

    -pre_stim_rest REST_TIME    : specify minimum rest period to start each run

        e.g. -pre_stim_rest 20

        Use this option to specify the amount of time that should pass at
        the beginning of each run before the first stimulus might occur.
        The random placing of stimuli and rest will occur after this time in
        each run.

        As usual, the time is in seconds.

    -post_stim_rest REST_TIME   : specify minimum rest period to end each run

        e.g. -post_stim_rest 20

        Use this option to specify the amount of time that should pass at
        the end of each run after the last stimulus might occur.

        One could consider using -post_stim_rest of 12.0, always, to account
        for the decay of the BOLD response after the last stimulus period ends.

        Note that the program does not merely prevent a stimulus from
        starting after this time; rather, the entire stimulation period
        (described by -stim_dur) will end before this post_stim_rest
        period begins.

        For example, if the user provides "-run_time 100", "-stim_dur 2.5"
        and "-post_stim_rest 15", then the latest a stimulus could possibly
        occur at is 82.5 seconds into a run.  This would allow 2.5 seconds for
        the stimulus, plus another 15 seconds for the post_stim_rest period.

    -save_3dd_cmd FILENAME      : save a 3dDeconvolve -nodata example

        e.g. -save_3dd_cmd sample.3dd.command

        Use this option to save an example of running "3dDeconvolve -nodata"
        with the newly created stim_times files.  The saved script includes
        creation of a SUM regressor (if more than one stimulus was given) and
        a suggestion of how to run 1dplot to view the regressors created from
        the timing files.

        The use of the SUM regressor is to get a feel for what the expected
        response might look like at a voxel that responds to all stimulus
        classes.
        If, for example, the SUM never goes to zero in the middle of a run,
        one might wonder whether it is possible to accurately separate each
        stimulus response from the baseline.
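
        Once the timing files have been generated, one would presumably
        execute the saved file (such scripts are typically run with tcsh)
        to evaluate the design:

            tcsh sample.3dd.command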

    -seed SEED                  : specify a seed for random number generation

        e.g. -seed 3141592

        This option allows the user to specify a seed for random number
        generation in the program.  The main reason to do so is to be able
        to duplicate results.

        By default, the seed is based on the current system time.

    -stim_labels LAB1 LAB2 ...  : specify labels for the stimulus classes

        e.g. -stim_labels houses faces donuts

        Via this option, one can specify labels to become part of the output
        filenames.  If the above example were used, along with -prefix stim,
        the first stimulus timing would be written to stim_01_houses.1D.

        The stimulus index (1-based) is always part of the filename, as that
        keeps the files alphabetical in the order that the stimuli were
        specified to the program.

        There must be exactly -num_stim labels provided.

    -t_digits DIGITS            : set the number of decimal places for times

        e.g. -t_digits 3

        Via this option one can control the number of places after the
        decimal that are used when writing the stimulus times to each output
        file.  

        The default is 1, printing times in tenths of a second.  But if a
        higher time granularity is requested via -t_gran, one might want
        more places after the decimal.

        Note that if a user-supplied -t_gran does not round to a tenth of a
        second, the default t_digits changes to 3, to be in milliseconds.
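
        For example (a sketch, with a hypothetical prefix), to write times
        on a 5 ms grid with 3 places after the decimal:

            make_random_timing.py -num_stim 1 -num_runs 1 -run_time 100 \
                -stim_dur 1.5 -num_reps 10 -prefix stimesH              \
                -t_gran 0.005 -t_digits 3

        Here, -t_digits 3 merely makes the default explicit, since 0.005
        does not round to a tenth of a second.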

    -t_gran GRANULARITY         : set the time granularity

        e.g. -t_gran 0.001

        The default time granularity is 0.1 seconds, and rest timing is
        computed at that resolution.  This option can be applied to change
        the resolution.  There are good reasons to go either up or down.

        One might want to use 0.001 to obtain a temporal granularity of a
        millisecond, as times are often given at that resolution.

        Also, one might want to use the actual TR, such as 2.5 seconds, to
        ensure that rest and stimuli occur on the TR grid.  Note that such a
        use also requires -stim_dur to be a multiple of the TR.
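
        For example (a sketch, with a hypothetical prefix), to restrict
        all events to a 2.5 second TR grid:

            make_random_timing.py -num_stim 1 -num_runs 1 -run_time 100 \
                -stim_dur 5.0 -num_reps 8 -prefix stimesI -t_gran 2.5

        Note that the -tr_locked option (along with -tr) is the more
        direct way to request this behavior.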

    -tr TR                      : set the scanner TR

        e.g. -tr 2.5

        The TR is needed for the -tr_locked option (so that all times are
        multiples of the TR), and for the -save_3dd_cmd option (the TR must
        be given to 3dDeconvolve).

        see also: -save_3dd_cmd, -tr_locked

    -verb LEVEL                 : set the verbose level

        e.g. -verb 2

        The default level is 1, and 0 is considered 'quiet' mode, only
        reporting errors.  The maximum level is currently 4.


- R Reynolds  May 7, 2008               motivated by Ikuko Mukai
===========================================================================




AFNI program: make_stim_times.py

===========================================================================
Convert a set of 0/1 stim files into a set of stim_times files, or
convert real-valued files into those for use with -stim_times_AM2.

Each input stim file can have a set of columns of stim classes,
     and multiple input files can be used.  Each column of an
     input file is expected to have one row per TR, and a total
     of num_TRs * num_runs rows.

     The user must provide -files, -prefix, -nruns, -nt and -tr,
     where NT * NRUNS should equal (or be less than) the number
     of TR lines in each file.

Note: Since the output times are LOCAL (one row per run) in the
     eyes of 3dDeconvolve, any file where the first stimulus is
     the only stimulus in that run will have '*' appended to that
     line, so 3dDeconvolve would treat it as a multi-run file.
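
     For instance, if the only stimulus in such a run occurred at a
     (hypothetical) time of 17.5 seconds, that row of the output file
     would be written as:

        17.5 *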

Sample stim_file with 3 stim classes over 7 TRs:

        0       0       0
        1       0       0
        0       1       0
        0       1       0
        1       0       0
        0       0       0
        0       0       1

Corresponding stim_times files, assuming TR = 2.5 seconds:

        stim.01.1D:     2.5 10
        stim.02.1D:     5    7.5
        stim.03.1D:     15

---------------------------------------------------------------------------

Options: -files file1.1D file2.1D ...   : specify stim files
         -prefix PREFIX                 : output prefix for files
         -nruns  NRUNS                  : number of runs
         -nt     NT                     : number of TRs per run
         -tr     TR                     : TR time, in seconds
         -offset OFFSET                 : add OFFSET to all output times
         -labels LAB1 LAB2 ...          : provide labels for filenames
         -show_valid_opts               : output all options
         -verb   LEVEL                  : provide verbose output

complex options:
         -amplitudes                    : "marry" times with amplitudes

                This is to make files for -stim_times_AM1 or -stim_times_AM2
                in 3dDeconvolve (for 2-parameter amplitude modulation).

                With this option, the output files do not just contain times,
                they contain values in the format 'time*amplitude', where the
                amplitude is the non-zero value in the input file.

                For example, the input might look like:

                   0
                   2.4
                   0
                   0
                   -1.2

                On a TR=2.5 grid, this would (skip zeros as usual and) output:

                   2.5*2.4 10*-1.2

---------------------------------------------------------------------------

examples:

    1. Given 3 stimulus classes, A, B and C, each with a single column
       file spanning 7 runs (with some number of TRs per run), create
       3 stim_times files (stimes1.01.1D, stimes1.02.1D, stimes1.03.1D)
       having the times, in seconds, of the stimuli, one run per row.

            make_stim_times.py -files stimA.1D stimB.1D stimC.1D   \
                               -prefix stimes1 -tr 2.5 -nruns 7 -nt 100

    2. Same as 1, but suppose stim_all.1D has all 3 stim types (so 3 columns).

            make_stim_times.py -files stim_all.1D -prefix stimes2 -tr 2.5 \
                               -nruns 7 -nt 100

    3. Same as 2, but the stimuli were presented at the middle of the TR, so
       add 1.25 seconds to each stimulus time.

            make_stim_times.py -files stim_all.1D -prefix stimes3 -tr 2.5 \
                               -nruns 7 -nt 100 -offset 1.25

    4. An appropriate conversion of stim_files to stim_times for the 
       example in AFNI_data2 (HowTo #5).  The labels will appear in the
       resulting filenames.

            make_stim_times.py -prefix stim_times -tr 1.0 -nruns 10 -nt 272 \
                           -files misc_files/all_stims.1D                   \
                           -labels ToolMovie HumanMovie ToolPoint HumanPoint

    5. Generate files for 2-term amplitude modulation in 3dDeconvolve (i.e.
       for use with -stim_times_AM2).  For any TR that has a non-zero value
       in the input, the output will have that current time along with the
       non-zero amplitude value in the format time*value.

       Just add -amplitudes to any existing command.

            make_stim_times.py -files stim_weights.1D -prefix stimes5 -tr 2.5 \
                               -nruns 7 -nt 100 -amplitudes

- R Reynolds, Nov 17, 2006
===========================================================================




AFNI program: mayo_analyze
Usage: mayo_analyze file.hdr ...
Prints out info from the Mayo Analyze 7.5 header file(s)



AFNI program: module_test_lib.py


AFNI program: mpegtoppm
Usage:  mpegtoppm [-prefix ppp] file.mpg
Writes files named 'ppp'000001.ppm, etc.



AFNI program: mritopgm
Converts an image to raw pgm format.
Results go to stdout and should be redirected.
Usage:   mritopgm [-pp] input_image
Example: mritopgm fred.001 | ppmtogif > fred.001.gif

  The '-pp' option expresses a clipping percentage.
  That is, if this option is given, the pp%-brightest
  pixel is mapped to white; all above it are also white,
  and all below are mapped linearly down to black.
  The default is that pp=100; that is, the brightest
  pixel is white.  A useful operation for many MR images is
    mritopgm -99 fred.001 | ppmtogif > fred.001.gif
  This will clip off the top 1% of voxels, which are often
  super-bright due to arterial inflow effects, etc.



AFNI program: neuro_deconvolve.py

===========================================================================
neuro_deconvolve.py:

Generate a script that would apply 3dTfitter to deconvolve an MRI signal
(BOLD response curve) into a neuro response curve.

Required parameters include an input dataset, a script name and an output
prefix.

----------------------------------------------------------------------
examples:

    1. 3d+time example

        neuro_deconvolve.py                     \
                -input run1+orig                \
                -script script.neuro            \
                -mask_dset automask+orig        \
                -prefix neuro_resp

    2. 1D example

        neuro_deconvolve.py             \
                -input epi_data.1D      \
                -tr 2.0                 \
                -script script.1d       \
                -prefix neuro.1D


----------------------------------------------------------------------
informational arguments:

    -help                       : display this help
    -hist                       : display the modification history
    -show_valid_opts            : display all valid options (short format)
    -ver                        : display the version number

----------------------------------------
required arguments:

    -input INPUT_DATASET        : set the data to deconvolve

        e.g. -input epi_data.1D

    -prefix PREFIX              : set the prefix for output filenames

        e.g. -prefix neuro_resp

                --> might create: neuro_resp+orig.HEAD/.BRIK

    -script SCRIPT              : specify the name of the output script

        e.g. -script neuro.script

----------------------------------------
optional arguments:


    -kernel KERNEL              : set the response kernel

        default: -kernel GAM

    -kernel_file FILENAME       : set the filename to store the kernel in

        default: -kernel_file resp_kernel.1D
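
        For example (a sketch that reuses the 1D example above, where
        my.kernel.1D is a hypothetical filename):

        neuro_deconvolve.py                 \
                -input epi_data.1D          \
                -tr 2.0                     \
                -script script.1d           \
                -prefix neuro.1D            \
                -kernel GAM                 \
                -kernel_file my.kernel.1D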

    -mask_dset DSET             : set a mask dataset for 3dTfitter to use

        e.g. -mask_dset automask+orig

    -tr TR                      : set the scanner TR

        e.g. -tr 2.5

        The TR is needed for 1D formatted input files.  It is not needed
        for AFNI 3d+time datasets, since the TR is in the file.

    -verb LEVEL                 : set the verbose level

        e.g. -verb 2


- R Reynolds  June 12, 2008
===========================================================================




AFNI program: nifti1_test
Usage: nifti1_test [-n2|-n1|-na|-a2] infile [prefix]

 If prefix is given, then the options mean:
  -a2 ==> write an ANALYZE 7.5 file pair: prefix.hdr/prefix.img
  -n2 ==> write a NIFTI-1 file pair: prefix.hdr/prefix.img
  -n1 ==> write a NIFTI-1 single file: prefix.nii
  -na ==> write a NIFTI-1 ASCII+binary file: prefix.nia
  -za2 => write an ANALYZE 7.5 file pair:
          prefix.hdr.gz/prefix.img.gz
  -zn2 => write a NIFTI-1 file pair: prefix.hdr.gz/prefix.img.gz
  -zn1 => write a NIFTI-1 single file: prefix.nii.gz
 The default is '-n1'.

 If prefix is not given, then the header info from infile
 file is printed to stdout.

 Please note that the '.nia' format is NOT part of the
 NIFTI-1 specification, but is provided mostly for ease
 of visualization (e.g., you can edit a .nia file and
 change some header fields, then rewrite it as .nii)
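
 For example (with hypothetical filenames), to convert a dataset to a
 compressed single-file NIFTI-1 dataset, or to just view its header:

   nifti1_test -zn1 dset.hdr newdset      (writes newdset.nii.gz)
   nifti1_test dset.nii                   (prints the header to stdout)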

sizeof(nifti_1_header)=348



AFNI program: nifti_stats

Demo program for computing NIfTI statistical functions.
Usage: nifti_stats [-q|-d|-1|-z] val CODE [p1 p2 p3]
 val can be a single number or in the form bot:top:step.
 default ==> output p = Prob(statistic < val).
  -q     ==> output is 1-p.
  -d     ==> output is density.
  -1     ==> output is x such that Prob(statistic < x) = val.
  -z     ==> output is z such that Normal cdf(z) = p(val).
  -h     ==> output is z such that 1/2-Normal cdf(z) = p(val).
 Allowable CODEs:
  CORREL      TTEST       FTEST       ZSCORE      CHISQ       BETA      
  BINOM       GAMMA       POISSON     NORMAL      FTEST_NONC  CHISQ_NONC
  LOGISTIC    LAPLACE     UNIFORM     TTEST_NONC  WEIBULL     CHI       
  INVGAUSS    EXTVAL      PVAL        LOGPVAL     LOG10PVAL 
 Following CODE are distributional parameters, as needed.

Results are written to stdout, 1 number per output line.
Example (piping output into AFNI program 1dplot):
 nifti_stats -d 0:4:.001 INVGAUSS 1 3 | 1dplot -dx 0.001 -stdin
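
As another sketch, assuming the TTEST code takes the degrees of freedom
as its single parameter, one can compute Prob(t < 2.0) for 20 degrees of
freedom:
 nifti_stats 2.0 TTEST 20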

Author - RW Cox - SSCC/NIMH/NIH/DHHS/USA/EARTH - March 2004




AFNI program: nifti_tool
nifti_tool

   - display, modify or compare nifti structures in datasets
   - copy a dataset by selecting a list of volumes from the original
   - copy a dataset, collapsing any dimensions, each to a single index
   - display a time series for a voxel, or more generally, the data
       from any collapsed image, in ASCII text

  This program can be used to display information from nifti datasets,
  to modify information in nifti datasets, to look for differences
  between two nifti datasets (like the UNIX 'diff' command), and to copy
  a dataset to a new one, either by restricting any dimensions, or by
  copying a list of volumes (the time dimension) from a dataset.

  Only one action type is allowed, e.g. one cannot modify a dataset
  and then take a 'diff'.

  one can display - any or all fields in the nifti_1_header structure
                  - any or all fields in the nifti_image structure
                  - any or all fields in the nifti_analyze75 structure
                  - the extensions in the nifti_image structure
                  - the time series from a 4-D dataset, given i,j,k
                  - the data from any collapsed image, given dims. list

  one can check   - perform internal check on the nifti_1_header struct
                    (by nifti_hdr_looks_good())
                  - perform internal check on the nifti_image struct
                    (by nifti_nim_is_valid())

  one can modify  - any or all fields in the nifti_1_header structure
                  - any or all fields in the nifti_image structure
                  - swap all fields in NIFTI or ANALYZE header structure
          add/rm  - any or all extensions in the nifti_image structure
          remove  - all extensions and descriptions from the datasets

  one can compare - any or all field pairs of nifti_1_header structures
                  - any or all field pairs of nifti_image structures

  one can copy    - an arbitrary list of dataset volumes (time points)
                  - a dataset, collapsing across arbitrary dimensions
                    (restricting those dimensions to the given indices)

  one can create  - a new dataset out of nothing

  Note: to learn about which fields exist in either of the structures,
        or to learn a field's type, size of each element, or the number
        of elements in the field, use either the '-help_hdr' option, or
        the '-help_nim' option.  No further options are required.
  ------------------------------

  usage styles:

    nifti_tool -help                 : show this help
    nifti_tool -help_hdr             : show nifti_1_header field info
    nifti_tool -help_nim             : show nifti_image field info
    nifti_tool -help_ana             : show nifti_analyze75 field info
    nifti_tool -help_datatypes       : show datatype table

    nifti_tool -ver                  : show the current version
    nifti_tool -hist                 : show the modification history
    nifti_tool -nifti_ver            : show the nifti library version
    nifti_tool -nifti_hist           : show the nifti library history
    nifti_tool -with_zlib            : was library compiled with zlib


    nifti_tool -check_hdr -infiles f1 ...
    nifti_tool -check_nim -infiles f1 ...

    nifti_tool -copy_brick_list -infiles f1'[indices...]'
    nifti_tool -copy_collapsed_image I J K T U V W -infiles f1
    nifti_tool -copy_im -infiles f1

    nifti_tool -make_im -prefix new_im.nii

    nifti_tool -disp_hdr [-field FIELDNAME] [...] -infiles f1 ...
    nifti_tool -disp_nim [-field FIELDNAME] [...] -infiles f1 ...
    nifti_tool -disp_ana [-field FIELDNAME] [...] -infiles f1 ...
    nifti_tool -disp_exts -infiles f1 ...
    nifti_tool -disp_ts I J K [-dci_lines] -infiles f1 ...
    nifti_tool -disp_ci I J K T U V W [-dci_lines] -infiles f1 ...

    nifti_tool -mod_hdr  [-mod_field FIELDNAME NEW_VAL] [...] -infiles f1
    nifti_tool -mod_nim  [-mod_field FIELDNAME NEW_VAL] [...] -infiles f1

    nifti_tool -swap_as_nifti   -overwrite -infiles f1
    nifti_tool -swap_as_analyze -overwrite -infiles f1
    nifti_tool -swap_as_old     -overwrite -infiles f1

    nifti_tool -add_afni_ext    'extension in quotes' [...] -infiles f1
    nifti_tool -add_comment_ext 'extension in quotes' [...] -infiles f1
    nifti_tool -add_comment_ext 'file:FILENAME' [...] -infiles f1
    nifti_tool -rm_ext INDEX [...] -infiles f1 ...
    nifti_tool -strip_extras -infiles f1 ...

    nifti_tool -diff_hdr [-field FIELDNAME] [...] -infiles f1 f2
    nifti_tool -diff_nim [-field FIELDNAME] [...] -infiles f1 f2

  ------------------------------

  selected examples:

    A. checks header (for problems):

      1. nifti_tool -check_hdr -infiles dset0.nii dset1.nii
      2. nifti_tool -check_hdr -infiles *.nii *.hdr
      3. nifti_tool -check_hdr -quiet -infiles *.nii *.hdr

    B. show header differences:

      1. nifti_tool -diff_hdr -field dim -field intent_code  \
                    -infiles dset0.nii dset1.nii 
      2. nifti_tool -diff_hdr -new_dims 3 10 20 30 0 0 0 0   \
                    -infiles my_dset.nii MAKE_IM 

    C. display structures or fields:

      1. nifti_tool -disp_hdr -infiles dset0.nii dset1.nii dset2.nii
      2. nifti_tool -disp_hdr -field dim -field descrip -infiles dset.nii
      3. nifti_tool -disp_exts -infiles dset0.nii dset1.nii dset2.nii
      4. nifti_tool -disp_ts 23 0 172 -infiles dset1_time.nii
      5. nifti_tool -disp_ci 23 0 172 -1 0 0 0 -infiles dset1_time.nii

      6. nifti_tool -disp_ana -infiles analyze.hdr
      7. nifti_tool -disp_nim -infiles nifti.nii

    D. create a new dataset from nothing:

      1. nifti_tool -make_im -prefix new_im.nii 
      2. nifti_tool -make_im -prefix float_im.nii \
                    -new_dims 3 10 20 30 0 0 0 0  -new_datatype 16
      3. nifti_tool -mod_hdr -mod_field descrip 'dataset with mods'  \
                    -new_dims 3 10 20 30 0 0 0 0                     \
                    -prefix new_desc.nii -infiles MAKE_IM

    E. copy dataset, brick list or collapsed image:

      1. nifti_tool -copy_im -prefix new.nii -infiles dset0.nii
      2. nifti_tool -cbl -prefix new_07.nii -infiles dset0.nii'[0,7]'
      3. nifti_tool -cbl -prefix new_partial.nii \
                    -infiles dset0.nii'[3..$(2)]'

      4. nifti_tool -cci 5 4 17 -1 -1 -1 -1 -prefix new_5_4_17.nii \
                    -infiles dset0.nii
      5. nifti_tool -cci 5 0 17 -1 -1 2 -1  -keep_hist \
                    -prefix new_5_0_17_2.nii -infiles dset1.nii

    F. modify the header (modify fields or swap entire header):

      1. nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                    -mod_field dim '4 64 64 20 30 1 1 1 1'
      2. nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                    -mod_field descrip 'beer, brats and cheese, mmmmm...'
      3. cp old_dset.hdr nifti_swap.hdr 
         nifti_tool -swap_as_nifti -overwrite -infiles nifti_swap.hdr
      4. cp old_dset.hdr analyze_swap.hdr 
         nifti_tool -swap_as_analyze -overwrite -infiles analyze_swap.hdr
      5. nifti_tool -swap_as_old -prefix old_swap.hdr -infiles old_dset.hdr
         nifti_tool -diff_hdr -infiles nifti_swap.hdr old_swap.hdr

    G. strip, add or remove extensions:
       (in example #3, the extension is copied from a text file)


      1. nifti_tool -strip -overwrite -infiles *.nii
      2. nifti_tool -add_comment 'converted from MY_AFNI_DSET+orig' \
                    -prefix dnew -infiles dset0.nii
      3. nifti_tool -add_comment 'file:my.extension.txt' \
                    -prefix dnew -infiles dset0.nii
      4. nifti_tool -rm_ext ALL -prefix dset1 -infiles dset0.nii
      5. nifti_tool -rm_ext 2 -rm_ext 3 -rm_ext 5 -overwrite \
                    -infiles dset0.nii

  ------------------------------

  options for check actions:

    -check_hdr         : check for a valid nifti_1_header struct

       This action is used to check the nifti_1_header structure for
       problems.  The nifti_hdr_looks_good() function is used for the
       test, and currently checks:
       
         dim[], sizeof_hdr, magic, datatype
       
       More tests can be requested of the author.

       e.g. perform checks on the headers of some datasets
       nifti_tool -check_hdr -infiles dset0.nii dset1.nii
       nifti_tool -check_hdr -infiles *.nii *.hdr
       
       e.g. add the -quiet option, so that only errors are reported
       nifti_tool -check_hdr -quiet -infiles *.nii *.hdr

    -check_nim         : check for a valid nifti_image struct

       This action is used to check the nifti_image structure for
       problems.  This is tested via both nifti_convert_nhdr2nim()
       and nifti_nim_is_valid(), though other functions are called
       below them, of course.  Current checks are:

         dim[], sizeof_hdr, datatype, fname, iname, nifti_type
       
       Note that creation of a nifti_image structure depends on good
       header fields.  So errors are terminal, meaning this check would
       probably report at most one error, even if more exist.  The
       -check_hdr action is more complete.

       More tests can be requested of the author.

             e.g. nifti_tool -check_nim -infiles dset0.nii dset1.nii
             e.g. nifti_tool -check_nim -infiles *.nii *.hdr

  ------------------------------

  options for create action:

    -make_im           : create a new dataset from nothing

       With this the user can create a new dataset of a basic style,
       which can then be modified with other options.  This will create
       zero-filled data of the appropriate size.
       
       The default is a 1x1x1 image of shorts.  These settings can be
       modified with the -new_dim option, to set the 8 dimension values,
       and the -new_datatype option, to provide the datatype code for
       the data.

       See -new_dim, -new_datatype and -infiles for more information.
       
       Note that any -infiles dataset of the name MAKE_IM will also be
       created on the fly.

    -new_dim D0 .. D7  : specify the dim array for a new dataset.

         e.g. -new_dim 4 64 64 27 120 0 0 0

       This dimension list will apply to any dataset created via
       MAKE_IM or -make_im.  All 8 values are required.  Recall that
       D0 is the number of dimensions, and D1 through D7 are the sizes.
       
    -new_datatype TYPE : specify the datatype for a new dataset.

         e.g. -new_datatype 16
         default: -new_datatype 4   (short)

       This datatype will apply to any dataset created via MAKE_IM
       or -make_im.  TYPE should be one of the NIFTI_TYPE_*
       numbers, from nifti1.h.
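
       For reference, a few of the standard NIFTI_TYPE_* codes from
       nifti1.h:

            2 = NIFTI_TYPE_UINT8    (unsigned char)
            4 = NIFTI_TYPE_INT16    (short; the default)
            8 = NIFTI_TYPE_INT32    (int)
           16 = NIFTI_TYPE_FLOAT32  (float)
           64 = NIFTI_TYPE_FLOAT64  (double)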
       
  ------------------------------

  options for copy actions:

    -copy_brick_list   : copy a list of volumes to a new dataset
    -cbl               : (a shorter, alternative form)
    -copy_im           : (a shorter, alternative form)

       This action allows the user to copy a list of volumes (over time)
       from one dataset to another.  The listed volumes can be in any
       order and contain repeats, but are of course restricted to
       the set of values {0, 1, ..., nt-1}, from dimension 4.

       This option is a flag.  The index list is specified with the input
       dataset, contained in square brackets.  Note that square brackets
       are special to most UNIX shells, so they should be contained
       within single quotes.

       Syntax of an index list:

         - indices start at zero
         - indices end at nt-1, which has the special symbol '$'
         - single indices should be separated with commas, ','
             e.g. -infiles dset0.nii'[0,3,8,5,2,2,2]'
         - ranges may be specified using '..' or '-' 
             e.g. -infiles dset0.nii'[2..95]'
             e.g. -infiles dset0.nii'[2..$]'
         - ranges may have step values, specified in ()
           example: 2 through 95 with a step of 3, i.e. {2,5,8,11,...,95}
             e.g. -infiles dset0.nii'[2..95(3)]'

       This functionality applies only to 3 or 4-dimensional datasets.

       e.g. to copy a dataset:
       nifti_tool -copy_im -prefix new.nii -infiles dset0.nii

       e.g. to copy sub-bricks 0 and 7:
       nifti_tool -cbl -prefix new_07.nii -infiles dset0.nii'[0,7]'

       e.g. to copy an entire dataset:
       nifti_tool -cbl -prefix new_all.nii -infiles dset0.nii'[0..$]'

       e.g. to copy every other time point, skipping the first three:
       nifti_tool -cbl -prefix new_partial.nii \
                  -infiles dset0.nii'[3..$(2)]'


    -copy_collapsed_image ... : copy a list of volumes to a new dataset
    -cci I J K T U V W        : (a shorter, alternative form)

       This action allows the user to copy a collapsed dataset, where
       some dimensions are collapsed to a given index.  For instance, the
       X dimension could be collapsed to i=42, and the time dimensions
       could be collapsed to t=17.  To collapse a dimension, set Di to
       the desired index, where i is in {0..ni-1}.  Any dimension that
       should not be collapsed must be listed as -1.

       Any number (of valid) dimensions can be collapsed, even down to
       a single value, by specifying enough valid indices.  The resulting
       dataset will then have a reduced number of non-trivial dimensions.

       Assume dset0.nii has nim->dim[8] = { 4, 64, 64, 21, 80, 1, 1, 1 }.
       Note that this is a 4-dimensional dataset.

         e.g. copy the time series for voxel i,j,k = 5,4,17
         nifti_tool -cci 5 4 17 -1 -1 -1 -1 -prefix new_5_4_17.nii \
                    -infiles dset0.nii

         e.g. read the single volume at time point 26
         nifti_tool -cci -1 -1 -1 26 -1 -1 -1 -prefix new_t26.nii \
                    -infiles dset0.nii

       Assume dset1.nii has nim->dim[8] = { 6, 64, 64, 21, 80, 4, 3, 1 }.
       Note that this is a 6-dimensional dataset.

         e.g. copy all time series for voxel i,j,k = 5,0,17, with v=2
              (and add the command to the history)
         nifti_tool -cci 5 0 17 -1 -1 2 -1  -keep_hist \
                    -prefix new_5_0_17_2.nii -infiles dset1.nii

         e.g. copy all data where i=3, j=19 and v=2
              (I do not claim to know a good reason to do this)
         nifti_tool -cci 3 19 -1 -1 -1 2 -1 -prefix new_mess.nii \
                    -infiles dset1.nii

       See '-disp_ci' for more information (which displays/prints the
       data, instead of copying it to a new dataset).

  ------------------------------

  options for display actions:

    -disp_hdr          : display nifti_1_header fields for datasets

       This flag means the user wishes to see some of the nifti_1_header
       fields in one or more nifti datasets. The user may want to specify
       multiple '-field' options along with this.  This option requires
       one or more files input, via '-infiles'.

       If no '-field' option is present, all fields will be displayed.

       e.g. to display the contents of all fields:
       nifti_tool -disp_hdr -infiles dset0.nii
       nifti_tool -disp_hdr -infiles dset0.nii dset1.nii dset2.nii

       e.g. to display the contents of select fields:
       nifti_tool -disp_hdr -field dim -infiles dset0.nii
       nifti_tool -disp_hdr -field dim -field descrip -infiles dset0.nii

    -disp_nim          : display nifti_image fields for datasets

       This flag option works the same way as the '-disp_hdr' option,
       except that the fields in question are from the nifti_image
       structure.

    -disp_ana          : display nifti_analyze75 fields for datasets

       This flag option works the same way as the '-disp_hdr' option,
       except that the fields in question are from the nifti_analyze75
       structure.

    -disp_exts         : display all AFNI-type extensions

       This flag option is used to display all nifti_1_extension data,
       for only those extensions of type AFNI (code = 4).  The only
       other option used will be '-infiles'.

       e.g. to display the extensions in datasets:
       nifti_tool -disp_exts -infiles dset0.nii
       nifti_tool -disp_exts -infiles dset0.nii dset1.nii dset2.nii

    -disp_ts I J K    : display ASCII time series at i,j,k = I,J,K

       This option is used to display the time series data for the voxel
       at i,j,k indices I,J,K.  The data is displayed in text, either all
       on one line (the default), or as one number per line (via the
       '-dci_lines' option).

       Notes:

         o This function applies only to 4-dimensional datasets.
         o The '-quiet' option can be used to suppress the text header,
           leaving only the data.
         o This option is short for using '-disp_ci' (display collapsed
           image), restricted to 4-dimensional datasets.  i.e. :
               -disp_ci I J K -1 -1 -1 -1

       e.g. to display the time series at voxel 23, 0, 172:
       nifti_tool -disp_ts 23 0 172            -infiles dset1_time.nii
       nifti_tool -disp_ts 23 0 172 -dci_lines -infiles dset1_time.nii
       nifti_tool -disp_ts 23 0 172 -quiet     -infiles dset1_time.nii

    -disp_collapsed_image  : display ASCII values for collapsed dataset
    -disp_ci I J K T U V W : (a shorter, alternative form)

       This option is used to display all of the data from a collapsed
       image, given the dimension list.  The data is displayed in text,
       either all on one line (the default), or as one number per line
       (by using the '-dci_lines' flag).

       The '-quiet' option can be used to suppress the text header.

       e.g. to display the time series at voxel 23, 0, 172:
       nifti_tool -disp_ci 23 0 172 -1 0 0 0 -infiles dset1_time.nii

       e.g. to display z-slice 14, at time t=68:
       nifti_tool -disp_ci -1 -1 14 68 0 0 0 -infiles dset1_time.nii

       See '-cci' for more information, which copies such data to a new
       dataset, instead of printing it to the terminal window.

  ------------------------------

  options for modification actions:

    -mod_hdr           : modify nifti_1_header fields for datasets

       This action is used to modify some of the nifti_1_header fields in
       one or more datasets.  The user must specify a list of fields to
       modify via one or more '-mod_field' options, which include field
       names, along with the new (set of) values.

       The user can modify a dataset in place, or use '-prefix' to
       produce a new dataset, to which the changes have been applied.
       It is recommended to normally use the '-prefix' option, so as not
       to ruin a dataset.

       Note that some fields have a length greater than 1, meaning that
       the field is an array of numbers, or a string of characters.  In
       order to modify an array of numbers, the user must provide the
       correct number of values, and contain those values in quotes, so
       that they are seen as a single option.

       To modify a string field, put the string in quotes.

       The '-mod_field' option takes a field_name and a list of values.

       e.g. to modify the contents of various fields:

       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field qoffset_x -17.325
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field dim '4 64 64 20 30 1 1 1 1'
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field descrip 'beer, brats and cheese, mmmmm...'

       e.g. to modify the contents of multiple fields:
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field qoffset_x -17.325 -mod_field slice_start 1

       e.g. to modify the contents of multiple files (must overwrite):
       nifti_tool -mod_hdr -overwrite -mod_field qoffset_x -17.325   \
                  -infiles dset0.nii dset1.nii

    -mod_nim          : modify nifti_image fields for datasets

       This action option is used the same way that '-mod_hdr' is used,
       except that the fields in question are from the nifti_image
       structure.

    -strip_extras     : remove extensions and descriptions from datasets

       This action is used to attempt to 'clean' a dataset of general
       text, in order to make it more anonymous.  Extensions and the
       nifti_image descrip field are cleared by this action.

       e.g. to strip all *.nii datasets in this directory:
       nifti_tool -strip -overwrite -infiles *.nii

    -swap_as_nifti    : swap the header according to nifti_1_header

       Perhaps a NIfTI header is mal-formed, and the user explicitly
       wants to swap it before performing other operations.  This action
       will swap the field bytes under the assumption that the header is
       in the NIfTI format.

       ** The recommended course of action is to make a copy of the
          dataset and overwrite the header via -overwrite.  If the header
          needs such an operation, it is likely that the data would not
          otherwise be read in correctly.

    -swap_as_analyze  : swap the header according to nifti_analyze75

       Perhaps an ANALYZE header is mal-formed, and the user explicitly
       wants to swap it before performing other operations.  This action
       will swap the field bytes under the assumption that the header is
       in the ANALYZE 7.5 format.

       ** The recommended course of action is to make a copy of the
          dataset and overwrite the header via -overwrite.  If the header
          needs such an operation, it is likely that the data would not
          otherwise be read in correctly.

    -swap_as_old      : swap the header using the old method

       As of library version 1.35 (3 Aug, 2008), nifticlib now swaps all
       fields of a NIfTI dataset (including UNUSED ones), and it swaps
       ANALYZE datasets according to the nifti_analyze75 structure.
       This is a significant difference in the case of ANALYZE datasets.

       The -swap_as_old option was added to compare the results of the
       swapping methods, or to undo one swapping method and replace it
       with another (such as to undo the old method and apply the new).

  ------------------------------

  options for adding/removing extensions:

    -add_afni_ext EXT : add an AFNI extension to the dataset

       This option is used to add AFNI-type extensions to one or more
       datasets.  This option may be used more than once to add more than
       one extension.

       If EXT is of the form 'file:FILENAME', then the extension will
       be read from the file, FILENAME.

       The '-prefix' option is recommended, to create a new dataset.
       In such a case, only a single file may be taken as input.  Using
       '-overwrite' allows the user to overwrite the current file, or
       to add the extension(s) to multiple files, overwriting them.

       e.g. to add a generic AFNI extension:
       nifti_tool -add_afni_ext 'wow, my first extension' -prefix dnew \
                  -infiles dset0.nii

       e.g. to add multiple AFNI extensions:
       nifti_tool -add_afni_ext 'wow, my first extension :)'      \
                  -add_afni_ext 'look, my second...'              \
                  -prefix dnew -infiles dset0.nii

       e.g. to add an extension, and overwrite the dataset:
       nifti_tool -add_afni_ext 'some AFNI extension' -overwrite \
                  -infiles dset0.nii dset1.nii 

    -add_comment_ext EXT : add a COMMENT extension to the dataset

       This option is used to add COMMENT-type extensions to one or more
       datasets.  This option may be used more than once to add more than
       one extension.  This option may also be used with '-add_afni_ext'.

       If EXT is of the form 'file:FILENAME', then the extension will
       be read from the file, FILENAME.

       The '-prefix' option is recommended, to create a new dataset.
       In such a case, only a single file may be taken as input.  Using
       '-overwrite' allows the user to overwrite the current file, or
       to add the extension(s) to multiple files, overwriting them.

       e.g. to add a comment about the dataset:
       nifti_tool -add_comment 'converted from MY_AFNI_DSET+orig' \
                  -prefix dnew                                    \
                  -infiles dset0.nii

       e.g. to add multiple extensions:
       nifti_tool -add_comment  'add a comment extension'         \
                  -add_afni_ext 'and an AFNI XML style extension' \
                  -add_comment  'dataset copied from dset0.nii'   \
                  -prefix dnew -infiles dset0.nii

    -rm_ext INDEX     : remove the extension given by INDEX

       This option is used to remove any single extension from the
       dataset.  Multiple extensions require multiple options.

       notes  - extension indices begin with 0 (zero)
              - to view the current extensions, see '-disp_exts'
              - all extensions can be removed using ALL or -1 for INDEX

       e.g. to remove the extension #0:
       nifti_tool -rm_ext 0 -overwrite -infiles dset0.nii

       e.g. to remove ALL extensions:
       nifti_tool -rm_ext ALL -prefix dset1 -infiles dset0.nii
       nifti_tool -rm_ext -1  -prefix dset1 -infiles dset0.nii

       e.g. to remove the extensions #2, #3 and #5:
       nifti_tool -rm_ext 2 -rm_ext 3 -rm_ext 5 -overwrite \
                  -infiles dset0.nii

  ------------------------------

  options for showing differences:

    -diff_hdr         : display header field diffs between two datasets

       This option is used to find differences between two datasets.
       If any fields are different, the contents of those fields is
       displayed (unless the '-quiet' option is used).

       A list of fields can be specified by using multiple '-field'
       options.  If no '-field' option is given, all fields will be
       checked.

       Exactly two dataset names must be provided via '-infiles'.

       e.g. to display all nifti_1_header field differences:
       nifti_tool -diff_hdr -infiles dset0.nii dset1.nii

       e.g. to display selected nifti_1_header field differences:
       nifti_tool -diff_hdr -field dim -field intent_code  \
                  -infiles dset0.nii dset1.nii 

    -diff_nim         : display nifti_image field diffs between datasets

       This option works the same as '-diff_hdr', except that the fields
       in question are from the nifti_image structure.

  ------------------------------

  miscellaneous options:

    -debug LEVEL      : set the debugging level

       Level 0 will attempt to operate with no screen output except errors.
       Level 1 is the default.
       Levels 2 and 3 give progressively more information.

       e.g. -debug 2

    -field FIELDNAME  : provide a field to work with

       This option is used to provide a field to display, modify or
       compare.  This option can be used along with one of the action
       options presented above.

       See '-disp_hdr', above, for complete examples.

       e.g. nifti_tool -field descrip
       e.g. nifti_tool -field descrip -field dim

    -infiles file0... : provide a list of files to work with

       This parameter is required for any of the actions, in order to
       provide a list of files to process.  If input filenames do not
       have an extension, the directory will be searched for any
       appropriate files (such as .nii or .hdr).

       Note: if the filename has the form MAKE_IM, then a new dataset
       will be created, without the need for file input.

       See '-mod_hdr', above, for complete examples.

       e.g. nifti_tool -infiles file0.nii
       e.g. nifti_tool -infiles file1.nii file2 file3.hdr

    -mod_field NAME 'VALUE_LIST' : provide new values for a field

       This parameter is required for any of the modification actions.
       If the user wants to modify any fields of a dataset, this is
       where the fields and values are specified.

       NAME is a field name (in either the nifti_1_header structure or
       the nifti_image structure).  If the action option is '-mod_hdr',
       then NAME must be the name of a nifti_1_header field.  If the
       action is '-mod_nim', NAME must be from a nifti_image structure.

       VALUE_LIST must be one or more values, as many as are required
       for the field, contained in quotes if more than one is provided.

       Use 'nifti_tool -help_hdr' to get a list of nifti_1_header fields
       Use 'nifti_tool -help_nim' to get a list of nifti_image fields

       See '-mod_hdr', above, for complete examples.

       e.g. modifying nifti_1_header fields:
            -mod_field descrip 'toga, toga, toga'
            -mod_field qoffset_x 19.4 -mod_field qoffset_z -11
            -mod_field pixdim '1 0.9375 0.9375 1.2 1 1 1 1'

    -keep_hist         : add the command as COMMENT (to the 'history')

        When this option is used, the current command will be added
        as a NIFTI_ECODE_COMMENT type extension.  This provides the
        ability to keep a history of commands affecting a dataset.

       e.g. -keep_hist
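
       e.g. to copy sub-bricks 0 and 7, while also recording this command
            as a COMMENT extension in the new dataset:
       nifti_tool -cbl -keep_hist -prefix new_07.nii \
                  -infiles dset0.nii'[0,7]'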

    -overwrite        : any modifications will be made to input files

       This option is used so that all field modifications, including
       extension additions or deletions, will be made to the files that
       are input.

       In general, the user is recommended to use the '-prefix' option
       to create new files.  But if overwriting the contents of the
       input files is preferred, this is how to do it.

       See '-mod_hdr' or '-add_afni_ext', above, for complete examples.

       e.g. -overwrite

    -prefix           : specify an output file to write changes into

       This option is used to specify an output file to write, after
       modifications have been made.  If modifications are being made,
       then either '-prefix' or '-overwrite' is required.

       If no extension is given, the output extension will be '.nii'.

       e.g. -prefix new_dset
       e.g. -prefix new_dset.nii
       e.g. -prefix new_dset.hdr

    -quiet            : report only errors or requested information

       This option is equivalent to '-debug 0'.

  ------------------------------

  basic help options:

    -help             : show this help

       e.g.  nifti_tool -help

    -help_hdr         : show nifti_1_header field info

       e.g.  nifti_tool -help_hdr

    -help_nim         : show nifti_image field info

       e.g.  nifti_tool -help_nim

    -help_ana         : show nifti_analyze75 field info

       e.g.  nifti_tool -help_ana

    -help_datatypes [TYPE] : display datatype table

       e.g.  nifti_tool -help_datatypes
       e.g.  nifti_tool -help_datatypes N

       This displays the contents of the nifti_type_list table.
       An additional 'D' or 'N' parameter will restrict the type
       names to 'DT_' or 'NIFTI_TYPE_' names, while 'T' will test
       the table.

    -ver              : show the program version number

       e.g.  nifti_tool -ver

    -hist             : show the program modification history

       e.g.  nifti_tool -hist

    -nifti_ver        : show the nifti library version number

       e.g.  nifti_tool -nifti_ver

    -nifti_hist       : show the nifti library modification history

       e.g.  nifti_tool -nifti_hist

    -with_zlib        : print whether library was compiled with zlib

       e.g.  nifti_tool -with_zlib

  ------------------------------

  R. Reynolds
  compiled: Mar 13 2009
  version 1.22 (Oct 8, 2008)




AFNI program: nsize
Usage: nsize image_in image_out
  Zero pads 'image_in' to NxN, N=64,128,256,512, or 1024, 
  whichever is the closest size larger than 'image_in'.
  [Works only for byte and short images.]



AFNI program: option_list.py
------ possible input options ------ opt 00: -a                  
------ possible input options ------ opt 01: -dsets              
------ possible input options ------ opt 02: -debug              
------ possible input options ------ opt 03: -c                  
------ possible input options ------ opt 04: -d                  
------ possible input options ------ opt 05: -e                  
------ found options ------ opt 00: trailers            
------ found options ------ opt 01: -a                  
------ found options ------ opt 02: -debug              
------ found options ------ opt 03: -c                  
------ found options ------ opt 04: -e                  



AFNI program: plugout_drive
Usage: plugout_drive [-host name] [-v]
This program connects to AFNI and sends commands
 that the user specifies interactively or on the command line
 over to AFNI to be executed.

Options:
  -host name  Means to connect to AFNI running on the computer
                'name' using TCP/IP.  The default is to connect
                on the current host 'localhost' using TCP/IP.
  -shm        Means to connect to the current host using shared
                memory.  There is no reason to do this unless
                you are transferring huge quantities of data.
                N.B.:  '-host .' is equivalent to '-shm'.
  -v          Verbose mode.
  -port pp    Use TCP/IP port number 'pp'.  The default is
                8099, but if two plugouts are running on the
                same computer, they must use different ports.
  -name sss   Use the string 'sss' for the name that AFNI assigns
                to this plugout.  The default is something stupid.
  -com 'ACTION DATA'  Execute the following command. For example:
                       -com 'SET_FUNCTION SomeFunction'
                       will switch AFNI's function (overlay) to
                       dataset with prefix SomeFunction. 
                      Make sure ACTION and DATA are together enclosed
                       in one pair of single quotes.
                      There are numerous actions listed in AFNI's
                       README.driver file.
                      You can use the option -com repeatedly. 
  -quit  Quit after you are done with all the -com commands.
         The default is for the program to wait for more
          commands to be typed at the terminal's prompt.
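
For example, a single non-interactive invocation might look like this
(a sketch; the function prefix 'SomeFunction' is hypothetical):

  plugout_drive -com 'SET_FUNCTION SomeFunction'   \
                -com 'OPEN_WINDOW A.axialimage'    \
                -quit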

NOTE:
You will need to turn plugouts on in AFNI using one of the
following methods: 
 1. Including '-yesplugouts' as an option on AFNI's command line
 2. From AFNI: Define Datamode->Misc->Start Plugouts
 3. Set environment variable AFNI_YESPLUGOUTS to YES in .afnirc
Otherwise, AFNI won't be listening for a plugout connection.




AFNI program: plugout_ijk
Usage: plugout_ijk [-host name] [-v]
This program connects to AFNI and sends (i,j,k)
dataset indices to control the viewpoint.

Options:
  -host name  Means to connect to AFNI running on the
                computer 'name' using TCP/IP.  The default is to
                connect on the current host using shared memory.
  -v          Verbose mode.
  -port pp    Use TCP/IP port number 'pp'.  The default is
                8009, but if two plugouts are running on the
                same computer, they must use different ports.
  -name sss   Use the string 'sss' for the name that AFNI assigns
                to this plugout.  The default is something stupid.



AFNI program: plugout_tt
Usage: plugout_tt [-host name] [-v]
This program connects to AFNI and receives notification
whenever the user changes Talairach coordinates.

Options:
  -host name  Means to connect to AFNI running on the
                computer 'name' using TCP/IP.  The default is to
                connect on the current host using shared memory.
  -ijk        Means to get voxel indices from AFNI, rather
                than Talairach coordinates.
  -v          Verbose mode: prints out lots of stuff.
  -port pp    Use TCP/IP port number 'pp'.  The default is
                8001, but if two copies of this are running on
                the same computer, they must use different ports.
  -name sss   Use the string 'sss' for the name that AFNI assigns
                to this plugout.  The default is something stupid.



AFNI program: python_module_test.py

===========================================================================
python_module_test.py   - test the loading of python modules

   The default behavior of this program is to verify whether a 'standard'
   list of python modules can be loaded.  The 'standard' list amounts to
   what is needed for the python programs in AFNI.

   The user may specify a list of python modules to test.

------------------------------------------------------------
examples:

   a. Use the default behavior to test modules in standard list.

      python_module_test.py

   b. Test a specific list of modules in verbose mode.

      python_module_test.py -test_modules sys os numpy scipy R wx -verb 2

   c. Show the python version and platform information.

      python_module_test.py -python_ver -platform_info

   d. Perform a complete test (applies commands a and c).

      python_module_test.py -full_test

------------------------------------------------------------
informational options:

   -help                        : display this help
   -hist                        : display the modification history
   -show_valid_opts             : display all valid options (short format)
   -ver                         : display the version number

----------------------------------------
other options:

   -full_test                   : perform all of the standard tests

      This option applies -platform_info, -python_ver and -test_defaults.

   -platform_info               : display system information

      Platform information can include the OS and version, along with the
      CPU type.

   -python_ver                  : display the version of python in use

      Show which version of python is being used by the software.

   -test_defaults               : test the default module list

      The default module list will include (hopefully) all python modules
      used by AFNI programs.

      Note that most programs will not need all of these python libraries.

   -test_modules MOD1 MOD2 ... : test the specified module list

      Perform the same test, but on the modules specified with this option.

   -verb LEVEL                 : specify a verbose level

----------------------------------------
R Reynolds  30 Oct 2008
===========================================================================




AFNI program: quickspec

Usage:  quickspec 
        <-tn TYPE NAME> ...
        <-tsn TYPE STATE NAME> ...
        [<-spec specfile>] [-h/-help]
  Use this spec file for a quick and dirty way of 
  loading a surface into SUMA or the command line programs.

Options:
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           SF: Caret/SureFit format
           BV: BrainVoyager format
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
   -spec specfile: Name of spec file output.
                   Default is quick.spec
                   The program will only overwrite 
                    quick.spec (the default spec file).
   -h or -help: This message here.

  You can use any combination of -tn and -tsn options.
  Fields in the spec file that are not (or cannot be) specified
  by this program are set to default values.
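
  For example (hypothetical commands; the surface file names are
  placeholders):
     quickspec -tn FS lh.smoothwm.asc
     quickspec -tsn FS inflated lh.inflated.asc -spec lh_quick.spec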

   This program was written to ward off righteous whiners and is
  not meant to replace the venerable @SUMA_Make_Spec_XX scripts.

++ SUMA version 2006_7_3

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009

      Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 




AFNI program: rmz
Usage: rmz [-q] [-#] filename ...
 -- Zeros out files before removing them



AFNI program: rotcom
Usage: rotcom '-rotate aaI bbR ccA -ashift ddS eeL ffP' [dataset]

Prints to stdout the 4x3 transformation matrix+vector that would be
applied by 3drotate to the given dataset.

The -rotate and -ashift options combined must be input inside single
quotes (i.e., as one long command string):
 * These options follow the same form as specified by '3drotate -help'.
 * That is, if you include the '-rotate' component, it must be followed
   by 3 angles.
 * If you include the '-ashift' component, it must be followed by 3 shifts;
 * For example, if you only want to shift in the 'I' direction, you could use
     '-ashift 10I 0 0'.
 * If you only want to rotate about the 'I' direction, you could use
     '-rotate 10I 0R 0A'.

Note that the coordinate order for the matrix and vector is that of
the dataset, which can be determined from program 3dinfo.  This is the
only function of the 'dataset' command line argument.

If no dataset is given, the coordinate order is 'RAI', which means:
    -x = Right      [and so +x = Left     ]
    -y = Anterior   [    so +y = Posterior]
    -z = Inferior   [    so +z = Superior ]
For example, the output of command
   rotcom '-rotate 10I 0R 0A'
is the 3 lines below:
0.984808 -0.173648  0.000000  0.000
0.173648  0.984808  0.000000  0.000
0.000000  0.000000  1.000000  0.000

-- RWCox - Nov 2002



AFNI program: rtfeedme
Usage: rtfeedme [options] dataset [dataset ...]
Test the real-time plugin by sending all the bricks in 'dataset' to AFNI.
 * 'dataset' may include a sub-brick selector list.
 * If more than one dataset is given, multiple channel acquisition
    will be simulated.  Each dataset must then have the same datum
    and dimensions.
 * If you put the flag '-break' between datasets, then the datasets
    in each group will be transmitted in parallel, but the groups
    will be transmitted serially (one group, then another, etc.).
    + For example:
        rtfeedme A+orig B+orig -break C+orig -break D+orig
       will send the A and B datasets in parallel, then send
       the C dataset separately, then send the D dataset separately.
       (That is, there will be 3 groups of datasets.)
    + There is a 1 second delay between the end of transmission for
       a group and the start of transmission for the next group.
    + You can extend the inter-group delay by using a break option
       of the form '-break_20' to indicate a 20 second delay.
    + Within a group, each dataset must have the same datum and
       same x,y,z,t dimensions.  (Different groups don't need to
       be conformant to each other.)
    + All the options below apply to each group of datasets;
       i.e., they will all get the same notes, drive commands, ....

Options:
  -host sname =  Send data, via TCP/IP, to AFNI running on the
                 computer system 'sname'.  By default, uses the
                 current system, and transfers data using shared
                 memory.  To send on the current system using
                 TCP/IP, use the system 'localhost'.

  -dt ms      =  Tries to maintain an inter-transmit interval of
                 'ms' milliseconds.  The default is to send data
                 as fast as possible.

  -3D         =  Sends data in 3D bricks.  By default, sends in
                 2D slices.

  -buf m      =  When using shared memory, sets the interprocess
                 communications buffer to 'm' megabytes.  Has no
                 effect if using TCP/IP.  Default is m=1.
                 If you use m=0, then a 50 Kbyte buffer is used.

  -verbose    =  Be talkative about actions.
  -swap2      =  Swap byte pairs before sending data.

  -nzfake nz  =  Send 'nz' as the value of nzz (for debugging).

  -drive cmd  =  Send 'cmd' as a DRIVE_AFNI command; e.g.,
                   -drive 'OPEN_WINDOW A.axialimage'
                 If cmd contains blanks, it must be in 'quotes'.
                 Multiple -drive options may be used.

  -note sss   =  Send 'sss' as a NOTE to the realtime plugin.
                 Multiple -note options may be used.

  -gyr v      =  Send value 'v' as the y-range for realtime motion
                 estimation graphing.
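
Example: a hypothetical test run, sending dataset A+orig to AFNI on
the local host via TCP/IP, throttled to one transmission per 100 ms:

  rtfeedme -host localhost -dt 100 -verbose A+orig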



AFNI program: serial_helper
------------------------------------------------------------
serial_helper - pass motion parameters from socket to serial port

    This program is meant to receive registration (motion?)
    correction parameters from afni's realtime plugin, and to
    pass that data on to a serial port.

    The program is meant to run as a tcp server.  It listens
    for a connection, then processes data until a termination
    flag is received (sending data from the tcp socket to the
    serial port), closes the new connection, and goes back
    to a listening state.

    The basic outline is:

    open tcp server socket
    repeat forever:
        wait for a tcp client connection
        open a serial port
        while the client sends new data
            write that data to the serial port
        close the serial port and client socket

    The expected client is the realtime plugin to afni,
    plug_realtime.so.  If the afni user has their environment
    variable AFNI_REALTIME_MP_HOST_PORT set as HOST:PORT,
    then for EACH RUN, the realtime plugin will open a tcp
    connection to the given HOST and PORT, pass the magic hello
    data (0xabcdefab), pass the 6 motion parameters for each
    time point, and signal a closure by passing the magic bye
    data (0xdeaddead).

    On this server end, the 'repeat forever' loop will do the
    following.  First it will establish the connection by
    checking for the magic hello data.  If that data is found,
    the serial port will be opened.

    Then it will repeatedly check the incoming data for the
    magic bye data.  As long as that check fails, the data is
    assumed to be valid motion parameters.  And so 6 floats at a
    time are read from the incoming socket and passed to the
    serial port.

  usage: serial_helper [options] -serial_port FILENAME
------------------------------------------------------------
  examples:

    1. display this help :

        serial_helper -help

    2. display the module history :

        serial_helper -hist

    3. display the current version number :

        serial_helper -ver

  * 4. run normally, using the serial port file /dev/ttyS0 :

        serial_helper -serial_port /dev/ttyS0

  * 5. same as 4, but specify socket number 53214 :

        serial_helper -serial_port /dev/ttyS0 -sock_num 53214

    6. same as 5, but specify minimum and maximum bounds on
       the values :

        serial_helper                      \
            -serial_port /dev/ttyS0            \
            -sock_num 53214                    \
            -mp_min -12.7                      \
            -mp_max  12.7

    7. run the program in socket test mode, without serial
       communication, and printing all the incoming data

        serial_helper -no_serial -debug 3

    7a. run the program in socket test mode, without serial
       communication, and showing incoming via -disp_all
       (assumes real-time plugin mask has 2 voxels set)

        serial_helper -no_serial -disp_all 2

    8. same as 4, but use debug level 3 to see the parameters
       that will be passed on, and duplicate all output to the
       file, helper.output

       note: this command is for the C shell (tcsh), and will not work
             under bash (for bash, replace '|& tee' with '2>&1 | tee')

        serial_helper -serial_port /dev/ttyS0 -debug 3 |& tee helper.out

    9. same as 4, but will receive 3 extra floats per TR

        serial_helper -serial_port /dev/ttyS0 -num_extra 3

 * See 'example F' from 'Dimon -help' for a complete real-time
   testing example.

------------------------------------------------------------
  program setup:

    1. Start 'serial_helper' on the computer with the serial port that
       the motion parameters should be written to.  Example 4
       is the most likely case, though it might be useful to
       use example 8.

    2. On the computer which will be used to run 'afni -rt',
       set the environment variable AFNI_REALTIME_MP_HOST_PORT
       to the appropriate host:port pair.  See the '-sock_num'
       option below for more details.

       This variable can also be set in the ~/.cshrc file, or
       as part of the AFNI environment via the ~/.afnirc file.
       (A combined example of steps 1 and 2 appears below.)

    3. Start 'afni -rt'.  Be sure to request 'realtime' graphing
       of the '3D: realtime' Registration parameters.

    4. Start receiving data (sending it to the realtime plugin).

       Note that for testing purposes, it may work well to get a
       set of I-files (say, in directories 003, 023, etc.), and
       to use Imon to send not-so-real-time data to afni.  An
       example of Imon for this purpose might be:

           Imon -start_dir 003 -quit -rt -host localhost

       See 'Imon -help' for more information.
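
    Putting steps 1 and 2 together, a hypothetical pairing (the host
    name 'serial_host' is a placeholder; 53214 is the default socket):

       on serial_host:     serial_helper -serial_port /dev/ttyS0

       on the afni machine (csh syntax):
                           setenv AFNI_REALTIME_MP_HOST_PORT serial_host:53214
                           afni -rt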

------------------------------------------------------------
 HELLO versions:

    The version number is computed by subtracting 0xab from the
    last byte of the HELLO string (so that the default HELLO
    string means version 0).

    version 0: This is the default, which means serial_helper
               must be told what to expect from the real-time
               plugin via -num_extra or -disp_all.

    version 1: A 4-byte int will follow the HELLO string.  This
               number will be used as with -num_extra.

    version 2: A 4-byte int will follow the HELLO string.  This
               number will be used as with -disp_all.

    These versions can change with each new HELLO string.

------------------------------------------------------------
  'required' parameter:

    -serial_port FILENAME : specify output serial port
                          : -serial_port /dev/ttyS0

        If the user is not using any of the 'special' options,
        below, then this parameter is required.

        The FILENAME is the device file for the serial port
        which will be used for output.
------------------------------
  special options (for information or testing):

    -help            : show this help information

    -hist            : show the module history

    -debug LEVEL     : set the debugging level to LEVEL
                     : e.g. -debug 2
                     : default is 0, max is 3

    -no_serial       : turn off serial port output

        This option is used for testing the incoming data,
        when output to a serial port is not desired.  The
        program will otherwise operate normally.

    -version         : show the current version number
------------------------------
  'normal' options:

    -mp_max MAX_VAL  : limit the maximum value of the MP data
                     : e.g. -mp_max 12.7
                     : default is 12.7

        If any incoming data is greater than this value, it will
        be set to this value.  The default of 12.7 is used to
        scale incoming floats to signed bytes.

    -mp_min MIN_VAL  : limit the minimum value of the MP data
                     : e.g. -mp_min -12.7
                     : default is -12.7

        If any incoming data is less than this value, it will
        be set to this value.  The default of -12.7 is used to
        scale incoming floats to signed bytes.

    -show_times      : show communication times
                     : e.g. -show_times

        Each time data is received, display the current time.
        Time is at millisecond resolution, and wraps per hour.

    -sock_num SOCK   : specify socket number to serve
                     : e.g. -sock_num 53214
                     : default is 53214

        This is the socket the program will use to listen for
        new connections.  This is the socket number that should
        be provided to the realtime plugin via the environment
        variable, AFNI_REALTIME_MP_HOST_PORT.

        On the machine the user runs afni from, that environment
        variable should have the form HOST:PORT, where a basic
        example might be localhost:53214.

    -num_extra NVALS : will receive NVALS extra floats per TR
                     : e.g. -num_extra 5
                     : default is 0

        Extra floats may arrive if, for instance, afni's RT
        plugin has a mask with 3 ROIs in it (numbered 1,2,3).
        The plugin would compute averages over each ROI per TR,
        and send that data after the MP vals.

        In such a case, specify '-num_extra 3', so the program
        knows 3 floats will be received after the MP data.

        Note that -disp_all cannot be used with -num_extra.

    -disp_all NVOX   : will receive NVOX*8 extra floats per TR
                     : e.g. -disp_all 5
                     : default is 0

        Similar to -num_extra, here the program expects data on
        a per-voxel basis, not averaged over ROIs.

        Here the user specifies the number of voxels for which
        ALL_DATA will be sent (to serial_helper).  The 8 values
        per voxel are (still in float):

            index  i  j  k  x  y  z data_value

        Currently, serial_helper will output this information
        simply as 1 row per voxel.

        Note that -disp_all cannot be used with -num_extra.

------------------------------------------------------------
  Authors: R. Reynolds, T. Ross  (March, 2004)
------------------------------------------------------------



AFNI program: siemens_vision
Usage: siemens_vision [options] filename ...
Prints out information from the Siemens .ima file header(s).

The only option is to rename the file according to the
TextImageNumber field stored in the header.  The option is:

  -rename ppp

which will rename each file to the form 'ppp.nnnn.ima',
where 'nnnn' is the image number expressed with 4 digits.

When '-rename' is used, the header info from the input files
will not be printed.
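
For example (a hypothetical command; 'study1' is a placeholder prefix):

  siemens_vision -rename study1 *.ima

would rename each input to the form 'study1.nnnn.ima', with 'nnnn'
taken from that file's TextImageNumber field.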



AFNI program: sqwave
Usage: sqwave [-on #] [-off #] [-length #] [-cycles #]
      [-init #] [-onkill #] [-offkill #] [-initkill #] [-name name]



AFNI program: strblast
Usage: strblast [options] TARGETSTRING filename ...
Finds exact copies of the target string in each of
the input files, and replaces all characters with
some junk string.

options:

  -help              : show this help

  -new_char CHAR     : replace TARGETSTRING with CHAR (repeated)

      This option is used to specify what TARGETSTRING is
      replaced with.  In this case, replace it with repeated
      copies of the character CHAR.

  -new_string STRING : replace TARGETSTRING with STRING

      This option is used to specify what TARGETSTRING is
      replaced with.  In this case, replace it with the string
      STRING.  If STRING is not long enough, then CHAR from the
      -new_char option will be used to complete the overwrite
      (or the character 'x', by default).

  -unescape          : parse TARGETSTRING for escaped characters
                       (includes '\t', '\n', '\r')

      If this option is given, strblast will parse TARGETSTRING
      replacing any escaped characters with their encoded ASCII
      values.

Examples:
  strings I.001 | more # see if Subject Name is present
  strblast 'Subject Name' I.*

  strblast -unescape "END OF LINE\n"       infile.txt
  strblast -new_char " " "BAD STRING"      infile.txt
  strblast -new_string "GOOD" "BAD STRING" infile.txt

Notes and Warnings:
  * strblast will modify the input files irreversibly!
      You might want to test if they are still usable.
  * strblast reads files into memory to operate on them.
      If the file is too big to fit in memory, strblast
      will fail.
  * strblast will do internal wildcard expansion, so
      if there are too many input files for your shell to
      handle, you can do something like
         strblast 'Subject Name' 'I.*'
      and strblast will expand the 'I.*' wildcard for you.



AFNI program: suma

Usage:  
 Mode 0: Just type suma to see some toy surface and play
         with the interface. Some surfaces are generated
         using T. Lewiner's MarchingCubes library. 
         Use '.' and ',' keys to cycle through surfaces.

 Mode 1: Using a spec file to specify surfaces
                suma -spec specfile
                     [-sv SurfVol] [-ah AfniHost]

   -spec specfile: File containing surface specification. 
                      This file is typically generated by 
                      @SUMA_Make_Spec_FS (for FreeSurfer surfaces) or 
                      @SUMA_Make_Spec_SF (for SureFit surfaces). 
                      The Spec file should be located in the directory 
                      containing the surfaces.
   [-sv SurfVol]: Anatomical volume used in creating the surface 
                    and registered to the current experiment's anatomical 
                    volume (using @SUMA_AlignToExperiment). 
                    This parameter is optional, but linking to AFNI is 
                    not possible without it.  If you find the need for it 
                    (as some have), you can specify the SurfVol in the 
                    specfile. You can do so by adding the field 
                    SurfaceVolume to each surface in the spec file. 
                    In this manner, you can have different surfaces using
                    different surface volumes.
   [-ah AfniHost]: Name (or IP address) of the computer running AFNI. 
                     This parameter is optional, the default is localhost.
                     When both AFNI and SUMA are on the same computer, 
                     communication is through shared memory. 
                     You can turn that off by explicitly setting AfniHost
                     to 127.0.0.1
   [-niml]: Start listening for NIML-formatted elements.
   [-dev]: Allow access to options that are not well polished for
            mass consumption.
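
    For example, a typical Mode 1 command might be (a sketch; the spec
    file and surface volume names are hypothetical):
          suma -spec lh.spec -sv SurfVol+orig -niml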

 Mode 2: Using -t_TYPE or -t* options to specify surfaces on command line.
         -sv, -ah, -niml and -dev are still applicable here. This mode 
         is meant to simplify the quick viewing of a surface model.
                suma [-i_TYPE surface] [-t* surface] 
          Surfaces specified on command line are placed in a group
         called 'DefGroup'.
         If you specify nothing on command line, you will have a random
         surface created for you. Some of these surfaces are generated
         using Thomas Lewiner's sample volumes for creating isosurfaces.
         See suma -sources for a complete reference.

 Specifying input surfaces using -i or -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       byu: BYU format, ascii.
            Polygons with more than 3 edges are turned into
            triangles.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects; the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
       gii: GIFTI XML surface format.
 Note that if the surface filename has the proper extension, 
 it is enough to use the -i option and let the programs guess
 the type from the extension.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           BYU: byu format
           SF: Caret/SureFit format
           BV: BrainVoyager format
           GII: GIFTI format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.

 Modes 1 & 2: You can mix the two modes for loading surfaces but the -sv
              option may not be properly applied.
              If you mix these modes, you will have two groups of
              surfaces loaded into SUMA. You can switch between them
              using the 'Switch Group' button in the viewer controller.

   [-novolreg|-noxform]: Ignore any Rotate, Volreg, Tagalign, 
                or WarpDrive transformations present in 
                the Surface Volume.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 
   [-visuals] Shows the available glxvisuals and exits.
   [-version] Shows the current version number.
   [-environment] Shows a list of all environment variables, 
                  their default setting and your current setting.
                  The output can be used as a new .sumarc file.
                  Since it takes into consideration your own settings
                  this command can be used to update your .sumarc 
                  regularly with a csh command like this:
                  suma -environment > ~/sumarc && mv ~/sumarc ~/.sumarc
   [-latest_news] Shows the latest news for the current 
                  version of the entire SUMA package.
   [-all_latest_news] Shows the history of latest news.
   [-progs] Lists all the programs in the SUMA package.
   [-motif_ver] Displays the linked version of Motif.
   [-sources] Lists code sources used in parts of SUMA.
   [-help_nido] Help message for displayable objects of type NIDO

   For help on interacting with SUMA, press 'ctrl+h' with the mouse 
   pointer inside SUMA's window.
   For more help: http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm

   If you can't get help here, please get help somewhere.

   ++ SUMA version 2006_7_3
New Programs:
  + SurfDsetInfo: Program to display surface dataset information.
  + AnalyzeTrace: Program to analyze the output of -trace option.
  + DriveSuma: Program to control SUMA from the command line
  + imcat: Program to catenate images.
  + Surf2VolCoord: Surface-node to voxel correspondence.
  + SurfDist: Program to calculate internodal distances.
  + SpharmDeco: Spherical harmonics decomposition.
  + SpharmReco: Spherical harmonics reconstruction.
Modifications:
  + SUMA:
    o Addition of new Displayable Objects (DO)(ctrl+Alt+s)
    o Allow replacement of pre-loaded DO and Dsets
    o Support for .niml.dset as format for surface-based analysis
    o High resolution image saving with ctrl+r
    o Bug fixes for support of niml dset format
    o Use of '[i]' to select node index from surface dset
    o Scroll lists for I T and B selectors in SUMA
    o Graphing of dset content with 'g'
    o Display of text and images, see suma -help_nido 
  + SurfDist:
    o Output of node path along with shortest distance.
  + ConvertDset:
    o Output of full dsets if needed
  + ROIgrow:
    o Grows regions separately, depending on labels.
  + ROI2dataset:
    o outputs full datasets if needed.
  + SurfSmooth:
    o Improved HEAT_05 method.
    o New 'blurring to' a FWHM with HEAT_07 method.
  + SurfFWHM:
    o Estimating FWHM on the surface.
  + MapIcosahedron:
    o Better handling of surface centers. 

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Mar 13 2009



    Ziad S. Saad SSCC/NIMH/NIH saadz@mail.nih.gov 




AFNI program: suma_change_spec
suma_change_spec:
 This program changes SUMA's surface specification (Spec) files.
 At minimum, the flags input and state are required.
Available flags:
  input: Which is the SUMA Spec file you want to change.
  state: The state within the Spec file you want to change.
  domainparent: The new Domain Parent for the state within the 
	Spec file you want to change.
  output: The name to which your new Spec file will be temporarily
	written to. (this flag is optional, if omitted the new Spec
	file will be temporarily written to 'input_file.change').
  remove: This flag will remove the automatically created backup.
  anatomical: This will add 'Anatomical = Y' to the selected
	SurfaceState.
Usage:
 This program will take the user-given flags and create a spec file,
 named from the output flag (or 'input_file.change' by default).  It will
 then take this new spec file and overwrite the original input file.  If
 the -remove flag is not used, the original input file can be found at
 'input_file.bkp'.  If -remove is used, the .bkp file will be
 automatically deleted.

 ex. suma_change_spec -input spec_file -state state_name 
	-domainparent new_domain_parent -anatomical



AFNI program: timing_tool.py

=============================================================================
timing_tool.py    - for manipulating and evaluating stimulus timing files
                    (-stim_times format: where each row is a separate run)

   This program is meant to work with ascii files containing rows of floats
   ('*' characters are ignored).  This is the format used by 3dDeconvolve
   with the -stim_times option.  Some timing files do not need evaluation,
   such as those where the timing is very consistent.  However, it may be
   important to examine those files from a random timing design.

   Recall that an ISI (inter-stimulus interval) is the interval of time
   between the end of one stimulus and start of the next.

   The basic program operations include:

       o reporting ISI statistics, such as min/mean/max values per run
       o reporting overall ISI statistics for a set of timing files
       o adding a constant offset to time
       o combining multiple timing files into 1 (like '1dcat' + sort)
       o appending additional timing runs (like 'cat')
       o sort times per row (though 3dDeconvolve does not require this)

   A sample stimulus timing file having 3 runs with 4 stimuli per run
   might look something like the following.  Note that the file does not
   imply the durations of the stimuli, except that stimuli are generally
   not allowed to overlap.

      17.3 24.0 66.0 71.6
      11.0 30.6 49.2 68.5
      19.4 28.7 53.8 69.4

   The program works on either a single timing element (which can be modified),
   or a list of them (which cannot be modified).  The only real use of a list
   of timing elements is to show statistics (via -multi_show_isi_stats).

--------------------------------------------------------------------------
examples:

   0. Basic commands:

         timing_tool.py -help
         timing_tool.py -hist
         timing_tool.py -show_valid_opts
         timing_tool.py -ver

   1. Combine the timing of 2 files (extend one timing by another and sort).
      Write to a new timing file.

         timing_tool.py -timing stimesB_01_houses.1D         \
                        -extend stimesB_02_faces.1D          \
                        -sort                                \
                        -write_timing stimesB_extended.1D

   2. Subtract 12 seconds from each stimulus time (to offset TRs dropped
      prior to the magnetization steady state).

         timing_tool.py -timing stimesB_01_houses.1D         \
                        -add_offset -12.0                    \
                        -write_timing stimesB1_offset12.1D

   3. Show timing statistics for the 3 timing files generated by example 3
      from "make_random_timing -help".  To be accurate, specify the run
      and stimulus durations.

         timing_tool.py -multi_timing stimesC_*.1D           \
                        -run_len 200 -multi_stim_dur 3.5     \
                        -multi_show_isi_stats

   4. Show timing statistics for the timing files generated by example 6
      from "make_random_timing -help".  Since both the run and stimulus
      durations vary, 4 run lengths and 3 stimulus durations are given.

         timing_tool.py -multi_timing stimesF_*.1D           \
                        -run_len 200 190 185 225             \
                        -multi_stim_dur 3.5 4.5 3            \
                        -multi_show_isi_stats

--------------------------------------------------------------------------
Notes:

   1. Action options are performed in the order of the options.  If
      the -chrono option is given, everything (but -chrono) is processed
      chronologically.

   2. Either -timing or -multi_timing is required for processing.

   3. Option -run_len applies to single or multiple stimulus classes.

--------------------------------------------------------------------------
basic informational options:

   -help                        : show this help
   -hist                        : show the module history
   -show_valid_opts             : show all valid options
   -ver                         : show the version number

------------------------------------------
single/multiple timing options:

   -timing TIMING_FILE          : specify a stimulus timing file to load

        e.g. -timing stimesB_01_houses.1D

        Use this option to specify a single stimulus timing file.  The user
        can modify this timing via some of the action options listed below.

   -show_isi_stats              : display timing and ISI statistics

        With this option, the program will display timing statistics for the
        single (possibly modified) timing element.

   -show_timing_ele             : display info on the main timing element

        With this option, the program will display information regarding the
        single (possibly modified) timing element.

   -stim_dur DURATION           : specify the stimulus duration, in seconds

        e.g. -stim_dur 3.5

        This option allows the user to specify the duration of the stimulus,
        as applies to the single timing element.  The only use of this is
        in conjunction with -show_isi_stats.

            Consider '-show_isi_stats' and '-run_len'.

   --------------------
        
   -multi_timing FILE1 FILE2 ... : specify multiple timing files to load

        e.g. -multi_timing stimesB_*.1D

        Use this option to specify a list of stimulus timing files.  The user
        cannot modify this data, but can display the overall ISI statistics
        from it.

        Options that pertain to this timing list include:

            -multi_show_isi_stats
            -multi_show_timing_ele
            -multi_stim_dur
            -run_len

   -multi_show_isi_stats        : display timing and ISI statistics

        With this option, the program will display timing statistics for the
        multiple timing files.

   -multi_show_timing_ele       : display info on the multiple timing elements

        With this option, the program will display information regarding the
        multiple timing element list.

   -multi_stim_dur DUR1 ...     : specify the stimulus duration(s), in seconds

        e.g. -multi_stim_dur 3.5
        e.g. -multi_stim_dur 3.5 4.5 3

        This option allows the user to specify the durations of the stimulus
        classes, as applies to the multiple timing elements.  The only use of
        this is in conjunction with -multi_show_isi_stats.

        If only one duration is specified, it is applied to all elements.
        Otherwise, there should be as many stimulus durations as files
        specified with -multi_timing.

            Consider '-multi_show_isi_stats' and '-run_len'.

------------------------------------------
action options (apply to single timing element, only):

   ** Note that these options are processed in the order they are read.
      See '-chrono' for similar notions.

   -add_offset OFFSET           : add OFFSET to every time in main element

        e.g. -add_offset -12.0

        Use this option to add a single offset to all of the times in the main
        timing element.  For example, if the user deletes 3 4-second TRs from
        the EPI data, they may wish to subtract 12 seconds from every stimulus
        time, so that the times match the modified EPI data.

            Consider '-write_timing'.

   -add_rows NEW_FILE           : append these timing rows to main element

        e.g. -add_rows more_times.1D

        Use this option to append rows from NEW_FILE to those of the main
        timing element.  If the user then wrote out the result, it would be
        identical to using cat: "cat times1.txt times2.txt > both_times.txt".

            Consider '-write_timing'.
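
        A full command might look like the following (a sketch, with
        hypothetical file names):

            timing_tool.py -timing times1.1D -add_rows times2.1D   \
                           -write_timing both_times.1D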

   -extend NEW_FILE             : extend the timing rows with those in NEW_FILE

        e.g. -extend more_times.1D

        Use this option to extend each row (run) with the times in NEW_FILE.
        This has an effect similar to that of '1dcat'.  Sorting the times is
        optional, done via '-sort'.  Note that 3dDeconvolve does not need the
        times to be sorted, though sorted times are easier to read.

            Consider '-sort' and '-write_timing'.

   -show_timing                 : display the current single timing data

        This prints the current (possibly modified) single timing data to the
        terminal.  If the user is making multiple modifications to the timing
        data, they may wish to display the updated timing after each step.

   -sort                        : sort the times, per row (run)

        This will cause each row (run) of the main timing element to be
        sorted (from smallest to largest).  Such a step may be highly desired
        after using '-extend', or after some external manipulation that causes
        the times to be unsorted.

        Note that 3dDeconvolve does not require sorted timing.

            Consider '-write_timing'.

   -transpose                   : transpose the data (only if rectangular)

        This works exactly like 1dtranspose, and requires each row to have
        the same number of entries (rectangular data).  The first row would
        be swapped with the first column, etc.

            Consider '-write_timing'.
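
        A full command might look like the following (a sketch, with
        hypothetical file names):

            timing_tool.py -timing stim_rows.1D -transpose         \
                           -write_timing stim_columns.1D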

   -write_timing NEW_FILE       : write the current timing to a new file

        e.g. -write_timing new_times.1D

        After modifying the timing data, the user will probably want to write
        out the result.  Alternatively, the user could use -show_timing and
        cut-and-paste to write such a file.

------------------------------------------
general options:

   -chrono                      : process options chronologically

        While the action options are already processed in order, general and
        -timing options are not, unless the chrono option is given.  This 
        allows one to do things like scripting a sequence of operations
        within a single command.

   -nplaces NPLACES             : specify # decimal places used in printing

        e.g. -nplaces 1

        This option allows the user to specify the number of places to the
        right of the decimal that are used when printing a stimulus time
        (to the screen via -show_timing or to a file via -write_timing).
        The default is 3.

            Consider '-show_timing' and '-write_timing'.

   -run_len RUN_TIME ...        : specify the run duration(s), in seconds

        e.g. -run_len 300
        e.g. -run_len 300 320 280 300

        This option allows the user to specify the duration of each run.
        If only one duration is provided, it is assumed that all runs are of
        that length of time.  Otherwise, the user must specify the same number
        of runs that are found in the timing files (one run per row).

        This option applies to both -timing and -multi_timing files.

        The run durations only matter for displaying ISI statistics.

            Consider '-show_isi_stats' and '-multi_show_isi_stats'.

   -verb LEVEL                  : set the verbosity level

        e.g. -verb 3

        This option allows the user to specify how verbose the program is.
        The default level is 1, 0 is quiet, and the maximum is (currently) 4.

-----------------------------------------------------------------------------
R Reynolds    December 2008
=============================================================================




AFNI program: to3d
++ to3d: AFNI version=AFNI_2008_07_18_1710 (Mar 13 2009) [32-bit]
++ Authored by: RW Cox
Usage: to3d [options] image_files ...
       Creates 3D datasets for use with AFNI from 2D image files

The available options are
  -help   show this message
  -'type' declare images to contain data of a given type
          where 'type' is chosen from the following options:
       ANATOMICAL TYPES
         spgr == Spoiled GRASS
          fse == Fast Spin Echo
         epan == Echo Planar
         anat == MRI Anatomy
           ct == CT Scan
         spct == SPECT Anatomy
          pet == PET Anatomy
          mra == MR Angiography
         bmap == B-field Map
         diff == Diffusion Map
         omri == Other MRI
         abuc == Anat Bucket
       FUNCTIONAL TYPES
          fim == Intensity
         fith == Inten+Thr
         fico == Inten+Cor
         fitt == Inten+Ttest
         fift == Inten+Ftest
         fizt == Inten+Ztest
         fict == Inten+ChiSq
         fibt == Inten+Beta
         fibn == Inten+Binom
         figt == Inten+Gamma
         fipt == Inten+Poisson
         fbuc == Func-Bucket
                 [for paired (+) types above, images are fim first,]
                 [then followed by the threshold (etc.) image files]

  -statpar value value ... value [* NEW IN 1996 *]
     This option is used to supply the auxiliary statistical parameters
     needed for certain dataset types (e.g., 'fico' and 'fitt').  For
     example, a correlation coefficient computed using program 'fim2'
     from 64 images, with 1 ideal, and with 2 orts could be specified with
       -statpar 64 1 2

  -prefix  name      will write 3D dataset using prefix 'name'
  -session name      will write 3D dataset into session directory 'name'
  -geomparent fname  will read geometry data from dataset file 'fname'
                       N.B.: geometry data does NOT include time-dependence
  -anatparent fname  will take anatomy parent from dataset file 'fname'

  -nosave  will suppress autosave of 3D dataset, which normally occurs
           when the command line options supply all needed data correctly

  -view type [* NEW IN 1996 *]
    Will set the dataset's viewing coordinates to 'type', which
    must be one of these strings:  orig acpc tlrc

TIME DEPENDENT DATASETS [* NEW IN 1996 *]
  -time:zt nz nt TR tpattern  OR  -time:tz nt nz TR tpattern

    These options are used to specify a time dependent dataset.
    '-time:zt' is used when the slices are input in the order
               z-axis first, then t-axis.
    '-time:tz' is used when the slices are input in the order
               t-axis first, then z-axis.

    nz  =  number of points in the z-direction (minimum 1)
    nt  =  number of points in the t-direction
            (thus exactly nt * nz slices must be read in)
    TR  =  repetition interval between acquisitions of the
            same slice, in milliseconds (or other units, as given below)

    tpattern = Code word that identifies how the slices (z-direction)
               were gathered in time.  The values that can be used:

       alt+z = altplus   = alternating in the plus direction
       alt+z2            = alternating, starting at slice #1
       alt-z = altminus  = alternating in the minus direction
       alt-z2            = alternating, starting at slice #nz-2
       seq+z = seqplus   = sequential in the plus direction
       seq-z = seqminus  = sequential in the minus direction
       zero  = simult    = simultaneous acquisition
               @filename = read temporal offsets from 'filename'

    For example if nz = 5 and TR = 1000, then the inter-slice
    time is taken to be dt = TR/nz = 200.  In this case, the
    slices are offset in time by the following amounts:

                    S L I C E   N U M B E R
      tpattern        0    1    2    3    4  Comment
      ----------   ---- ---- ---- ---- ----  -------------------------------
      altplus         0  600  200  800  400  Alternating in the +z direction
      alt+z2        400    0  600  200  800  Alternating, but starting at #1
      altminus      400  800  200  600    0  Alternating in the -z direction
      alt-z2        800  200  600    0  400  Alternating, starting at #nz-2 
      seqplus         0  200  400  600  800  Sequential  in the +z direction
      seqminus      800  600  400  200    0  Sequential  in the -z direction
      simult          0    0    0    0    0  All slices acquired at once

    If @filename is used for tpattern, then nz ASCII-formatted numbers are
    read from the file.  These are used to indicate the time offsets (in ms)
    for each slice. For example, if 'filename' contains
       0 600 200 800 400
    then this is equivalent to 'altplus' in the above example.

    Notes:
      * Time-dependent functional datasets are not yet supported by
          to3d or any other AFNI package software.  For many users,
          the proper dataset type for these datasets is '-epan'.
      * Time-dependent datasets with more than one value per time point
          (e.g., 'fith', 'fico', 'fitt') are also not allowed by to3d.
      * If you use 'abut' to fill in gaps in the data and/or to
          subdivide the data slices, you will have to use the @filename
          form for tpattern, unless 'simult' or 'zero' is acceptable.
      * At this time, the value of 'tpattern' is not actually used in
          any AFNI program.  The values are stored in the dataset
          .HEAD files, and will be used in the future.
      * The values set on the command line can't be altered interactively.
      * The units of TR can be specified by the command line options below:
            -t=ms or -t=msec  -->  milliseconds (the default)
            -t=s  or -t=sec   -->  seconds
            -t=Hz or -t=Hertz -->  Hertz (for chemical shift images?)
          Alternatively, the units symbol ('ms', 'msec', 's', 'sec',
            'Hz', or 'Hertz') may be attached to TR in the '-time:' option,
            as in '-time:zt 16 64 4.0sec alt+z'
 ****** 15 Aug 2005 ******
      * Millisecond time units are no longer stored in AFNI dataset
          header files.  For backwards compatibility, the default unit
          of TR (i.e., without a suffix 's') is still milliseconds, but
          this value will be converted to seconds when the dataset is
          written to disk.  Any old AFNI datasets that have millisecond
          units for TR will be read in to all AFNI programs with the TR
          converted to seconds.

  -Torg ttt = set time origin of dataset to 'ttt' [default=0.0]
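
  For example, a complete 3D+time command might look like this (a
  sketch; the prefix and the input image list are hypothetical):

      to3d -epan -prefix run1 -time:zt 16 64 4.0sec alt+z I.*

  This declares 16 slices and 64 time points (so exactly 16*64 = 1024
  input images are expected), a TR of 4 seconds, and 'alt+z' ordering.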

COMMAND LINE GEOMETRY SPECIFICATION [* NEW IN 1996 *]
   -xFOV   [dimen1][direc1]-[dimen2][direc2]
      or
   -xSLAB  [dimen1][direc1]-[direc2]

   (Similar -yFOV, -ySLAB, -zFOV and -zSLAB options are also present.)

 These options specify the size and orientation of the x-axis extent
 of the dataset.  [dimen#] means a dimension (in mm); [direc] is
 an anatomical direction code, chosen from
      A (Anterior)    P (Posterior)    L (Left)
      I (Inferior)    S (Superior)     R (Right)
 Thus, 20A-30P means that the x-axis of the input images runs from
 20 mm Anterior to 30 mm Posterior.  For convenience, 20A-20P can be
 abbreviated as 20A-P.

 -xFOV  is used to mean that the distances are from edge-to-edge of
          the outermost voxels in the x-direction.
 -xSLAB is used to mean that the distances are from center-to-center
          of the outermost voxels in the x-direction.

 Under most circumstances, -xFOV, -yFOV, and -zSLAB would be the
 correct combination of geometry specifiers to use.  For example,
 a common type of run at MCW would be entered as
    -xFOV 120L-R -yFOV 120A-P -zSLAB 60S-50I

 **NOTE WELL: -xFOV 240L-R does not mean a Field-of-View that is 240 mm
               wide!  It means one that stretches from 240R to 240L, and
               so is 480 mm wide.
              The 'FOV' indicates that this direction was acquired
                with Fourier encoding, and so the distances are naturally
               specified from the edge of the volume.
              The 'SLAB' indicates that this direction was acquired with
               slice encoding (by the RF excitation), and so distances
               are naturally specified by the center of the slices.
              For non-MRI data (e.g., CT), I'm not sure what the correct
               input format to use here would be -- be careful out there!

Z-AXIS SLICE OFFSET ONLY
 -zorigin distz  Puts the center of the 1st slice off at the
                 given distance ('distz' in mm).  This distance
                 is in the direction given by the corresponding
                 letter in the -orient code.  For example,
                   -orient RAI -zorigin 30
                 would set the center of the first slice at
                 30 mm Inferior.
    N.B.: This option has no effect if the FOV or SLAB options
          described above are used.

INPUT IMAGE FORMATS [* SIGNIFICANTLY CHANGED IN 1996 *]
  Image files may be single images of unsigned bytes or signed shorts
  (64x64, 128x128, 256x256, 512x512, or 1024x1024) or may be grouped
  images (that is, 3- or 4-dimensional blocks of data).
  In the grouped case, the string for the command line file spec is like

    3D:hglobal:himage:nx:ny:nz:fname   [16 bit input]
    3Ds:hglobal:himage:nx:ny:nz:fname  [16 bit input, swapped bytes]
    3Db:hglobal:himage:nx:ny:nz:fname  [ 8 bit input]
    3Di:hglobal:himage:nx:ny:nz:fname  [32 bit input]
    3Df:hglobal:himage:nx:ny:nz:fname  [floating point input]
    3Dc:hglobal:himage:nx:ny:nz:fname  [complex input]
    3Dd:hglobal:himage:nx:ny:nz:fname  [double input]

  where '3D:' or '3Ds:' signals this is a 3D input file of signed shorts
        '3Db:'          signals this is a 3D input file of unsigned bytes
        '3Di:'          signals this is a 3D input file of signed ints
        '3Df:'          signals this is a 3D input file of floats
        '3Dc:'          signals this is a 3D input file of complex numbers
                         (real and imaginary pairs of floats)
        '3Dd:'          signals this is a 3D input file of double numbers
                         (will be converted to floats)
        hglobal = number of bytes to skip at start of whole file
        himage  = number of bytes to skip at start of each 2D image
        nx      = x dimension of each 2D image in the file
        ny      = y dimension of each 2D image in the file
        nz      = number of 2D images in the file
        fname   = actual filename on disk to read

  * The ':' separators are required.  The k-th image starts at
      BYTE offset hglobal+(k+1)*himage+vs*k*nx*ny in file 'fname'
      for k=0,1,...,nz-1.
  * Here, vs=voxel length=1 for bytes, 2 for shorts, 4 for ints and floats,
      and 8 for complex numbers.
  * As a special case, hglobal = -1 means read data starting at
      offset len-nz*(vs*nx*ny+himage), where len=file size in bytes.
      (That is, to read the needed data from the END of the file.)
  * Note that there is no provision for skips between data rows inside
      a 2D slice, only for skips between 2D slice images.
  * The int, float, and complex formats presume that the data in
      the image file are in the 'native' format for this CPU; that is,
      there is no provision for data conversion (unlike the 3Ds: format).
  * Double input will be converted to floats (or whatever -datum is)
      since AFNI doesn't support double precision datasets.
  * Whether the 2D image data is interpreted as a 3D block or a 3D+time
      block depends on the rest of the command line parameters.  The
      various 3D: input formats are just ways of inputting multiple 2D
      slices from a single file.
  * SPECIAL CASE: If fname is ALLZERO, then this means not to read
      data from disk, but instead to create nz nx*ny images filled
      with zeros.  One application of this is to make it easy to create
      a dataset of a specified geometry for use with other programs.
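
  For example (a sketch only -- the file name and header size here are
  hypothetical), a raw file 'epi.raw' holding 100 64x64 signed-short
  slices after a 2880-byte global header could be read with

      to3d -prefix run1 '3D:2880:0:64:64:100:epi.raw'

  where slice k starts at byte offset 2880 + 2*k*64*64, per the
  offset formula above.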

The 'raw pgm' image format is also supported; it reads data into 'byte' images.

* ANALYZE (TM) .hdr/.img files can now be read - give the .hdr filename on
  the command line.  The program will detect if byte-swapping is needed on
  these images, and can also set the voxel grid sizes from the first .hdr file.
  If the 'funused1' field in the .hdr is positive, it will be used to scale the
  input values.  If the environment variable AFNI_ANALYZE_FLOATIZE is YES, then
  .img files will be converted to floats on input.

* Siemens .ima image files can now be read.  The program will detect if
  byte-swapping is needed on these images, and can also set voxel grid
  sizes and orientations (correctly, I hope).
* Some Siemens .ima files seem to have their EPI slices stored in
  spatial order, and some in acquisition (interleaved) order.  This
  program doesn't try to figure this out.  You can use the command
  line option '-sinter' to tell the program to assume that the images
  in a single .ima file are interleaved; for example, if there are
  7 images in a file, then without -sinter, the program will assume
  their order is '0 1 2 3 4 5 6'; with -sinter, the program will
  assume their order is '0 2 4 6 1 3 5' (here, the number refers
  to the slice location in space).

* GEMS I.* (IMGF) 16-bit files can now be read. The program will detect
  if byte-swapping is needed on these images, and can also set voxel
  grid sizes and orientations.  It can also detect the TR in the
  image header.  If you wish to rely on this TR, you can set TR=0
  in the -time:zt or -time:tz option.
* If you use the image header's TR and also use @filename for the
  tpattern, then the values in the tpattern file should be fractions
  of the true TR; they will be multiplied by the true TR once it is
  read from the image header.
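
  For example (a sketch; the file names and slice/time counts are
  hypothetical), a set of GEMS I.* files could be read as 18 slices
  by 100 time points, taking the TR from the image headers, with
  something like

      to3d -prefix epi01 -time:zt 18 100 0 alt+z 'I.*'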

 NOTES:
  * Not all AFNI programs support all datum types.  Shorts and
      floats are safest. (See the '-datum' option below.)
  * If '-datum short' is used or implied, then int, float, and complex
      data will be scaled to fit into a 16 bit integer.  If the '-gsfac'
      option below is NOT used, then each slice will be SEPARATELY
      scaled according to the following choice:
      (a) If the slice values all fall in the range -32767 .. 32767,
          then no scaling is performed.
      (b) Otherwise, the image values are scaled to lie in the range
          0 .. 10000 (original slice min -> 0, original max -> 10000).
      This latter option is almost surely not what you want!  Therefore,
      if you use the 3Di:, 3Df:, or 3Dc: input methods and store the
      data as shorts, I suggest you supply a global scaling factor.
      Similar remarks apply to '-datum byte' scaling, with even more force.
  * To3d now incorporates POSIX filename 'globbing', which means that
      you can input filenames using 'escaped wildcards', and then to3d
      will internally do the expansion to the list of files.  This is
      desirable because some systems limit the number of command-line
      arguments to a program, and you may wish to input more slice
      files than that limit allows.  For example,
          to3d exp.?.*
      might overflow the system command line limitations.  The way to do
      this using internal globbing would be
          to3d exp.\?.\*
      where the \ characters indicate to pass the wildcards ? and *
      through to the program, rather than expand them in the shell.
      (a) Note that if you choose to use this feature, ALL wildcards in
          a filename must be escaped with \ or NONE must be escaped.
      (b) Using the C shell, it is possible to turn off shell globbing
          by using the command 'set noglob' -- if you do this, then you
          do not need to use the \ character to escape the wildcards.
      (c) Internal globbing of 3D: file specifiers is supported in to3d.
          For example, '3D:0:0:64:64:100:sl.\*' could be used to input
          a series of 64x64x100 files with names 'sl.01', 'sl.02' ....
          This type of expansion is specific to to3d; the shell will not
          properly expand such 3D: file specifications.
      (d) In the C shell (csh or tcsh), you can use forward single 'quotes'
          to prevent shell expansion of the wildcards, as in the command
              to3d '3D:0:0:64:64:100:sl.*'
    The globbing code is adapted from software developed by the
    University of California, Berkeley, and is copyrighted by the
    Regents of the University of California (see file mcw_glob.c).

RGB datasets [Apr 2002]
-----------------------
You can now create RGB-valued datasets.  Each voxel contains 3 byte values
ranging from 0..255.  RGB values may be input to to3d in any of the
following ways:
 * Using raw PPM formatted 2D image files.
 * Using JPEG formatted 2D files.
 * Using TIFF, BMP, GIF, PNG formatted 2D files [if netpbm is installed].
 * Using the 3Dr: input format, analogous to 3Df:, etc., described above.
RGB datasets can be created as functional FIM datasets, or as anatomical
datasets:
 * RGB fim overlays are transparent in AFNI only where all three
    bytes are zero - that is, you can't overlay solid black.
 * At present, there is limited support for RGB datasets.
    About the only thing you can do is display them in 2D slice
    viewers in AFNI.
You can also create RGB-valued datasets using program 3dThreetoRGB.

Other Data Options
------------------
  -2swap
     This option will force all input 2 byte images to be byte-swapped
     after they are read in.
  -4swap
     This option will force all input 4 byte images to be byte-swapped
     after they are read in.
  -8swap
     This option will force all input 8 byte images to be byte-swapped
     after they are read in.
  BUT PLEASE NOTE:
     Input images that are auto-detected to need byte-swapping
     (GEMS I.*, Siemens *.ima, ANALYZE *.img, and 3Ds: files)
     will NOT be swapped again by one of the above options.
     If you want to swap them again for some bizarre reason,
     you'll have to use the 'Byte Swap' button on the GUI.
     That is, -2swap/-4swap will swap bytes on input files only
     if they haven't already been swapped by the image input
     function.

  -zpad N   OR
  -zpad Nmm 
     This option tells to3d to write 'N' slices of all zeros on each side
     in the z-direction.  This will make the dataset 'fatter', but make it
     simpler to align with datasets from other scanning sessions.  This same
     function can be accomplished later using program 3dZeropad.
   N.B.: The zero slices will NOT be visible in the image viewer in to3d, but
          will be visible when you use AFNI to look at the dataset.
   N.B.: If 'mm' follows the integer N, then the padding is measured in mm.
          The actual number of slices of padding will be rounded up.  So if
          the slice thickness is 5 mm, then '-zpad 16mm' would be the equivalent
          of '-zpad 4' -- that is, 4 slices on each z-face of the volume.
   N.B.: If the geometry parent dataset was created with -zpad, the spatial
          location (origin) of the slices is set using the geometry dataset's
          origin BEFORE the padding slices were added.  This is correct, since
          you need to set the origin on the current dataset as if the padding
          slices were not present.
   N.B.: Unlike the '-zpad' option to 3drotate and 3dvolreg, this adds slices
          only in the z-direction.
   N.B.: You can set the environment variable 'AFNI_TO3D_ZPAD' to provide a
          default for this option.
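
    For example (csh/tcsh syntax; the value is illustrative), a default
    of 2 slices of padding could be set with

        setenv AFNI_TO3D_ZPAD 2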

  -gsfac value
     will scale each input slice by 'value'.  For example,
     '-gsfac 0.31830989' will scale by 1/Pi (approximately).
     This option only has meaning if one of '-datum short' or
     '-datum byte' is used or implied.  Otherwise, it is ignored.
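
     For example (a sketch; the scale factor and file name are
     hypothetical), float input could be stored as shorts with a
     known global scaling via

         to3d -datum short -gsfac 1000.0 -prefix scaled \
              '3Df:0:0:64:64:100:vals.raw'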

  -datum type
     will set the voxel data to be stored as 'type', which is currently
     allowed to be short, float, byte, or complex.
     If -datum is not used, then the datum type of the first input image
     will determine what is used.  In that case, the first input image will
     determine the type as follows:
        byte       --> byte
        short      --> short
        int, float --> float
        complex    --> complex
     If -datum IS specified, then all input images will be converted
     to the desired type.  Note that the list of allowed types may
     grow in the future, so you should not rely on the automatic
     conversion scheme.  Also note that floating point datasets may
     not be portable between CPU architectures.

  -nofloatscan
     tells to3d NOT to scan input float and complex data files for
     illegal values - the default is to scan and replace illegal
     floating point values with zeros (cf. program float_scan).

  -in:1
     Input of huge 3D: files (with all the data from a 3D+time run, say)
      can cause to3d to fail from lack of memory.  The reason is that
      the images from a file are all read into RAM at once, and then
     are scaled, converted, etc., as needed, then put into the final
     dataset brick.  This switch will cause the images from a 3D: file
     to be read and processed one slice at a time, which will lower the
     amount of memory needed.  The penalty is somewhat more I/O overhead.

NEW IN 1997:
  -orient code
     Tells the orientation of the 3D volumes.  The code must be 3 letters,
     one each from the pairs {R,L} {A,P} {I,S}.  The first letter gives
     the orientation of the x-axis, the second the orientation of the
     y-axis, the third the z-axis:
        R = right-to-left         L = left-to-right
        A = anterior-to-posterior P = posterior-to-anterior
        I = inferior-to-superior  S = superior-to-inferior
     Note that the -xFOV, -zSLAB constructions can convey this information.

NEW IN 2001:
  -skip_outliers
     If present, this tells the program to skip the outlier check that is
     automatically performed for 3D+time datasets.  You can also turn this
     feature off by setting the environment variable AFNI_TO3D_OUTLIERS
     to "No".
  -text_outliers
    If present, tells the program to only print out the outlier check
     results in text form, not graph them.  You can make this the default
     by setting the environment variable AFNI_TO3D_OUTLIERS to "Text".
    N.B.: If to3d is run in batch mode, then no graph can be produced.
          Thus, this option only has meaning when to3d is run with the
          interactive graphical user interface.
  -save_outliers fname
    Tells the program to save the outliers count into a 1D file with
    name 'fname'.  You could graph this file later with the command
       1dplot -one fname
    If this option is used, the outlier count will be saved even if
    nothing appears 'suspicious' (whatever that means).
  NOTES on outliers:
    * See '3dToutcount -help' for a description of how outliers are
       defined.
    * The outlier count is not done if the input images are shorts
       and there is a significant (> 1%) number of negative inputs.
    * There must be at least 6 time points for the outlier count to
       be carried out.

OTHER NEW OPTIONS:
  -assume_dicom_mosaic
    If present, this tells the program that any Siemens DICOM file
    is a potential MOSAIC image, even without the indicator string.
  -oblique_origin
    assume origin and orientation from the oblique transformation matrix
    rather than traditional cardinal information (ignores FOV/SLAB
    options).  Sometimes useful for Siemens mosaic flipped datasets.
  -reverse_list
    reverse the input file list.
    Convenience for Siemens non-mosaic flipped datasets.


OPTIONS THAT AFFECT THE X11 IMAGE DISPLAY
   -gamma gg    the gamma correction factor for the
                  monitor is 'gg' (default gg is 1.0; greater than
                  1.0 makes the image contrast larger -- this may
                  also be adjusted interactively)
   -ncolors nn  use 'nn' gray levels for the image
                  displays (default is 80)
   -xtwarns     turn on display of Xt warning messages

++ Compile date = Mar 13 2009




AFNI program: waver
Usage: waver [options] > output_filename
Creates an ideal waveform timeseries file.
The output goes to stdout, and normally would be redirected to a file.

Options: (# refers to a number; [xx] is the default value)
  -WAV = Sets waveform to Cox special                    [default]
           cf. AFNI FAQ list for formulas:
           http://afni.nimh.nih.gov/afni/doc/faq/17
  -GAM = Sets waveform to form t^b * exp(-t/c)
           (cf. Mark Cohen)

  -EXPR "expression" = Sets waveform to the expression given,
                         which should depend on the variable 't'.
     e.g.: -EXPR "step(t-2)*step(12-t)*(t-2)*(12-t)"
     N.B.: The peak value of the expression on the '-dt' grid will
           be scaled to the value given by '-peak'; if this is not
           desired, set '-peak 0', and the 'natural' peak value of
           the expression will be used.

  -FILE dt wname = Sets waveform to the values read from the file
                   'wname', which should be a single column .1D file
                    (i.e., 1 ASCII number per line).  The 'dt' value
                    is the time step (in seconds) between lines
                    in 'wname'; the first value will be at t=0, the
                    second at t='dt', etc.  Intermediate time values
                    will be linearly interpolated.  Times past the
                    end of the 'wname' file will have the waveform
                    value set to zero.
               *** N.B.: If the -peak option is used AFTER -FILE,
                         its value will be multiplied into the result.
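
      For example (a sketch; 'custom_wave.1D' is a hypothetical
      single-column file sampled every 0.5 seconds):
          waver -FILE 0.5 custom_wave.1D > ideal.1D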

These options set parameters for the -WAV waveform:
  -delaytime #   = Sets delay time to # seconds                [2]
  -risetime #    = Sets rise time to # seconds                 [4]
  -falltime #    = Sets fall time to # seconds                 [6]
  -undershoot #  = Sets undershoot to # times the peak         [0.2]
                     (this should be a nonnegative factor)
  -restoretime # = Sets time to restore from undershoot        [2]

These options set parameters for the -GAM waveform:
  -gamb #        = Sets the parameter 'b' to #                 [8.6]
  -gamc #        = Sets the parameter 'c' to #                 [0.547]
  -gamd #        = Sets the delay time to # seconds            [0.0]

These options apply to all waveform types:
  -peak #        = Sets peak value to #                        [100]
  -dt #          = Sets time step of output AND input          [0.1]
  -TR #          = '-TR' is equivalent to '-dt'

The default is just to output the waveform defined by the parameters
above.  If an input file is specified by one of the options below, then
the timeseries defined by that file will be convolved with the ideal
waveform defined above -- that is, each nonzero point in the input
timeseries will generate a copy of the waveform starting at that point
in time, with the amplitude scaled by the input timeseries value.

  -xyout         = Output data in 2 columns:
                     1=time 2=waveform (useful for graphing)
                     [default is 1 column=waveform]

  -input infile  = Read timeseries from *.1D formatted 'infile';
                     convolve with waveform to produce output
              N.B.: you can use a sub-vector selector to choose
                    a particular column of infile, as in
                      -input 'fred.1D[3]'

  -inline DATA   = Read timeseries from command line DATA;
                     convolve with waveform to produce output
                     DATA is in the form of numbers and
                     count@value, as in
                     -inline 20@0.0 5@1.0 30@0.0 1.0 20@0.0 2.0
     which means a timeseries with 20 zeros, then 5 ones, then 30 zeros,
     a single 1, 20 more zeros, and a final 2.
     [The '@' character may actually be any of: '@', '*', 'x', 'X'.
      Note that * must be typed as \* to prevent the shell from
      trying to interpret it as a filename wildcard.]

  -tstim DATA    = Read discrete stimulation times from the command line
                     and convolve the waveform with delta-functions at
                     those times.  In this input format, the times do
                     NOT have to be at intervals of '-dt'.  For example
                       -dt 2.0 -tstim 5.6 9.3 13.7 16.4
                     specifies a TR of 2 s and stimuli at 4 times
                     (5.6 s, etc.) that do not correspond to integer
                     multiples of TR.  DATA values cannot be negative.
                   If the DATA is stored in a file, you can read it
                     onto the command line using something like
                       -tstim `cat filename`
                      using the backward-single-quote (backtick)
                      operator of the usual Unix shells.
   ** 12 May 2003: The times after '-tstim' can now also be specified
                     in the format 'a:b', indicating a continuous ON
                     period from time 'a' to time 'b'.  For example,
                       -dt 2.0 -tstim 13.2:15.7 20.3:25.3
                      The amplitude of a response of duration equal to
                      'dt' is equal to the amplitude of a single impulse
                     response (which is the special case a=b).  N.B.: This
                     means that something like '5:5.01' is very different
                     from '5' (='5:5').  The former will have a small amplitude
                     because of the small duration, but the latter will have
                     a large amplitude because the case of an instantaneous
                     input is special.  It is probably best NOT to mix the
                     two types of input to '-tstim' for this reason.
                     Compare the graphs from the 2 commands below:
                       waver -dt 1.0 -tstim 5:5.1 | 1dplot -stdin
                       waver -dt 1.0 -tstim 5     | 1dplot -stdin
                     If you prefer, you can use the form 'a%c' to indicate
                     an ON interval from time=a to time=a+c.
   ** 13 May 2005: You can now add an amplitude to each response individually.
                     For example
                       waver -dt 1.0 -peak 1.0 -tstim 3.2 17.9x2.0 23.1x-0.5
                     puts the default response amplitude at time 3.2,
                     2.0 times the default at time 17.9, and -0.5 times
                     the default at time 23.1.

  -when DATA     = Read time blocks when stimulus is 'on' (=1) from the
                      command line and convolve the waveform with
                      a zero-one input.  For example:
                       -when 20..40 60..80
                     means that the stimulus function is 1.0 for time
                     steps number 20 to 40, and 60 to 80 (inclusive),
                     and zero otherwise.  (The first time step is
                     numbered 0.)

  -numout NN     = Output a timeseries with NN points; if this option
                     is not given, then enough points are output to
                     let the result tail back down to zero.
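
      For example (a sketch; 'events.1D' is a hypothetical input
      timeseries), a fixed-length ideal response could be made with
          waver -GAM -dt 2.0 -input 'events.1D[0]' -numout 130 > ideal.1D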

  -ver           = Output version information and exit.

* Only one of the timeseries input options above can be used at a time.
* Using the AFNI program 1dplot, you can do something like the following,
  to check if the results make sense:
    waver -GAM -tstim 0 7.7 | 1dplot -stdin
* Note that program 3dDeconvolve can now generate many different
  waveforms internally, markedly reducing the need for this program.
* If a square wave is desired, see the 'sqwave' program.

++ Compile date = Mar 13 2009




AFNI program: whereami
Usage: whereami [x y z [output_format]] [-lpi/-spm] [-atlas ATLAS] 
   ++ Reports brain areas located at x y z mm in TLRC space according 
      to atlases present with your AFNI installation.
   ++ Shows the contents of available atlases.
   ++ Extracts ROIs for certain atlas regions using symbolic notation.
   ++ Reports on the overlap of ROIs with Atlas-defined regions.

Options (all options are optional):
-----------------------------------
    x y z [output_format] : Specifies the x y z coordinates of the 
                            location probed. Coordinates are in mm and 
                            assumed to be in RAI or DICOM format, unless
                            otherwise specified (see -lpi/-spm below)
                            In the AFNI viewer, coordinate format is
                            specified above the coordinates in the top-left
                            of the AFNI controller. Right click in that spot
                            to change between RAI/DICOM and LPI/SPM.
                     NOTE I: In the output, the coordinates are reported
                             in LPI, in keeping with the convention used
                             in most publications.
                    NOTE II: To go between LPI and RAI, simply flip the
                             sign of the X and Y coordinates.

                            Output_format is an optional flag where:
                            0 is for standard AFNI 'Where am I?' format.
                            1 is for Tab separated list, meant to be 
                            friendly for use in spreadsheets. 
                            The default output flag is 0. You can use
                            options -tab/-classic instead of the 0/1 flag.
 -coord_file XYZ.1D: Input coordinates are stored in file XYZ.1D
                     Use the '[ ]' column selectors to specify the
                     X,Y, and Z columns in XYZ.1D.
                     Say you ran the following 3dclust command:
           3dclust -1Dformat -1clip 0.3  5 3000 func+orig'[1]' > out.1D
                     You can run whereami on each cluster's center
                     of mass with:
           whereami -coord_file out.1D'[1,2,3]' -tab
               NOTE: You cannot use -coord_file AND specify x,y,z on
                     command line.
 -lpi/-spm: Input coordinates' orientation is in LPI or SPM format. 
 -rai/-dicom: Input coordinates' orientation is in RAI or DICOM format.
 NOTE: The default format for input coordinates' orientation is set by 
       AFNI_ORIENT environment variable. If it is not set, then the default 
       is RAI/DICOM
 -space SPC: Space of input coordinates.
        SPC can be either MNI or TLRC (the default).
       If SPC is the MNI space, the x,y,z coordinates are transformed to
       TLRC space prior to whereami query.
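        For example (coordinates illustrative), a point given in LPI
        order in MNI space could be queried with:
          whereami 24 -32 56 -space MNI -lpi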
 -classic: Classic output format (output_format = 0).
 -tab: Tab delimited output (output_format = 1). 
       Useful for spreadsheeting.
 -atlas ATLAS: Use atlas ATLAS for the query.
               You can use this option repeatedly to specify
               more than one atlas. Default is all available atlases.
               ATLAS is one of:
   TT_Daemon   : Created by tracing Talairach and Tournoux brain illustrations.
    Generously contributed by Jack Lancaster and Peter Fox of RIC UTHSCSA.

   CA_N27_MPM  : Anatomy Toolbox's atlases, some created from cytoarchitectonic 
   CA_N27_ML   : studies of 10 human post-mortem brains (CA_N27_MPM, CA_N27_PM). 
   CA_N27_PM   : Generously contributed by Simon Eickhoff,
   CA_N27_LR   : Katrin Amunts and Karl Zilles of IME, Julich, 
   Germany. Please take into account the references and abide by the 
   warning below (provided with the Anatomy toolbox) when using these 
   atlases:
   Anatomy Toolbox Reference and Warning:
   --------------------------------------
      ANATOMY TOOLBOX                                             
      Version 1.5                                                 
      written by:                                                 
         Simon Eickhoff  (s.eickhoff@fz-juelich.de)        
         Institut for Medicine (IME) Research Center Juelich
         Phone + 49 2461-61-5219 / Fax + 49 2461-61-2820   
      References:
         Eickhoff SB et al.: A new SPM toolbox for combining probabilistic
          cytoarchitectonic maps and functional imaging data. (2005)
          NeuroImage 25 (4): 1325-1335
         Eickhoff SB et al.: Testing anatomically specified hypotheses in
          functional imaging using cytoarchitectonic maps. (2006)
          NeuroImage 32 (2): 570-82
         Eickhoff SB et al., Assignment of functional activations to
          probabilistic cytoarchitectonic areas revisited. (2007)
          NeuroImage 26 (3): 511-521
       Publications describing the included cytoarchitectonic maps:
             ->  Morosan et al., NeuroImage 2001
             ->  Amunts et al., J Comp Neurol 1999
             ->  Geyer et al., Nature 1996
             ->  S. Geyer, Springer press 2003
             ->  Geyer et al., NeuroImage, 1999, 2000
             ->  Grefkes et al., NeuroImage 2001
             ->  Eickhoff et al., Cerebral Cortex 2006a,b
             ->  Amunts et al., Anat Embryol 2005
             ->  Choi et al., J Comp Neurol 2006
             ->  Amunts et al., NeuroImage 2000
             ->  Malikovic et al., Cerebral Cortex 2006
             ->  Burgel et al., NeuroImage 1999, 2006
       All other areas may only be used with authors' permission ! 
       
       
      AFNI adaptation by
       Ziad S. Saad (saadz@mail.nih.gov, SSCC/NIMH/NIH)
       Info automatically created with CA_EZ_Prep.m based on se_note.m
   
   See Eickhoff et al. Neuroimage 25 (2005) for more info on:
       Probability Maps (CA_N27_PM)
       and Maximum Probability Maps (CA_N27_MPM)
   ----------------------------------------------------------

 -atlas_sort: Sort results by atlas (default)
 -zone_sort | -radius_sort: Sort by radius of search
 -old : Run whereami in the olde (Pre Feb. 06) way.
 -show_atlas_code: Shows integer code to area label map of the atlases
                   in use. The output is not too pretty because
                   the option is for debugging use.
 -show_atlas_region REGION_CODE: You can now use symbolic notation to
                                 select atlas regions. REGION_CODE has 
                                 three colon-separated elements forming it:
            Atlas_Name:Side:Area.
      Atlas_Name: one of the atlas names listed above.
                  If you do not have a particular atlas in your AFNI
                  installation, you'll need to download it (see below).
       Side      : Either left, right, or nothing (::) for bilateral.
       Area      : A string identifying an area. The string cannot contain
                   blanks; replace blanks with '_'.  For example, Cerebellar
                   Vermis becomes Cerebellar_Vermis. You can also use the
                   abbreviated version cereb_ver and the program will try
                   to guess at what you want and offer suggestions if it
                   can't find the area or if there is ambiguity.
                   Abbreviations are formed by truncating the components
                   (chunks) of an area's name (label). For example:
               1- TT_Daemon::ant_cing specifies the bilateral
                  anterior cingulate in the TT_Daemon atlas.
               2- CA_N27_ML:left:hippo specifies the left
                  hippocampus in the CA_N27_ML atlas.
               3- CA_N27_MPM:right:124 specifies the right
                  ROI with integer code 124 in the CA_N27_MPM atlas
               4- CA_N27_ML::cereb_ver seeks the Cerebellar
                   Vermis in the CA_N27_ML atlas. However, there are
                   many distinct areas with this name, so the program
                   will return with 'potential matches' or suggestions.
                  Use the suggestions to refine your query. For example:
                  CA_N27_ML::cereb_vermis_8
 -mask_atlas_region REGION_CODE: Same as -show_atlas_region, plus
                                 write out a mask dataset of the region.
 -prefix PREFIX: Prefix for the output mask dataset
 -max_areas MAX_N: Set a limit on the number of distinct areas to report.
             This option will override the value set by the environment
              variable AFNI_WHEREAMI_MAX_FIND, which is now set to 9.
             The variable  AFNI_WHEREAMI_MAX_FIND should be set in your
             .afnirc file.
 -max_search_radius MAX_RAD: Set a limit on the maximum searching radius when
                     reporting results. This option will override the 
                     value set by the environment variable 
                     AFNI_WHEREAMI_MAX_SEARCH_RAD,
                      which is now set to 7.5.
 NOTE: You can turn off some of the whining by setting the environment 
        variable AFNI_WHEREAMI_NO_WARN.
 -debug DEBUG: Debug flag
 -CA_N27_version: Output the version of the Anatomy Toolbox atlases and quit.
                  If you get warnings that AFNI's version differs from that 
                  of the atlas' datasets then you will need to download the 
                  latest atlas datasets from AFNI's website. You cannot use 
                  older atlases because the atlas' integer-code to area-label
                  map changes from one version to the next.
                  To get the version of the atlas' datasets, run 3dNotes 
                  on the atlases and look for 'Version' in one of the notes
                  printed out.

Options for determining the percent overlap of ROIs with Atlas-defined areas:
---------------------------------------------------------------------------
 -bmask MASK: Report on the overlap of all non-zero voxels in MASK dataset
              with various atlas regions. NOTE: MASK itself need not be
              binary; the masking operation produces a binary mask.
 -omask ORDERED_MASK: Report on the overlap of each ROI formed by an integral 
                     value in ORDERED_MASK. For example, if ORDERED_MASK has 
                     ROIs with values 1, 2, and 3, then you'll get three 
                     reports, one for each ROI value. Note that -omask and
                     -bmask are mutually exclusive.
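        For example ('rois+tlrc' being a hypothetical dataset whose
        voxel values 1, 2, 3, ... define the ROIs):
           whereami -omask rois+tlrc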
 -cmask MASK_COMMAND: command for masking values in BINARY_MASK, 
                      or ORDERED_MASK on the fly.
        e.g. whereami -bmask JoeROIs+tlrc \
                      -cmask '-a JoeROIs+tlrc -expr equals(a,2)'
              Would set to 0, all voxels in JoeROIs that are not
              equal to 2.
        Note that this mask should form a single sub-brick,
        and must be at the same resolution as BINARY_MASK or ORDERED_MASK.
        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!, thanks Rick!)).
        See '3dmaskdump -help' for more information.

Note on the reported coordinates of the Focus Point:
----------------------------------------------------
  Coordinates of the Focus Point are reported in 3 coordinate spaces.
The 3 spaces are Talairach (TLRC), MNI, MNI Anatomical (MNI Anat.). 
All three coordinates are reported in the LPI coordinate order.
  The TLRC coordinates follow the convention specified by the Talairach and 
     Tournoux Atlas.
  The MNI coordinates are derived from the TLRC ones using an approximation 
     equation.
  The MNI Anat. coordinates are a shifted version of the MNI coordinates 
     (see Eickhoff et al. 05).

  However, because the MNI coordinates reported here are derived from TLRC 
by an approximate function, it is best to derive the MNI Anat. coordinates 
in a different manner. This alternative is possible because the MNI Anat. 
coordinates are defined relative to the single-subject N27 dataset. 
MNI Anat. coordinates are thus derived via the 12 piece-wise 
linear transformations used to put the MNI N27 brain in TLRC space.

Installing Atlases:
-------------------
   Atlases are stored as AFNI datasets, plus perhaps an extra file or two.
   These files should be placed in a location that AFNI can find. 
   Let us refer to this directory as ATLAS_DIR; usually it is the same as
   the directory in which AFNI's binaries (such as the program afni) reside.
   At a minimum, you need the TTatlas+tlrc dataset present to activate the 
   AFNI 'whereami' feature. To install it, if you do not have it already, 
   download TTatlas+tlrc* from this link: 
   http://afni.nimh.nih.gov/pub/dist/tgz/
   and move TTatlas+tlrc* to ATLAS_DIR.
   The Anatomy Toolbox atlases are in archives called CA_EZ_v*.tgz with *
   indicating a particular version number. Download the archive from:
   http://afni.nimh.nih.gov/pub/dist/tgz/, unpack it and move all the 
   files in the unpacked directory into ATLAS_DIR.

How To See Atlas Data In AFNI as datasets:
------------------------------------------
   If you want to view the atlases in the same session
   that you are working with, choose one of the options below.
   For the sake of illustration, I will assume that the atlases
   reside in directory /user/abin/.
 1-Load the session where atlases reside on afni's command
   line: afni ./ /user/abin
 2-Set AFNI's environment variable AFNI_GLOBAL_SESSION
   to the directory where the atlases reside.
    You can add the following to your .afnirc file:
   AFNI_GLOBAL_SESSION = /user/abin
   Or, for a less permanent solution, you can set this environment
   variable in the shell you are working in with (for csh and tcsh):
   setenv AFNI_GLOBAL_SESSION /user/abin 
   ***********
   BE CAREFUL: Do not use the AFNI_GLOBAL_SESSION approach
   *********** if the data in your session is not already 
   written in +tlrc space. To be safe, you must have
   both +tlrc.HEAD and +tlrc.BRIK for all datasets
   in that session (directory). Otherwise, if the anat parents are
   not properly set, you can end up applying the +tlrc transform
   from one of the atlases instead of the proper anatomical 
   parent for that session.

   Note: You can safely ignore the:
              ** Can't find anat parent ....  
         messages for the Atlas datasets.

Convenient Colormaps For Atlas Datasets:
----------------------------------------
   New colormaps (colorscales) have been added
   to AFNI to facilitate viewing integral-valued datasets
   like ROIs and atlases.  Here's what to do:
     o set the color map number chooser to '**' 
     o right-click on the color map and select 'Choose Colorscale'
     o pick one of: CytoArch_ROI_256, CytoArch_ROI_256_gap, ROI_32, etc.
     o set autorange off and set the range to the number of colors 
       in the chosen map (256, 32, etc.). 
       Color map CytoArch_ROI_256_gap was created for the proper viewing
       of the Maximum Probability Maps of the Anatomy Toolbox.

How To See Atlas regions overlaid in the AFNI GUI:
--------------------------------------------------
   To see specific atlas regions overlaid on underlay and other overlay data,
     1. In the Overlay control panel, check "See TT Atlas Regions" 
     2. Switch the view to Talairach in the View Panel
     3. Right-click on image and select "-Atlas colors". In the Atlas colors
        menu, select the colors you would like and then choose Done.
     The images need to be redrawn to see the atlas regions, for instance,
        by changing slices. Additional help is available in the Atlas colors
        menu.
   For the renderer plug-in, the underlay and overlay datasets should both
      have Talairach view datasets actually written out to disk.
   The whereami and "Talairach to" functions are also available by right-
     clicking in an image window.

Examples:
_________
   For a cluster center close to the top of the brain at -12, -26, 76 (LPI),
   assuming the coordinates are in Talairach space, whereami would report:
   > whereami -12 -26 76 -lpi
   > Focus point (LPI)= 
   -12 mm [L], -26 mm [P], 76 mm [S] {T-T Atlas}

   Atlas CA_N27_MPM: Cytoarch. Max. Prob. Maps (N27)
   Within 4 mm: Area 6
   Within 7 mm: Area 4a

   Atlas CA_N27_ML: Macro Labels (N27)
   Within 1 mm: Left Paracentral Lobule
   Within 6 mm: Left Precentral Gyrus
   -AND- Left Postcentral Gyrus

   To create a mask dataset of both the left and right amygdala, you can do the
   following (although masks and datasets can be specified in the same way for
   other afni commands, so a mask, very often, is not needed as a separate
   dataset):
   > whereami -prefix amymask -mask_atlas_region 'TT_Daemon::amygdala'

Questions, Comments:
-------------------
   Ziad S. Saad   (saadz@mail.nih.gov)
   SSCC/NIMH/NIH/DHHS/USA

Thanks to Kristina Simonyan for feedback and testing.



++ Compile date = Mar 13 2009




AFNI program: whirlgif
whirlgif Rev 1.00 (C) 1996 by Kevin Kadow
                  (C) 1991,1992 by Mark Podlipec

whirlgif is a quick program that reads a series of GIF files, and produces
a single gif file composed of those images.

Usage: whirlgif [-v] [-trans index ] [-time delay] [-o outfile]
                [-loop] [-i incfile] file1 [ -time delay] file2

options:
   -v              verbose mode
   -loop [count]   add the Netscape 'loop' extension.
   -time delay     inter-frame timing.
   -trans index    set the colormap index 'index' to be transparent
   -o outfile      write the results to 'outfile'
   -i incfile      read a list of names from 'incfile'

TIPS

If you don't specify an output file, the GIF will be sent to stdout. This is
a good thing if you're using this in a CGI script, a very bad thing if you
run this from a terminal and forget to redirect stdout.

The output file (if any) and -loop _MUST_ be specified before any gif images.
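
For example (file names hypothetical), a looping animation could be
built with:

      whirlgif -loop -o anim.gif -time 10 frame1.gif frame2.gif frame3.gif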

You can specify several delay statements on the command line to change
the delay between images in the middle of an animation, e.g.

      whirlgif -time 5 a.gif b.gif c.gif -time 100 d.gif -time 5 e.gif f.gif

Although it's generally considered to be evil, you can also specify
several transparency statements on the command line, to change the transparent
color in the middle of an animation. This may cause problems for some programs.


BUGS
  + The loop 'count' is ineffective because Netscape always loops infinitely.
  + Should be able to specify delay in an 'incfile' list (see next bug).
  + Does not handle filenames starting with a - (hyphen), except in 'incfile'.

This program is available from http://www.msg.net/utility/whirlgif/
-------------------------------------------------------------------
Kevin Kadow     kadokev@msg.net
Based on 'txtmerge' written by:
Mark Podlipec   podlipec@wellfleet.com



AFNI program: xmat_tool.py

=============================================================================
xmat_tool.py    - a tool for evaluating an AFNI X-matrix

   This program gives the user the ability to evaluate a regression matrix
   (often referred to as an X-matrix).  With an AFNI X-matrix specified via
   -load_xmat, optionally along with an MRI time series specified via
   -load_1D, this program can display the:

         o  matrix condition numbers
         o  correlation matrix
         o  warnings regarding the correlation matrix
         o  cosine matrix (normalized XtX)
         o  warnings regarding the cosine matrix
         o  beta weights for fit against 1D time series
         o  fit time series

   --------------------------------------------------------------------------
   examples:

      Note that -no_gui is applied in each example, so that the program
      performs any requested actions and terminates, without opening a GUI
      (graphical user interface).

      0. Basic commands:

            xmat_tool.py -help
            xmat_tool.py -help_gui
            xmat_tool.py -hist
            xmat_tool.py -show_valid_opts
            xmat_tool.py -test
            xmat_tool.py -test_libs
            xmat_tool.py -ver

      1. Load an X-matrix and display the condition numbers.

            xmat_tool.py -no_gui -load_xmat X.xmat.1D -show_conds

      2. Load an X-matrix and display correlation and cosine warnings.

            xmat_tool.py -no_gui -load_xmat X.xmat.1D      \
                -show_cormat_warnings -show_cosmat_warnings

      3. Load an X-matrix and a 1D time series.  Display beta weights for
         the best fit to all regressors (specified as columns 0 to the last).

            xmat_tool.py -no_gui -load_xmat X.xmat.1D -load_1D norm.ts.1D \
                -choose_cols '0..$' -show_fit_betas

      4. Similar to 3, but show the actual fit time series.  Also, redirect
         the output to save the results in a 1D file.

            xmat_tool.py -no_gui -load_xmat X.xmat.1D -load_1D norm.ts.1D \
                -choose_cols '0..$' -show_fit_ts > fitts.1D

      5. Show many things.  Load an X-matrix and time series, and display
         conditions and warnings (but setting own cutoff values), as well as
         fit betas.

            xmat_tool.py -no_gui -load_xmat X.xmat.1D -load_1D norm.ts.1D  \
                -choose_cols '0..$'                                        \
                -show_conds                                                \
                -cormat_cutoff 0.3 -cosmat_cutoff 0.25                     \
                -show_cormat_warnings -show_cosmat_warnings                \
                -show_fit_betas

      6. Script many operations.  Load a sequence of X-matrices, and display
         condition numbers and warnings for each.

         Note that with -chrono, options are applied chronologically.

            xmat_tool.py -no_gui -chrono                                \
                -load_xmat X.1.xmat.1D                                  \
                -show_conds -show_cormat_warnings -show_cosmat_warnings \
                -load_xmat X.2.xmat.1D                                  \
                -show_conds -show_cormat_warnings -show_cosmat_warnings \
                -load_xmat X.3.xmat.1D                                  \
                -show_conds -show_cormat_warnings -show_cosmat_warnings \
                -load_1D norm.ts.1D                                     \
                -show_fit_betas                                         \
                -choose_cols '0..$'                                     \
                -show_fit_betas                                         \
                -choose_cols '0..26,36..$'                              \
                -show_fit_betas                                         \
                -load_xmat X.2.xmat.1D                                  \
                -choose_cols '0..$'                                     \
                -show_fit_betas

   --------------------------------------------------------------------------
   basic informational options:

      -help                           : show this help
      -help_gui                       : show the GUI help
      -hist                           : show the module history
      -show_valid_opts                : show all valid options
      -test                           : run a basic test
                               (requires X.xmat.1D and norm.022_043_012.1D)
      -test_libs                      : test for required python libraries
      -ver                            : show the version number

   ------------------------------------------
   general options:

      -choose_cols 'COLUMN LIST'      : select columns to fit against

          e.g. -choose_cols '0..$'
          e.g. -choose_cols '1..19(3),26,29,40..$'

          These columns will be used as the basis for the top condition
          number, as well as the regressor columns for fit computations.

          The column selection string should not contain spaces, and should
          be in the format of AFNI sub-brick selection.  Consider these
          examples

              2..13           : 2,3,4,5,6,7,8,9,10,11,12,13
              2..13(3)        : 2,5,8,11
              3,7,11          : 3,7,11
              20..$(4)        : 20,24,28,32 (assuming 33 columns, say)

      -chrono                         : apply options chronologically

          By default, the general options are applied before the show
          options, with the show options being in order.

          When the -chrono option is applied, all options are chronological,
          allowing the options to be applied as in a script.

          For example, a matrix could be loaded, and then a series of fit
          betas could be displayed by alternating a sequence of -choose_cols
          and -show_fit_betas options.

          Consider example 6.

      -cormat_cutoff CUTOFF           : set min cutoff for cormat warnings

          e.g. -cormat_cutoff 0.5

          By default, any value in the correlation matrix that is greater
          than or equal to 0.4 generates a warning.  This option can be used
          to override that minimum cutoff.

      -cosmat_cutoff CUTOFF           : set min cutoff for cosmat warnings

          e.g. -cosmat_cutoff 0.5

          By default, any value in the cosine matrix that is greater than or
          equal to 0.3827 generates a warning.  This option can be used to
          override that minimum cutoff.

          Note a few cosine values, relative to 90 degrees (PI/2):

              cos(.50 *PI/2) = .707
              cos(.75 *PI/2) = .3827
              cos(.875*PI/2) = .195

      -cosmat_motion                  : include motion in cosmat warnings

          In the cosine matrix, motion regressors are often pointing in a
          direction close to that of either baseline or other motion
          regressors.  By default, such warnings are not displayed.

          Use this option to include all such warnings.
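
          For example (a sketch, reusing the X.xmat.1D name from the
          examples above):

              xmat_tool.py -no_gui -load_xmat X.xmat.1D \
                  -cosmat_motion -show_cosmat_warnings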

      -load_xmat XMAT.xmat.1D         : load the AFNI X-matrix

          e.g. -load_xmat X.xmat.1D

          Load the X-matrix, as the basis for most computations.

      -load_1D DATA.1D                : load the 1D time series

          e.g. -load_1D norm_ts.1D

          Load the 1D time series, for which fit betas and a fit time series
          can be generated.

      -no_gui                         : do not start the GUI

          By default, this program runs a graphical interface.  If the user
          wishes to perform some actions and terminate without starting the
          GUI, this option can be applied.

      -verb LEVEL                     : set the verbose level

          Specify how much extra text should be displayed regarding the
          internal operations.  Valid levels are currently 0..5, with 0
          meaning 'quiet', 1 being the default, and 5 being the most verbose.

 ------------------------------------------
 show options:

      -show_col_types                 : display columns by regressor types

          Show which columns are considered 'main', 'chosen', 'baseline'
          and 'motion'.  This would correspond to condition numbers.

      -show_conds                     : display a list of condition numbers

          The condition number is the ratio of the largest eigenvalue to
          the smallest.  It provides an indication of how sensitive results
          of linear regression are to small changes in the data.  Condition
          numbers will tend to be larger with regressors that are more highly
          correlated.

          This option requests to display condition numbers for the X-matrix,
          restricted to the given sets of columns (regressors):

              - all regressors
              - chosen regressors (if there are any)
              - main regressors (non-baseline, non-motion)
              - main + baseline (non-motion)
              - main + motion   (non-baseline)

              - motion + baseline
              - baseline
              - motion

      -show_cormat                    : show the correlation matrix

          Display the entire correlation matrix as text.

          For an N-regressor (N columns) matrix, the NxN correlation matrix
          has as its i,j entry the Pearson correlation between regressors
          i and j.  It is computed as the de-meaned, normalized XtX.

          Values near +/-1.0 are highly correlated (go up and down together,
          or in reverse).  A value of 0.0 would mean they are orthogonal.

      -show_cormat_warnings           : show correlation matrix warnings

          Correlations for regressor pairs that are highly correlated
          (abs(r) >= 0.4, say) are displayed, unless it is for a motion
          regressor with either another motion regressor or a baseline
          regressor.

      -show_cosmat                    : show the cosine matrix

          Display the entire cosine matrix as text.

          This is similar to the correlation matrix, but the values show the
          cosines of the angles between pairs of regressor vectors.  Values
          near 1 mean the regressors are "pointed in the same direction" (in
          M-dimensional space).  A value of 0 means they are at right angles,
          which is to say orthogonal.
         
      -show_cosmat_warnings           : show cosine matrix warnings

          Cosines for regressor pairs that are pointed similar directions
          (abs(cos) >= 0.3827, say) are displayed.

      -show_fit_betas                 : show fit betas

          If a 1D time series is specified, beta weights will be displayed as
          best fit parameters of the model (X-matrix) to the data (1D time
          series).  These values are the scalars by which the corresponding
          regressors are multiplied, in order to fit the data as closely as
          possible (minimizing the sum of squared errors).

          Only chosen columns are fit to the data.

              see -choose_cols

      -show_fit_ts                    : show fit time series

          Similar to showing beta weights, the actual fit time series can
          be displayed with this option.  The fit time series is the sum of
          each regressor multiplied by its corresponding beta weight.

          Only chosen columns are fit to the data.

              see -choose_cols

      -show_xmat                      : display general X-matrix information

          This will display some general information that is stored in the
          .xmat.1D file.

      -show_1D                        : display general 1D information

          This will display some general information from the 1D time series
          file.

 ------------------------------------------
 GUI (graphical user interface) options:

      -gui_plot_xmat_as_one           : plot Xmat columns on single axis

-----------------------------------------------------------------------------
R Reynolds    October 2008
=============================================================================







AFNI README files (etc)


AFNI file: README.Ifile
Ifile: 

Program to read GE RT-EPI image files and divine their ordering
in time and space. Ifile also generates the command for @RenamePanga
to package the images into an AFNI brick.

Try one of the binaries Ifile_* or compile your own.

To compile:

Linux:
   cc -o Ifile -O2 Ifile.c -lm

SGI:
   gcc -o Ifile_Irix -O2 Ifile.c -lm

Solaris:
   gcc -o Ifile_Solaris Ifile.c -lm

For help on Ifile usage, execute Ifile with no arguments

@RenamePanga:
Script to package GE RT-EPI images into an AFNI brick.  


Robert W. Cox (rwcox@nih.gov) & Ziad S. Saad (ziad@nih.gov) SSCC/NIMH Dec. 10/01



AFNI file: README.atlas_building
README.atlas_building

Eickhoff Zilles Atlas building in AFNI

+ How to install a new Zilles, Amunts, Eickhoff SPM toolbox:
   1- Download the toolbox from: http://www.fz-juelich.de/ime/spm_anatomy_toolbox
   2- Unpack the archive and move directory Anatomy_XXX to matlab's spm path (not necessary, but nice should you want to use the toolbox in spm). On Eomer, v1.3b was placed here: /var/automount/Volumes/elrond0/home4/users/ziad/Programs/matlab/spm2/toolbox/Anatomy_13b
For each new atlas, rename Anatomy directory from .zip file to Anatomy_v??.
   3- Update the symbolic link Anatomy (under the same toolbox path above) to point to the latest Anatomy_XXX just created.
   4- Run the matlab function CA_EZ_Prep, which will create new versions of thd_ttatlas_CA_EZ[.c,.h] to reflect the new changes. The newly created files have '-auto' added to their names for safety. Examine the files, then move them (removing the '-auto') to AFNI's src: eomer:/Users/ziad/AFNI/src. The script also creates the file thd_ttatlas_CA_EZ-ref.h in AFNI's src; it contains the references for the library and will be used by the script @Prep_New_CA_EZ below.
Before running CA_EZ_Prep, edit the CA_EZ_Prep program (in AFNI's source, or wherever you have AFNI's MATLAB library installed) to look in the spm/toolbox/Anatomy folder you just created. The references are not parsed properly at present, resulting in an error, but they can be manually edited in the ...ref.h file created.

Also, the program no longer creates the thd_ttatlas_CA_EZ-ref.h file and reports an error. Instead, edit the existing source code file, adding any new references and updating the version numbers in the strings at the beginning and end. Match the array sizes to the array sizes in thd_ttatlas_CA_EZ.h. The reference lines must not be blank, except for the last one. Fit lines so they will be displayed at 80 columns; the pretty-print function in whereami prints with an additional 6 spaces. All reference lines are shortened to include only a single -> rather than ---->, which makes formatting a bit trickier. Each line represents a single string in an array of strings, so each line requires a comma at the end; otherwise, the string wraps into the next line.

      + make cleanest
      + make vastness
   5- Now you need to create the AFNI TT versions of these datasets. Most of that is done from directory: eomer:/Users/ziad/AFNI_Templates_Atlases/ZILLES_N27_ATLASES.

      + First edit zver in @Prep_New_CA_EZ. Then run script @Prep_New_CA_EZ, which will create TT versions of the database. You should run afni in the new version's directory and check on the results. In particular, examine the TT_* results and check for alignment issues, etc.
Also change the orig_dir variable to the location of the Anatomy path used in step 2, and the reftxt variable to the path of your source. Copy the @Shift_Volume script from the afni source to somewhere in your path, like ~/abin/.
The following environment variable must be set (in .afnirc, or with
setenv in tcsh):
AFNI_ANALYZE_ORIGINATOR = YES

Oddly, the environment variable

AFNI_ANALYZE_VIEW = orig

must also be set, because without it 3dcopy somehow assumes it should
copy to the Talairach view when the ORIGINATOR variable is also set
(despite a warning message to the contrary!!!), and a corresponding
error is displayed when the script uses 3drefit to change from +orig to
+tlrc, because no +orig dataset exists. This isn't particularly
important, because we can just set the environment variable to go
directly to Talairach in the script and assume no +orig anyway. I
modified the script to use Talairach directly.
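
For example, in tcsh these would be set with

   setenv AFNI_ANALYZE_ORIGINATOR YES
   setenv AFNI_ANALYZE_VIEW orig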


      + At this point you might want to run the script @Compare_CA_EZ, after editing the few variables at the top. This script is not meant to catch errors, but it might alert you to funkiness. In particular, watch for:
         ++ dset dimension changes or origin shifts. If that happens, that's bad news.
         ++ The anatomical N27 dset should be identical to the previous version. If that is not the case, there's a lot more work ahead because MNI<-->TLRC is based on this dataset and the TLRC surfaces are aligned to it. If N27 changes then you need to revisit directory N27, then N27_Surfaces before proceeding!
         ++ Look at the log file and diff directory created.

There is a minor bug in the order of min and max in the script. Note that one can expect minor differences where region numbers change (in MNIa_N27_CA_EZ_MPM+tlrc).

      + If all looks hunky-dory, you can now copy the new TT dsets to your abin directory for your viewing pleasure.
      + cp TT_N27_CA_EZ_MPM+tlrc.* TT_N27_CA_EZ_PMaps+tlrc* TT_N27_EZ_LR+tlrc* TT_N27_EZ_ML+tlrc* ~/abin
      + No need to copy TT_N27+tlrc* because that should not change.
   6- To distribute the atlases, run @DistArchives (after editing zdir)
from eomer:/Users/ziad/AFNI_Templates_Atlases/. An archive named
/Volumes/elrond0/var/www/html/pub/dist/tgz/CA_EZ_v1.3b.tgz (for version
1.3b) is put on AFNI's site
(http://afni.nimh.nih.gov/pub/dist/tgz/CA_EZ_v1.3b.tgz).

Update the @Create_ca_ez_tlrc.tgz script to point to the right src_path
for the atlases (/Users/dglen/AFNI_Templates_Atlases) and the right
target for distribution (Web_dir = /Volumes/elrond0/var/www/html/pub/dist/tgz),
depending on the naming of the mount point on your system.

Say No to creating new N27 datasets unless they have changed, and they probably won't.

Update via cvs the source code changes for thd_ttatlas_CA_EZ[.c,.h] and
thd_ttatlas_CA_EZ-ref.h, plus any other changes made to whereami.c to
add to the help. Update the scripts in the cvs distribution too:
@Prep_New_CA_EZ, @Compare_CA_EZ, @DistArchives, @Create_ca_ez_tlrc.tgz,
(@Create_suma_tlrc.tgz).

Create or modify README.atlas_building to include this documentation.



************* Add the gray matter files, and the Fibers to the scripts





I still need to figure out what to do with this. The fibers look like just another atlas, with each region at a single value. For now, the standard atlases are being integrated with the distribution and cvs source.




AFNI file: README.attributes
Attributes in the AFNI Dataset Header
=====================================
Each attribute is an array of values.  There are three kinds of attributes
allowed: float, int, and string (array of char).  Each attribute has a
name, which by convention is all caps.  All the attributes are read in
at once when a dataset .HEAD file is opened.  The software searches for
the attributes it wants, by name, when it needs them.  Attributes that
are not wanted by the programs are thus simply ignored.  For example,
the HISTORY_NOTE attribute is only used by functions in the thd_notes.c
source file.

--------------------
Format of Attributes
--------------------
The format of attributes is a little clunky and non-robust, but that's
the way it is for now.  The .HEAD file structure was "designed" in 1994,
and has not changed at all since then.  Here is an example of an int
attribute in the .HEAD file:

type = integer-attribute
name = ORIENT_SPECIFIC
count = 3
 3 5 1

The first line of the attribute is the "type =" line, which can take
values "integer-attribute", "float-attribute", or "string-attribute".

The second line is the "name =" line; the name that follows must not
contain any blanks.

The third line is the "count =" line; the value that follows is the
number of entries in the attribute array.

These 3 lines are read with the code below:
  char aname[THD_MAX_NAME] , atypestr[THD_MAX_NAME] ;
  int  acount ;
  fscanf( header_file ,
          " type = %s name = %s count = %d" ,
          atypestr , aname , &acount ) ;
Recall that a blank in a format matches any amount of whitespace in the
input stream; for example, "name =" and "name   =" are both acceptable
second lines in an attribute (as are a number of other bizarre things
that are too painful to elucidate).

Following the third line is the list of values for the attribute array.
For float and int attributes, these values are separated by blanks
(or other C "whitespace").  If the .HEAD file is generated by an AFNI
program, then a maximum of 5 values per line will be written.  However,
this is not required -- it is just there to make the .HEAD file easy
to edit manually.
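
Continuing the fragment above, the values themselves could then be read
with something like this sketch (not the actual AFNI source; assumes
<stdlib.h> for malloc):

  float *fl = (float *) malloc( sizeof(float) * acount ) ;
  int    ii ;
  for( ii=0 ; ii < acount ; ii++ )
    fscanf( header_file , "%f" , fl+ii ) ;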

For string attributes, the entire array of "count" characters follows
on the fourth line, right after a single opening quote ' character.
For example:

type = string-attribute
name = TYPESTRING
count = 15
'3DIM_HEAD_ANAT~

Note that the starting ' is not part of the attribute value and is not
included in the count.  Also note that ASCII NUL characters '\0' are
replaced with tilde ~ characters when the header is written.  (This is
to make it easy to edit the file manually).  They will be replaced with
NULs (not to be confused with NULL) when the attribute is read in.
If a string actually contains a tilde, then the tilde will be replaced
with an asterisk * when the attribute is written out.  However, asterisks
will NOT be replaced with tildes on input -- that is, there is no way
for an attribute string to contain a tilde.

Some of the attributes described below may contain more array entries
in the .HEAD file than are listed.  These entries are "reserves" for
future expansion.  In most cases, the expansions never happened.

---------------------------------------
Extracting Attributes in a Shell Script
---------------------------------------
Program 3dAttribute can be used to extract attributes from a dataset
.HEAD file.  For example
   3dAttribute TYPESTRING anat+orig
might produce (on stdout) the value "3DIM_HEAD_ANAT".  This could be
captured in a shell variable and used to make some decisions.  For
usage details, type the command
   3dAttribute -help
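
For instance, a hypothetical tcsh fragment (not from any AFNI script)
that branches on the dataset type:

   set ts = `3dAttribute TYPESTRING anat+orig`
   if ( "$ts" == "3DIM_HEAD_ANAT" ) echo "anatomical HEAD dataset"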

--------------------
Mandatory Attributes
--------------------
All these attributes must be present for a dataset to be recognized from
a .HEAD file.

DATASET_RANK = Two values that determine the dimensionality of the
(int)          dataset:
                [0] = Number of spatial dimensions (must be 3)
                [1] = Number of sub-bricks in the dataset
                      (in most programs, this is called "nvals")
               At one time I thought I might extend AFNI to support
               n-dimensional datasets, but as time went on, I decided
               to support the fourth dimension not by increasing the
               "rank" of a dataset, but by adding the time axis instead.
               Thus, the dataset rank is always set to 3.

DATASET_DIMENSIONS = Three values that determine the size of each
(int)                spatial axis of the dataset:
                      [0] = number of voxels along the x-axis (nx)
                      [1] = number of voxels along the y-axis (ny)
                      [2] = number of voxels along the z-axis (nz)
                     The voxel with 3-index (i,j,k) in a sub-brick
                     is located at position (i+j*nx+k*nx*ny), for
                     i=0..nx-1, j=0..ny-1, k=0..nz-1.  Each axis must
                     have at least 2 points!

TYPESTRING = One of "3DIM_HEAD_ANAT" or "3DIM_HEAD_FUNC" or
(string)            "3DIM_GEN_ANAT"  or "3DIM_GEN_FUNC".
             Determines if the dataset is of Anat or Func type (grayscale
             underlay or color overlay).  If Anat type, and if it is a
             _HEAD_ dataset in the +orig view, then Talairach markers
             might be attached to it (if it was created by to3d).

SCENE_DATA = Three integer codes describing the dataset type
(int)         [0] = view type: 0=+orig, 1=+acpc, 2=+tlrc
              [1] = func type:
                    If dataset is Anat type, then this is one of the
                    following codes:
                      #define ANAT_SPGR_TYPE   0
                      #define ANAT_FSE_TYPE    1
                      #define ANAT_EPI_TYPE    2
                      #define ANAT_MRAN_TYPE   3
                      #define ANAT_CT_TYPE     4
                      #define ANAT_SPECT_TYPE  5
                      #define ANAT_PET_TYPE    6
                      #define ANAT_MRA_TYPE    7
                      #define ANAT_BMAP_TYPE   8
                      #define ANAT_DIFF_TYPE   9
                      #define ANAT_OMRI_TYPE   10
                      #define ANAT_BUCK_TYPE   11
                    At this time, Anat codes 0..10 are treated identically
                    by all AFNI programs.  Code 11 marks the dataset as a
                    "bucket" type, which is treated differently in the
                    display; the "Define Overlay" control panel will have a
                    chooser that allows you to specify which sub-brick from
                    the bucket should be used to make the underlay image.

                    If dataset is Func type, then this is one of the
                    following codes (Please modify @statauxcode if you
                    make additions or changes here):
                      #define FUNC_FIM_TYPE   0  /* 1 value           */
                      #define FUNC_THR_TYPE   1  /* obsolete          */
                      #define FUNC_COR_TYPE   2  /* fico: correlation */
                      #define FUNC_TT_TYPE    3  /* fitt: t-statistic */
                      #define FUNC_FT_TYPE    4  /* fift: F-statistic */
                      #define FUNC_ZT_TYPE    5  /* fizt: z-score     */
                      #define FUNC_CT_TYPE    6  /* fict: Chi squared */
                      #define FUNC_BT_TYPE    7  /* fibt: Beta stat   */
                      #define FUNC_BN_TYPE    8  /* fibn: Binomial    */
                      #define FUNC_GT_TYPE    9  /* figt: Gamma       */
                      #define FUNC_PT_TYPE    10 /* fipt: Poisson     */
                      #define FUNC_BUCK_TYPE  11 /* fbuc: bucket      */
                    These types are defined more fully in README.func_types.

                    Unfortunately, the func type codes overlap for Func
                    and Anat datasets.  This means that one cannot tell
                    the contents of a dataset from a single attribute.
                    However, this bad design choice (from 1994) is now
                    enshrined in the .HEAD files of thousands of datasets,
                    so it will be hard to change.

              [2] = 0 or 1 or 2 or 3, corresponding to the TYPESTRING
                    values given above.  If this value does not match the
                    typestring value, then the dataset is malformed and
                    AFNI will reject it!

ORIENT_SPECIFIC = Three integer codes describing the spatial orientation
(int)             of the dataset axes; [0] for the x-axis, [1] for the
                  y-axis, and [2] for the z-axis.  The possible codes are:
                    #define ORI_R2L_TYPE  0  /* Right to Left         */
                    #define ORI_L2R_TYPE  1  /* Left to Right         */
                    #define ORI_P2A_TYPE  2  /* Posterior to Anterior */
                    #define ORI_A2P_TYPE  3  /* Anterior to Posterior */
                    #define ORI_I2S_TYPE  4  /* Inferior to Superior  */
                    #define ORI_S2I_TYPE  5  /* Superior to Inferior  */
                  Note that these codes must make sense (e.g., they can't
                  all be 4).  Only program to3d enforces this restriction,
                  but if you create a nonsensical dataset, then bad things
                  will happen at some point.

                  Spatial xyz-coordinates in AFNI are sometimes used in
                  dataset order, which refers to the order given here.
                  They are also sometimes used in Dicom order, in which
                  x=R-L, y=A-P, and z=I-S (R,A,I are < 0; L,P,S are > 0).
                  There are utility functions for converting dataset
                  ordered 3-vectors to and from Dicom ordered 3-vectors
                  -- see the functions in file thd_coords.c.  Distances
                  in AFNI are always encoded in millimeters.

ORIGIN = Three numbers giving the xyz-coordinates of the center of
(float)  the (0,0,0) voxel in the dataset.  The order of these numbers
         is the same as the order of the xyz-axes (cf. ORIENT_SPECIFIC).
         However, the AFNI convention is that R-L, A-P, and I-S are
         negative-to-positive.  Thus, if the y-axis is P-A (say), then
         the y-origin is likely to be positive (and the y-delta, below,
         would be negative).  These numbers are usually computed from
         the centering controls in to3d.

DELTA = Three numbers giving the (x,y,z) voxel sizes, in the same order
        as ORIENT_SPECIFIC.  That is, [0] = x-delta, [1] = y-delta, and
        [2] = z-delta.  These values may be negative; in the example
        above, where the y-axis is P-A, then y-delta would be negative.
        The center of the (i,j,k) voxel is located at xyz-coordinates
        ORIGIN[0]+i*DELTA[0], ORIGIN[1]+j*DELTA[1], ORIGIN[2]+k*DELTA[2]
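
Putting these together, here is a sketch (not code from the AFNI
library; origin, delta, nx, and ny are assumed to hold the ORIGIN,
DELTA, and DATASET_DIMENSIONS values) of where voxel (i,j,k) sits and
where its value lives:

   float x = origin[0] + i*delta[0] ;   /* dataset-order x (mm)        */
   float y = origin[1] + j*delta[1] ;   /* dataset-order y (mm)        */
   float z = origin[2] + k*delta[2] ;   /* dataset-order z (mm)        */
   int   q = i + j*nx + k*nx*ny ;       /* offset into sub-brick array */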

---------------------------------
Time-Dependent Dataset Attributes
---------------------------------
These attributes are mandatory if the .HEAD file describes a 3D+time
dataset.

TAXIS_NUMS = [0] = Number of points in time (at present, must be equal
(int)              to nvals=DATASET_RANK[1], or AFNI programs will not
                   be happy; that is, each time point can only have
                   a single numerical value per voxel).
             [1] = Number of slices with time offsets.  If zero, then
                   no slice-dependent time offsets are present (all slices
                   are presumed to be acquired at the same time).  If
                   positive, specifies the number of values to read
                   from TAXIS_OFFSETS.  Normally, this would either be 0
                   or be equal to DATASET_DIMENSIONS[2].
             [2] = Units codes for TAXIS_FLOATS[1]; one of the following
                     #define UNITS_MSEC_TYPE  77001  /* don't ask me */
                     #define UNITS_SEC_TYPE   77002  /* where these */
                     #define UNITS_HZ_TYPE    77003  /* came from! */

TAXIS_FLOATS = [0] = Time origin (in units given by TAXIS_NUMS[2]).
(float)              This is 0 in datasets created by to3d (at present).
               [1] = Time step (TR).
               [2] = Duration of acquisition.  This is 0 in datasets
                     created by to3d (at present)
               [3] = If TAXIS_NUMS[1] > 0, then this is the z-axis offset
                     for the slice-dependent time offsets.  This will
                     be equal to ORIGIN[2] in datasets created by to3d.c.
               [4] = If TAXIS_NUMS[1] > 0, then this is the z-axis step
                     for the slice-dependent time offsets.  This will
                     be equal to DELTA[2] in datasets created by to3d.c.

TAXIS_OFFSETS = If TAXIS_NUMS[1] > 0, then this array gives the time
(floats)        offsets of the slices defined by TAXIS_FLOATS[3..4].
                The time offset at
                  z = TAXIS_FLOATS[3] + k*TAXIS_FLOATS[4]
                is TAXIS_OFFSETS[k], for k=0..TAXIS_NUMS[1]-1.
                If TAXIS_NUMS[1] == 0, then this attribute is not used.

The functions in thd_timeof.c are used to compute the time for any given
voxel, taking into account the slice-dependent offsets.
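
As an illustration only (the real logic lives in thd_timeof.c), the
acquisition time of time-point m in slice k might be sketched like so,
with taxis_floats, taxis_nums, and taxis_offsets assumed to hold the
attribute values:

   float t = taxis_floats[0] + m * taxis_floats[1] ; /* origin + m*TR */
   if( taxis_nums[1] > 0 && k < taxis_nums[1] )
     t += taxis_offsets[k] ;                         /* slice offset  */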

---------------------------
Almost Mandatory Attributes
---------------------------
The following useful attributes are present in most AFNI datasets created
by AFNI package programs.  However, if they are not present, then the
function that assembles a dataset struct will get by.

IDCODE_STRING = 15 character string (plus NUL) giving a (hopefully)
(string)        unique identifier for the dataset, independent of the
                filename assigned by the user.  If this attribute is not
                present, the input routine will make one up for the
                dataset.  ID codes are used to provide links between
                datasets; see IDCODE_ANAT_PARENT for an example.
                (ID codes are generated in file thd_idcode.c.)

IDCODE_DATE = Maximum of 47 characters giving the creation date for
(string)      the dataset.  (Antedates the History Note, which contains
              the same information and more.)  Not used anywhere except
              in 3dinfo.

BYTEORDER_STRING = If this attribute is present, describes the byte-
(string)           ordering of the data in the .BRIK file.  Its value
                   must be one of the strings "LSB_FIRST" or "MSB_FIRST".
                   If this attribute is not present, AFNI will assume
                   that the brick is in the "native" order for the CPU
                   on which the program is running.  If this attribute
                   is present, and it is different from the native CPU
                   order, then short sub-bricks are 2-swapped (AB->BA)
                   and float or complex sub-bricks are 4-swapped
                   (ABCD->DCBA) when the .BRIK file is read into memory.
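
The byte swapping itself is simple; here is a minimal sketch (the
function name is made up for illustration -- this is not AFNI's actual
routine):

   /* 2-swap n shorts in place: AB -> BA */
   void swap_2bytes( int n , void *ar )
   {
     char *cp = (char *)ar , tv ;
     int   k ;
     for( k=0 ; k < 2*n ; k += 2 ){
       tv = cp[k] ; cp[k] = cp[k+1] ; cp[k+1] = tv ;
     }
   }

The 4-swap for float and complex sub-bricks is analogous, reversing
each group of 4 bytes.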

BRICK_STATS = There should be 2*nvals values here.  For the p-th
(float)       sub-brick, BRICK_STATS[2*p] is the minimum value stored
              in the brick, and BRICK_STATS[2*p+1] is the maximum value
              stored in the brick.  If the brick is scaled, then these
              values refer to the scaled values, NOT to the actual values
              stored in the .BRIK file.  Most AFNI programs create this
              attribute as they write the dataset to disk (e.g., by using
              the DSET_write macro, or by calling THD_load_statistics).
              The main function of this attribute is to provide the display
              of the dataset numerical ranges on the "Define Overlay"
              control panel.

BRICK_TYPES = There should be nvals=DATASET_RANK[1] values here.  For
(int)         the p-th sub-brick, BRICK_TYPES[p] is a code that tells
              the type of data stored in the .BRIK file for that
              sub-brick.  (Although it is possible to create a dataset
              that has varying sub-brick types, I do not recommend it.
              That is, I recommend that all BRICK_TYPE[p] values be
              the same.)  The legal types for AFNI datasets are
                0 = byte    (unsigned char; 1 byte)
                1 = short   (2 bytes, signed)
                3 = float   (4 bytes, assumed to be IEEE format)
                5 = complex (8 bytes: real+imaginary parts)
              Future versions of AFNI may support 2=int, 4=double, and
              6=rgb, or other extensions (but don't hold your breath).
              Relatively few AFNI programs support complex-valued
              datasets.  If this attribute is not present, then the
              sub-bricks will all be assumed to be shorts (which was
              the only datum type supported in AFNI 1.0).  The p-th
              sub-brick will have nx*ny*nz*sz bytes from the .BRIK file,
              where nx,ny,nz are from DATASET_DIMENSIONS and
              sz=sizeof(datum type).

BRICK_FLOAT_FACS = There should be nvals=DATASET_RANK[1] values here.  For
(float)            the p-th sub-brick, if f=BRICK_FLOAT_FACS[p] is positive,
                   then the values in the .BRIK should be scaled by f
                   to give their "true" values.  Normally, this would
                   only be used with byte or short types (to save disk
                   space), but it is legal to use f > 0 for float type
                   sub-bricks as well (although pointless and confusing).
                   If f==0, then the values are unscaled.  Possible uses
                   for f < 0 are reserved for the future.  If this
                   attribute is not present, then all brick factors are
                   taken to be 0 (i.e., no scaling).
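
In code form, the scaling rule is just this sketch (fac is assumed to
hold BRICK_FLOAT_FACS[p], and sar the short array of sub-brick p):

   float val = ( fac > 0.0f ) ? fac * sar[q]      /* scaled value   */
                              : (float) sar[q] ;  /* unscaled value */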

BRICK_LABS = These are labels for the sub-bricks, and are used in the
(string)     choosers for sub-brick display when the dataset is a
             bucket type.  This attribute should contain nvals
             sub-strings, separated by NUL characters.  If this attribute
             is not present, then the input routine will make up some
             labels of the form "#0", "#1", etc.

BRICK_STATAUX = This stores auxiliary statistical information about
(float)         sub-bricks that contain statistical parameters.
                Each unit of this array contains the following
                  iv = sub-brick index  (0..nvals-1)
                  jv = statistical code (see below)
                  nv = number of parameters that follow (may be 0)
                  and then nv more numbers.
                That is, there are nv+3 numbers for each unit of this
                array, starting at location [0].  After the first
                unit is read out (from BRICK_STATAUX[0] up to
                BRICK_STATAUX[2+BRICK_STATAUX[2]]), then the next
                one starts immediately with the next value of iv.
                jv should be one of the 9 statistical types supported
                by AFNI, and described in README.func_types, and below:
           ------------- ----------------- ------------------------------
           Type Index=jv Distribution      Auxiliary Parameters [stataux]
           ------------- ----------------- ------------------------------
           FUNC_COR_TYPE Correlation Coeff # Samples, # Fit Param, # Orts
           FUNC_TT_TYPE  Student t         Degrees-of-Freedom (DOF)
           FUNC_FT_TYPE  F ratio           Numerator DOF, Denominator DOF
           FUNC_ZT_TYPE  Standard Normal   -- none --
           FUNC_CT_TYPE  Chi-Squared       DOF
           FUNC_BT_TYPE  Incomplete Beta   Parameters "a" and "b"
           FUNC_BN_TYPE  Binomial          # Trials, Probability per trial
           FUNC_GT_TYPE  Gamma             Shape, Scale
           FUNC_PT_TYPE  Poisson           Mean
                The main function of this attribute is to let the
                "Define Overlay" threshold slider show a p-value.
                This attribute also allows various other statistical
                calculations, such as the "-1zscore" option to 3dmerge.
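
A sketch of walking the packed array (atr and natr, the attribute
values and their count, are illustrative names):

   int pos = 0 ;
   while( pos+2 < natr ){
     int iv = (int) atr[pos]   ;  /* sub-brick index       */
     int jv = (int) atr[pos+1] ;  /* statistical type code */
     int nv = (int) atr[pos+2] ;  /* number of parameters  */
     /* the parameters themselves are atr[pos+3] .. atr[pos+2+nv] */
     pos += 3 + nv ;              /* next unit starts here */
   }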

STAT_AUX = The BRICK_STATAUX attribute allows you to attach statistical
(float)    distribution information to arbitrary sub-bricks of a bucket
           dataset.  The older STAT_AUX attribute is for the Func type
           datasets of the following types:
              fico = FUNC_COR_TYPE   fitt = FUNC_TT_TYPE
              fift = FUNC_FT_TYPE    fict = FUNC_CT_TYPE
              fibt = FUNC_BT_TYPE    fibn = FUNC_BN_TYPE
              figt = FUNC_GT_TYPE    fipt = FUNC_PT_TYPE
           These parameters apply to the second sub-brick (#1) of the
           dataset.  (Datasets of these types must have exactly 2
           sub-bricks.)  The number and definition of these parameters
           is the same as the BRICK_STATAUX cases, above.

----------------
Notes Attributes
----------------
Special characters in these strings are escaped.  For example, the
newline character is stored in the header as the two character
combination "\n", but will be displayed as a newline when the Notes
are printed (e.g., in 3dinfo).  The characters that are escaped are
    '\r'   '\n'   '\"'    '\t'   '\a'    '\v'    '\b'
     CR     LF     quote   TAB    BEL     VTAB    BS
For details, see function tross_Encode_String() in file thd_notes.c.

HISTORY_NOTE = A multi-line string giving the history of the dataset.
(string)       Can be read with 3dinfo, the Notes plugin, or 3dNotes.
               Written by functions in thd_notes.c, including
                tross_Copy_History: copies dataset histories
                tross_Make_History: adds a history line from argc,argv

NOTES_COUNT = The number of auxiliary notes attached to the dataset
(int)         (from 0 to 999).

NOTE_NUMBER_001 = The first auxiliary note attached to the dataset.
(string)          Can be read/written with the Notes plugin, or 3dNotes.
                  (You have to guess what the attribute name for the
                  237th Note will be.)

-----------------------
Registration Attributes
-----------------------
Note that the MATVEC attributes are transformations of Dicom-ordered
coordinates, and so have to be permuted to transform dataset-ordered
xyz-coordinates.  The MATVEC attributes describe the transformation
of coordinates from input dataset to the output dataset in the form
   [xyz_out] = [mat] ([xyz_in]-[xyz_cen]) + [vec] + [xyz_cen]
where
   [mat]     is a 3x3 orthogonal matrix;
   [vec]     is a 3-vector;
   [xyz_in]  is the input vector;
   [xyz_cen] is the center of rotation (usually the center of the dataset);
   [xyz_out] is the output vector.
Dicom coordinate order is used for these matrices and vectors, which
means that they need to be permuted to dataset order for application.
For examples of how this is done, see 3drotate.c and 3dvolreg.c.
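
In C, applying the transformation amounts to this sketch (mat, vec,
cen, in, and out are illustrative array names, all in Dicom order):

   int r , c ;
   for( r=0 ; r < 3 ; r++ ){
     out[r] = vec[r] + cen[r] ;
     for( c=0 ; c < 3 ; c++ )
       out[r] += mat[r][c] * ( in[c] - cen[c] ) ;
   }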

TAGALIGN_MATVEC = 12 numbers giving the 3x3 matrix and 3-vector of the
(float)           transformation derived in 3dTagalign.  The matrix-vector
                  are loaded from the following elements of the attribute:
                            [ 0 1  2 ]           [  3 ]
                    [mat] = [ 4 5  6 ]   [vec] = [  7 ]
                            [ 8 9 10 ]           [ 11 ]
                  This is used by 3drotate with the -matvec_dset option,
                  and is written by 3dTagalign.
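
Equivalently, a sketch of unpacking the 12 numbers (atr is an
illustrative name for the attribute array):

   int r , c ;
   for( r=0 ; r < 3 ; r++ ){
     for( c=0 ; c < 3 ; c++ ) mat[r][c] = atr[4*r+c] ;
     vec[r] = atr[4*r+3] ;
   }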

VOLREG_MATVEC_xxxxxx = For sub-brick #xxxxxx (so a max of 999,999
(float)                sub-bricks can be used), this stores the 12 numbers
                       for the matrix-vector of the transformation from
                       3dvolreg.  This is used by the -rotparent options
                       of 3drotate and 3dvolreg, and is written into the
                       output dataset of 3dvolreg.  The value of xxxxxx
                       is printf("%06d",k) for k=0..VOLREG_ROTCOM_NUM-1.

VOLREG_ROTCOM_xxxxxx = The -rotate/-ashift options to 3drotate that are
(string)               equivalent to the above matrix-vector transformation.
                       It is not actually used anywhere, but is there for
                       reference.

VOLREG_CENTER_OLD = The xyz-coordinates (Dicom order) of the center of
(float)             the input dataset to 3dvolreg; this is written to
                    3dvolreg's output dataset, and is used by the
                    -rotparent options to 3dvolreg and 3drotate.

VOLREG_CENTER_BASE = The xyz-coordinates (Dicom order) of the center
                     of the base dataset to 3dvolreg; this is written
                     to 3dvolreg's output dataset, and is used by the
                     -rotparent options to 3dvolreg and 3drotate.

VOLREG_ROTPARENT_IDCODE = If a 3dvolreg run uses the -rotparent option,
(string)                  then this value in the header of the output
                          dataset tells which dataset was the rotparent.

VOLREG_ROTPARENT_NAME = The .HEAD filename of the -rotparent.
(string)

VOLREG_GRIDPARENT_IDCODE = Similar to the above, but for a 3dvolreg
(string)                   output dataset that was created using a
                           -gridparent option.

VOLREG_GRIDPARENT_NAME = The .HEAD filename of the -gridparent.
(string)

VOLREG_INPUT_IDCODE = In the 3dvolreg output dataset header, this
(string)              tells which dataset was the input to 3dvolreg.

VOLREG_INPUT_NAME = The .HEAD filename of the 3dvolreg input dataset.
(string)

VOLREG_BASE_IDCODE = In the 3dvolreg output dataset header, this
(string)             tells which dataset was the base for registration.

VOLREG_BASE_NAME = The .HEAD filename of the 3dvolreg base dataset.
(string)

VOLREG_ROTCOM_NUM = The single value in here tells how many sub-bricks
(int)               were registered by 3dvolreg.  (The only reason this
                    might be different than nvals is that someone might
                    later tack extra sub-bricks onto this dataset using
                    3dTcat.)  This is how many VOLREG_MATVEC_xxxxxx and
                    VOLREG_ROTCOM_xxxxxx attributes are present in the
                    dataset.

------------------------
Miscellaneous Attributes
------------------------
IDCODE_ANAT_PARENT = ID code for the "anatomy parent" of this dataset
(string)             (if it has one).

TO3D_ZPAD = 3 integers specifying how much zero-padding to3d applied
(int)       when it created the dataset (x,y,z axes).  At this time,
            only the [2] component could be nonzero.  If this attribute
            is not present, then no zero-padding was done by to3d.

------------------
Warping Attributes
------------------
IDCODE_WARP_PARENT = ID code for the "warp parent" of this dataset
(string)             (if it has one).  This will normally be a dataset
                     in the +orig view, even for datasets transformed
                     from +acpc to +tlrc.  That is, the transformation
                     chain +orig to +acpc to +tlrc is symbolic; when
                     you transform a dataset from +acpc to +tlrc, AFNI
                     catenates that transformation onto the +orig to
                     +acpc transformation and stores the result, which
                     is the direct transformation from +orig to +tlrc.

WARP_TYPE = [0] = Integer code describing the type of warp:
(int)               #define WARP_AFFINE_TYPE        0
                    #define WARP_TALAIRACH_12_TYPE  1
            [1] = No longer used (was the resampling type, but that
                  is now set separately by the user).

WARP_DATA = Data that define the transformation from the warp parent
(float)     to the current dataset.  Each basic linear transformation
            (BLT) takes 30 numbers.  For WARP_AFFINE_TYPE, there is one
            BLT per warp; for WARP_TALAIRACH_12_TYPE, there are 12 BLTs
            per warp.  Thus, for WARP_AFFINE_TYPE there should be 30
            numbers in WARP_DATA, and for WARP_TALAIRACH_12_TYPE there
            should be 360 numbers.  (WARP_AFFINE_TYPE is used for the
            +orig to +acpc transformation; WARP_TALAIRACH_12_TYPE is
            used for the +orig to +tlrc transformation - duh.)

Each BLT is defined by a struct that contains two 3x3 matrices and four
3-vectors (2*3*3+4*3 = the 30 numbers).  These values are:

 [mfor] = 3x3 forward transformation matrix    [0..8]   } range of
 [mbac] = 3x3 backward transformation matrix   [9..17]  } indexes
 [bvec] = 3-vector for forward transformation  [18..20] } in the
 [svec] = 3-vector for backward transformation [21..23] } WARP_DATA
 [bot]  } two more 3-vectors that              [24..26] } BLT
 [top]  } are described below                  [27..29] } array

(the matrices are stored in row-major order; e.g.,
                [ 0 1 2 ]
       [mfor] = [ 3 4 5 ]
                [ 6 7 8 ] -- the indices of the [mfor] matrix).

The forward transformation is  [x_map] = [mfor] [x_in]  - [bvec];
The backward transformation is [x_in]  = [mbac] [x_map] - [svec]
(which implies [svec] = -[mbac] [bvec] and [mbac] = Inverse{[mfor]}).

The forward transformation is the transformation of Dicom order
coordinates from the warp parent dataset (usually in the +orig view)
to the warped dataset (usually +acpc or +tlrc).  The backward
transformation is just the inverse of the forward transformation, and
is stored for convenience (it could be recomputed from the forward
transformation whenever it was needed, but that would be too much
like work).  The identity BLT would be stored as these 30 numbers:
      1 0 0          }
      0 1 0          } [mfor] = I
      0 0 1          }
      1 0 0          }
      0 1 0          } [mbac] = I
      0 0 1          }
      0 0 0          } [bvec] = 0
      0 0 0          } [svec] = 0
      botx boty botz } these numbers are described below,
      topx topy topz } and depend on the application.

If the transformation is WARP_TALAIRACH_12_TYPE, then each BLT only
applies to a bounded region of 3-space.  The [bot] and [top] vectors
define the limits for each BLT, in the warped [x_map] coordinates.
These values are used in the function AFNI_transform_vector() to
compute the transformation of a 3-vector between +orig and +tlrc
coordinates.  For example, to compute the transformation from +tlrc
back to +orig of a vector [x_tlrc], the code must scan all 12
[bot]..[top] regions to see which BLT to use.  Similarly, to transform
[x_orig] from +orig to +tlrc, the vector must be transformed with
each BLT and then the result tested to see if it lies within the BLT's
[bot]..[top] region.  (If a lower bound is supposed to be -infinity,
then that element of [bot] is -9999; if an upper bound is supposed to
be +infinity, then that element of [top] is +9999 -- there is an
implicit assumption that AFNI won't be applied to species with heads
more than 10 meters in size.)
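
For the +tlrc-to-+orig direction, the region scan could be sketched
like so (bot and top are illustrative per-BLT arrays holding the [bot]
and [top] vectors):

   int b ;
   for( b=0 ; b < 12 ; b++ ){                 /* find the BLT whose   */
     if( x >= bot[b][0] && x <= top[b][0] &&  /* region contains the  */
         y >= bot[b][1] && y <= top[b][1] &&  /* +tlrc point (x,y,z)  */
         z >= bot[b][2] && z <= top[b][2] ) break ;
   }
   /* ...then apply BLT #b's backward transformation [mbac],[svec] */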

For the +orig to +acpc transformation (of WARP_AFFINE_TYPE), the [bot]
and [top] vectors store the bounding box of the transformed dataset.
However, this fact isn't used much (only when the new dataset is created
when the user presses the "Define Markers->Transform Data" button, which
is when the +acpc.HEAD file would be created).  If you were to manually
edit the +acpc.HEAD file and change [bot] and [top], nothing would happen.
This is not true for a +tlrc.HEAD file, since the [bot] and [top] vectors
actually mean something for WARP_TALAIRACH_12_TYPE.

----------------------------
Talairach Markers Attributes
----------------------------
These are used to define the transformations from +orig to +acpc
coordinates, and from +acpc to +tlrc.  If they are present, then opening
the "Define Markers" panel in AFNI will show a list of the markers and
let you edit their locations.  MARKSET_ALIGN (+orig to +acpc) markers are
attached to 3DIM_HEAD_ANAT +orig datasets created by to3d (if there is
no time axis).  An empty set of such markers can also be attached to such
datasets using the "-markers" option to 3drefit.  (The label and help
strings for the 2 types of marker sets are defined in 3ddata.h.)

MARKS_XYZ = 30 values giving the xyz-coordinates (Dicom order) of
(float)     the markers for this dataset.  (A maximum of 10 markers
            can be defined for a dataset.)  MARKS_XYZ[0] = x0,
            MARKS_XYZ[1] = y0, MARKS_XYZ[2] = z0, MARKS_XYZ[3] = x1,
            etc.  If a marker's xyz-coordinates are outside the
            bounding box of the dataset, it is considered not to
            be set.  For this purpose, the bounding box of the dataset
            extends to the edges of the outermost voxels (not just their
            centers).

MARKS_LAB = 200 characters giving the labels for the markers (20 chars
(string)    per marker, EXACTLY, including the NULs).  A marker whose
            string is empty (all NUL characters) will not be defined
            or shown by AFNI.

MARKS_HELP = 2560 characters giving the help strings for the markers
(string)     (256 chars per marker, EXACTLY, including the NULs).

MARKS_FLAGS = [0] = Type of markers; one of the following:
(int)                 #define MARKSET_ALIGN    1 /* +orig to +acpc */
                      #define MARKSET_BOUNDING 2 /* +acpc to +tlrc */
              [1] = This should always be 1 (it is an "action code",
                    but the only action ever defined was warping).

--------------------------------
Attributes for User-Defined Tags
--------------------------------
These tags are defined and set by plug_tag.c; their original purpose was
to aid in 3D alignment by having the user mark homologous points that
would then be aligned with 3dTagalign.  This application has pretty
much been superseded with the advent of "3dvolreg -twopass" (but you
never know, do you?).

TAGSET_NUM = [0] = ntag = number of tags defined in the dataset (max=100)
(int)        [1] = nfper = number of floats stored per tag (should be 5)

TAGSET_FLOATS = ntag*nfper values; for tag #i:
(float)          [nfper*i+0] = x-coordinate (Dicom order)
                 [nfper*i+1] = y-coordinate (Dicom order)
                 [nfper*i+2] = z-coordinate (Dicom order)
                 [nfper*i+3] = tag numerical value
                 [nfper*i+4] = sub-brick index of tag (if >= 0),
                               or "not set" flag (if < 0)

TAGSET_LABELS = ntag sub-strings (separated by NULs) with the labels
(string)        for each tag.

-------------------------
Nearly Useless Attributes
-------------------------
These attributes are leftovers from the early days of AFNI, but never
became useful for anything.

LABEL_1 = A short label describing the dataset.
(string)

LABEL_2 = Another short label describing the dataset.
(string)

DATASET_NAME = A longer name describing the dataset contents.
(string)

DATASET_KEYWORDS = List of keywords for this dataset.  By convention,
(string)           keywords are separated by " ; ".  (However, no
                   program at this time uses the keywords or this
                   convention!)

BRICK_KEYWORDS = List of keywords for each sub-brick of the dataset.
(string)         Should contain nvals sub-strings (separated by NULs).
                 Again, by convention, separate keywords for the same
                 sub-brick would be separated by " ; " within the
                 sub-brick's keyword string.

--------------------------
Programming Considerations
--------------------------
When a new dataset is created, it is usually made with one of the library
functions EDIT_empty_copy() or EDIT_full_copy().  These make a copy of a
dataset struct in memory.  They do NOT preserve attributes.  Various struct
elements will be translated to attributes when the dataset is written to
disk (see thd_writedset.c), but other attributes in the "parent" dataset
are not automatically copied.  This means that if you attach some extra
information to a dataset in a plugin using an attribute, say, and write
it out using the DSET_write_header() macro, that information will not be
preserved in "descendants" of that dataset.  For example, if you did
  3dcalc -a old+orig -expr "a" -prefix new
then any plugin-defined attributes attached to old+orig.HEAD will not be
reproduced in new+orig.HEAD.  (In fact, this would be a good way to see
exactly what attributes are generated by AFNI.)

==============================================================================
                  Accessing Dataset Elements in a C Program 
==============================================================================
Suppose you know the name of a dataset, and want to read some information
about it in your C program.  Parsing the dataset .HEAD file, as described
above, would be tedious and subject to change.  The "libmri.a" library
(header file "mrilib.h") compiled with AFNI has functions that will do
this stuff for you.  The code to open a dataset file, read all its header
information, and return an "empty" (unpopulated with volumetric data)
dataset is like so:

   THD_3dim_dataset *dset ;
   dset = THD_open_dataset( "fred+orig.HEAD" ) ;
   if( dset == NULL ){ fprintf(stderr,"My bad.\n"); exit(1); }

At this point, "dset" points to the complicated and ever-growing struct
type that comprises an AFNI dataset (defined in "3ddata.h", which is
included by "mrilib.h").  Rather than access the elements of this struct
yourself, there is a large number of macros to do this for you.  Some of
these are documented below.

Macros to Query the Status of a Dataset
---------------------------------------
These macros return 1 if the dataset satisfies some condition, and return
0 if it doesn't.  Here, the input "ds" is of type "THD_3dim_dataset *":

DSET_ONDISK(ds)      returns 1 if the dataset actually has data on disk
DSET_IS_BRIK(ds)     returns 1 if the dataset actually has a .BRIK file
DSET_IS_MINC(ds)     returns 1 if the dataset is from a MINC file
ISFUNC(ds)           returns 1 if the dataset is a functional type
ISANAT(ds)           returns 1 if the dataset is an anatomical type
ISFUNCBUCKET(ds)     returns 1 if the dataset is a functional bucket
ISANATBUCKET(ds)     returns 1 if the dataset is an anatomical bucket
ISBUCKET(ds)         returns 1 if the dataset is either type of bucket
DSET_COMPRESSED(ds)  returns 1 if the dataset .BRIK file is compressed
DSET_LOADED(ds)      returns 1 if the dataset .BRIK file has been loaded
                     into memory via macro DSET_load()

Macros to Query Information About Dataset Geometry
--------------------------------------------------
DSET_NVALS(ds)  returns the number of sub-bricks in the dataset
DSET_NVOX(ds)   returns the number of voxels in one sub-brick
DSET_NX(ds)     returns the x-axis grid array dimension
DSET_NY(ds)     returns the y-axis grid array dimension
DSET_NZ(ds)     returns the z-axis grid array dimension
DSET_DX(ds)     returns the x-axis grid spacing (in mm)
DSET_DY(ds)     returns the y-axis grid spacing (in mm)
DSET_DZ(ds)     returns the z-axis grid spacing (in mm)
DSET_XORG(ds)   returns the x-axis grid origin (in mm)
DSET_YORG(ds)   returns the y-axis grid origin (in mm)
DSET_ZORG(ds)   returns the z-axis grid origin (in mm)

Along the x-axis, voxel index #i is at x = DSET_XORG(ds)+i*DSET_DX(ds),
for i = 0 .. DSET_NX(ds)-1.  Similar remarks apply to the y- and z-axes.
Note that DSET_DX(ds) (etc.) may be negative.

DSET_CUBICAL(ds) returns 1 if the dataset voxels are cubical,
                 returns 0 if they are not

The following macros may be useful for converting from 1D indexes (q)
into the sub-brick arrays to 3D spatially relevant indexes (i,j,k):

DSET_index_to_ix(ds,q) returns the value of i that corresponds to q
DSET_index_to_jy(ds,q) returns the value of j that corresponds to q
DSET_index_to_kz(ds,q) returns the value of k that corresponds to q

DSET_ixyz_to_index(ds,i,j,k) returns the q that corresponds to (i,j,k)
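
For example, this round trip should hold (a sketch using the macros
above):

   int i  = DSET_index_to_ix(dset,q) ;
   int j  = DSET_index_to_jy(dset,q) ;
   int k  = DSET_index_to_kz(dset,q) ;
   int q2 = DSET_ixyz_to_index(dset,i,j,k) ;   /* q2 == q */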

Macros to Query Information about the Dataset Time Axis
-------------------------------------------------------
DSET_TIMESTEP(ds)  returns the TR; if 0 is returned, there is no time axis
DSET_NUM_TIMES(ds) returns the number of points along the time axis;
                   if 1 is returned, there is no time axis

Macros to Query Information About Dataset Sub-Brick Contents
------------------------------------------------------------
DSET_BRICK_TYPE(ds,i) returns a code indicating the type of data stored
                      in the i-th sub-brick of the dataset; the type
                      codes are defined in "mrilib.h" (e.g., MRI_short)

DSET_BRICK_FACTOR(ds,i) returns the float scale factor for the data in
                        the i-th sub-brick of the dataset; if 0.0 is
                        returned, then don't scale this data, otherwise
                        each value should be scaled by this factor before
                        being used

DSET_BRICK_BYTES(ds,i) returns the number of bytes used to store the
                       data in the i-th sub-brick of the dataset

DSET_BRICK_LABEL(ds,i) returns a pointer to the string label for the
                       i-th sub-brick of the dataset

DSET_BRICK_STATCODE(ds,i) returns an integer code for the type of statistic
                          stored in the i-th sub-brick of the dataset
                          (e.g., FUNC_FT_TYPE for an F-test statistic);
                          returns -1 if this isn't a statistic sub-brick

DSET_BRICK_STATAUX(ds,i) returns a pointer to a float array holding the
                         auxiliary statistical parameters for the i-th
                         sub-brick of the dataset; returns NULL if this
                         isn't a statistic sub-brick

DSET_BRICK_STATPAR(ds,i,j) returns the float value of the j-th auxiliary
                           statistical parameter of the i-th sub-brick of
                           the dataset; returns 0.0 if this isn't a
                           statistic sub-brick

DSET_BRICK_ARRAY(ds,i) returns a pointer to the data array for the i-th
                       sub-brick of the dataset; returns NULL if the
                       dataset .BRIK wasn't loaded into memory yet
                       via the macro DSET_load()
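
Putting several of these macros together, here is a sketch of reading
one (possibly scaled) value from short sub-brick iv at voxel index q:

   DSET_load(dset) ;                       /* bring .BRIK into memory */
   if( DSET_BRICK_TYPE(dset,iv) == MRI_short ){
     short *sar = (short *) DSET_BRICK_ARRAY(dset,iv) ;
     float  fac = DSET_BRICK_FACTOR(dset,iv) ;
     float  val = (fac > 0.0f) ? fac * sar[q] : (float) sar[q] ;
   }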

Macros to Query Information about Dataset Filenames (etc.)
----------------------------------------------------------
DSET_PREFIX(ds)       returns a pointer to the dataset's prefix string
DSET_FILECODE(ds)     returns a pointer to the dataset's prefix+view string
DSET_HEADNAME(ds)     returns a pointer to the dataset's .HEAD filename string
DSET_BRIKNAME(ds)     returns a pointer to the dataset's .BRIK filename string
DSET_DIRNAME(ds)      returns a pointer to the dataset's directory name string
DSET_IDCODE(ds)->str  returns a pointer to the dataset's unique ID code string
DSET_IDCODE(ds)->date returns a pointer to the dataset's date of creation
EQUIV_DSETS(ds1,ds2)  returns 1 if the two datasets have same ID code string

Macros to Do Something with the Dataset
---------------------------------------
DSET_load(ds)    reads the dataset .BRIK file into memory (if it is already
                 loaded, it does nothing)

DSET_unload(ds)  purges the dataset sub-brick arrays from memory (but the
                 dataset struct itself is there, ready to be reloaded)

DSET_delete(ds)  purges the dataset sub-brick arrays from memory, then
                 destroys the dataset struct itself as well

DSET_mallocize(ds) forces the memory for the dataset to be allocated with
                   malloc(), rather than possibly allowing mmap(); this
                   macro should be used before DSET_load(); you CANNOT write
                   into a mmap()-ed dataset's arrays, so if you are altering
                   a dataset in-place, it must be mallocize-d!

DSET_write(ds)   writes a dataset (.HEAD and .BRIK) to disk; AFNI can't write
                 MINC formatted datasets to disk, so don't try

Important Dataset Fields without Macros
---------------------------------------
ds->daxes->xxorient  gives the orientation of the x-axis in space; this will
                     be one of the following int codes:
                       #define ORI_R2L_TYPE  0  /* Right-to-Left */
                       #define ORI_L2R_TYPE  1  /* Left-to-Right */
                       #define ORI_P2A_TYPE  2  /* Posterior-to-Anterior */
                       #define ORI_A2P_TYPE  3  /* Anterior-to-Posterior */
                       #define ORI_I2S_TYPE  4  /* Inferior-to-Superior */
                       #define ORI_S2I_TYPE  5  /* Superior-to-Inferior */

ds->daxes->yyorient  gives the orientation of the y-axis in space
ds->daxes->zzorient  gives the orientation of the z-axis in space

Functions to Access Attributes
------------------------------
Most attributes are loaded into dataset struct fields when a dataset is
opened with THD_open_dataset().  To access the attributes directly, you
can use the following functions:

ATR_float  *afl = THD_find_float_atr ( dset->dblk , "attribute_name" ) ;
ATR_int    *ain = THD_find_int_atr   ( dset->dblk , "attribute_name" ) ;
ATR_string *ast = THD_find_string_atr( dset->dblk , "attribute_name" ) ;

The ATR_ structs are typedef-ed in 3ddata.h (included by mrilib.h).
Cut directly from the living code:

typedef struct {
      int     type ;  /*!< should be ATR_FLOAT_TYPE */
      char *  name ;  /*!< name of attribute, read from HEAD file */
      int     nfl ;   /*!< number of floats stored here */
      float * fl ;    /*!< array of floats stored here */
} ATR_float ;

You can access the attribute values with afl->fl[i], for i=0..afl->nfl-1.
This functionality is used in 3dvolreg.c, for example, to access the
attributes whose names start with "VOLREG_".
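
For example, a sketch of pulling out the VOLREG_CENTER_OLD attribute
(documented earlier in this file):

   ATR_float *atr = THD_find_float_atr( dset->dblk , "VOLREG_CENTER_OLD" ) ;
   if( atr != NULL && atr->nfl >= 3 )
     printf("old center = %g %g %g\n", atr->fl[0], atr->fl[1], atr->fl[2]) ;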

====================================
Robert W Cox, PhD
National Institute of Mental Health
====================================



AFNI file: README.bzip2
The following is the README, man page, and LICENSE files for the bzip2
utility, which is included in the AFNI package.  The home page for
bzip2 is http://www.muraroa.demon.co.uk/, where the entire bzip2
distribution can be found.

This program is included to allow compressed dataset .BRIK files to be
used with AFNI.  See the file README.compression for more information.
Note that bzip2 usually compresses more than gzip or compress, but is
much slower.
=========================================================================
GREETINGS!

   This is the README for bzip2, my block-sorting file compressor,
   version 0.1.  

   bzip2 is distributed under the GNU General Public License version 2;
   for details, see the file LICENSE.  Pointers to the algorithms used
   are in ALGORITHMS.  Instructions for use are in bzip2.1.preformatted.

   Please read all of this file carefully.

HOW TO BUILD

   -- for UNIX:

        Type `make'.     (tough, huh? :-)

        This creates the binaries "bzip2" and "bunzip2",
        the latter being a symbolic link to "bzip2".

        It also runs four compress-decompress tests to make sure
        things are working properly.  If all goes well, you should be up &
        running.  Please be sure to read the output from `make'
        just to be sure that the tests went ok.

        To install bzip2 properly:

           -- Copy the binary "bzip2" to a publicly visible place,
              possibly /usr/bin, /usr/common/bin or /usr/local/bin.

           -- In that directory, make "bunzip2" be a symbolic link
              to "bzip2".

           -- Copy the manual page, bzip2.1, to the relevant place.
              Probably the right place is /usr/man/man1/.
   
   -- for Windows 95 and NT: 

        For a start, do you *really* want to recompile bzip2?  
        The standard distribution includes a pre-compiled version
        for Windows 95 and NT, `bzip2.exe'.

        This executable was created with Jacob Navia's excellent
        port to Win32 of Chris Fraser & David Hanson's excellent
        ANSI C compiler, "lcc".  You can get to it at the pages
        of the CS department of Princeton University, 
        www.cs.princeton.edu.  
        I have not tried to compile this version of bzip2 with
        a commercial C compiler such as MS Visual C, as I don't
        have one available.

        Note that lcc is designed primarily to be portable and
        fast.  Code quality is a secondary aim, so bzip2.exe
        runs perhaps 40% slower than it could if compiled with
        a good optimising compiler.

        I compiled a previous version of bzip (0.21) with Borland
        C 5.0, which worked fine, and with MS VC++ 2.0, which
        didn't.  Here is a comment from the README for bzip-0.21.

           MS VC++ 2.0's optimising compiler has a bug which, at 
           maximum optimisation, gives an executable which produces 
           garbage compressed files.  Proceed with caution. 
           I do not know whether or not this happens with later 
           versions of VC++.

           Edit the defines starting at line 86 of bzip.c to 
           select your platform/compiler combination, and then compile.
           Then check that the resulting executable (assumed to be 
           called bzip.exe) works correctly, using the SELFTEST.BAT file.  
           Bearing in mind the previous paragraph, the self-test is
           important.

        Note that the defines which bzip-0.21 had, to support 
        compilation with VC 2.0 and BC 5.0, are gone.  Windows
        is not my preferred operating system, and I am, for the
        moment, content with the modestly fast executable created
        by lcc-win32.

   A manual page is supplied, unformatted (bzip2.1),
   preformatted (bzip2.1.preformatted), and preformatted
   and sanitised for MS-DOS (bzip2.txt).

COMPILATION NOTES

   bzip2 should work on any 32 or 64-bit machine.  It is known to work
   [meaning: it has compiled and passed self-tests] on the 
   following platform-os combinations:

      Intel i386/i486        running Linux 2.0.21
      Sun Sparcs (various)   running SunOS 4.1.4 and Solaris 2.5
      Intel i386/i486        running Windows 95 and NT
      DEC Alpha              running Digital Unix 4.0

   Following the release of bzip-0.21, many people mailed me
   from around the world to say they had made it work on all sorts
   of weird and wonderful machines.  Chances are, if you have
   a reasonable ANSI C compiler and a 32-bit machine, you can
   get it to work.

   The #defines starting at around line 82 of bzip2.c supply some
   degree of platform-independence.  If you configure bzip2 for some
   new far-out platform which is not covered by the existing definitions,
   please send me the relevant definitions.

   I recommend GNU C for compilation.  The code is standard ANSI C,
   except for the Unix-specific file handling, so any ANSI C compiler
   should work.  Note however that the many routines marked INLINE
   should be inlined by your compiler, else performance will be very
   poor.  Asking your compiler to unroll loops gives some
   small improvement too; for gcc, the relevant flag is
   -funroll-loops.

   On 386/486 machines, I'd recommend giving gcc the
   -fomit-frame-pointer flag; this liberates another register for
   allocation, which measurably improves performance.

   I used the abovementioned lcc compiler to develop bzip2.
   I would highly recommend this compiler for day-to-day development;
   it is fast, reliable, lightweight, has an excellent profiler,
   and is generally excellent.  And it's fun to retarget, if you're
   into that kind of thing.

   If you compile bzip2 on a new platform or with a new compiler,
   please be sure to run the four compress-decompress tests, either
   using the Makefile, or with the test.bat (MSDOS) or test.cmd (OS/2)
   files.  Some compilers have been seen to introduce subtle bugs
   when optimising, so this check is important.  Ideally you should
   then go on to test bzip2 on a file several megabytes or even
   tens of megabytes long, just to be 110% sure.  ``Professional
   programmers are paranoid programmers.'' (anon).

VALIDATION

   Correct operation, in the sense that a compressed file can always be
   decompressed to reproduce the original, is obviously of paramount
   importance.  To validate bzip2, I used a modified version of 
   Mark Nelson's churn program.  Churn is an automated test driver
   which recursively traverses a directory structure, using bzip2 to
   compress and then decompress each file it encounters, and checking
   that the decompressed data is the same as the original.  As test 
   material, I used several runs over several filesystems of differing
   sizes.

   One set of tests was done on my base Linux filesystem,
   410 megabytes in 23,000 files.  There were several runs over
   this filesystem, in various configurations designed to break bzip2.
   That filesystem also contained some specially constructed test
   files designed to exercise boundary cases in the code.
   This included files of zero length, various long, highly repetitive 
   files, and some files which generate blocks with all values the same.

   The other set of tests was done just with the "normal" configuration,
   but on a much larger quantity of data.

      Tests are:

         Linux FS, 410M, 23000 files

         As above, with --repetitive-fast

         As above, with -1

         Low level disk image of a disk containing
            Windows NT4.0; 420M in a single huge file

         Linux distribution, incl Slackware, 
            all GNU sources.   1900M in 2300 files.

         Approx ~100M compiler sources and related
            programming tools, running under Purify.

         About 500M of data in 120 files of around
            4 M each.  This is raw data from a 
            biomagnetometer (SQUID-based thing).

      Overall, the total volume of test data was about
         3300 megabytes in 25000 files.

   The distribution does four tests after building bzip2.  These tests
   include test decompressions of pre-supplied compressed files, so
   they not only check that bzip2 works correctly on the machine it
   was built on, but also that it can decompress files compressed on
   a different machine.  This guards against unforeseen
   interoperability problems.


Please read and be aware of the following:

WARNING:

   This program (attempts to) compress data by performing several
   non-trivial transformations on it.  Unless you are 100% familiar
   with *all* the algorithms contained herein, and with the
   consequences of modifying them, you should NOT meddle with the
   compression or decompression machinery.  Incorrect changes can and
   very likely *will* lead to disastrous loss of data.


DISCLAIMER:

   I TAKE NO RESPONSIBILITY FOR ANY LOSS OF DATA ARISING FROM THE
   USE OF THIS PROGRAM, HOWSOEVER CAUSED.

   Every compression of a file implies an assumption that the
   compressed file can be decompressed to reproduce the original.
   Great efforts in design, coding and testing have been made to
   ensure that this program works correctly.  However, the complexity
   of the algorithms, and, in particular, the presence of various
   special cases in the code which occur with very low but non-zero
   probability make it impossible to rule out the possibility of bugs
   remaining in the program.  DO NOT COMPRESS ANY DATA WITH THIS
   PROGRAM UNLESS YOU ARE PREPARED TO ACCEPT THE POSSIBILITY, HOWEVER
   SMALL, THAT THE DATA WILL NOT BE RECOVERABLE.

   That is not to say this program is inherently unreliable.  Indeed,
   I very much hope the opposite is true.  bzip2 has been carefully
   constructed and extensively tested.


PATENTS:

   To the best of my knowledge, bzip2 does not use any patented
   algorithms.  However, I do not have the resources available to
   carry out a full patent search.  Therefore I cannot give any
   guarantee of the above statement.

End of legalities.


I hope you find bzip2 useful.  Feel free to contact me at
   jseward@acm.org
if you have any suggestions or queries.  Many people mailed me with
comments, suggestions and patches after the releases of 0.15 and 0.21, 
and the changes in bzip2 are largely a result of this feedback.
I thank you for your comments.

Julian Seward

Manchester, UK
18 July 1996 (version 0.15)
25 August 1996 (version 0.21)

Guildford, Surrey, UK
7 August 1997 (bzip2, version 0.1)
29 August 1997 (bzip2, version 0.1pl2)
=======================================================================



bzip2(1)                                                 bzip2(1)


NAME
       bzip2 - a block-sorting file compressor, v0.1


SYNOPSIS
       bzip2 [ -cdfkstvVL123456789 ] [ filenames ...  ]

DESCRIPTION
       Bzip2  compresses  files  using the Burrows-Wheeler block-
       sorting text compression algorithm,  and  Huffman  coding.
       Compression  is  generally  considerably  better than that
       achieved by more conventional LZ77/LZ78-based compressors,
       and  approaches  the performance of the PPM family of sta-
       tistical compressors.

       The command-line options are deliberately very similar  to
       those of GNU Gzip, but they are not identical.

       Bzip2  expects  a list of file names to accompany the com-
       mand-line flags.  Each file is replaced  by  a  compressed
       version  of  itself,  with  the  name "originalname.bz2".
       Each compressed file has the same  modification  date  and
       permissions  as  the corresponding original, so that these
       properties can  be  correctly  restored  at  decompression
       time.  File name handling is naive in the sense that there
       is no mechanism for preserving original file  names,  per-
       missions  and  dates  in filesystems which lack these con-
       cepts, or have serious file name length restrictions, such
       as MS-DOS.

       Bzip2  and  bunzip2  will not overwrite existing files; if
       you want this to happen, you should delete them first.

       If no file names  are  specified,  bzip2  compresses  from
       standard  input  to  standard output.  In this case, bzip2
       will decline to write compressed output to a terminal,  as
       this  would  be  entirely  incomprehensible  and therefore
       pointless.

       Bunzip2 (or bzip2 -d ) decompresses and restores all spec-
       ified files whose names end in ".bz2".  Files without this
       suffix are ignored.  Again, supplying no filenames  causes
       decompression from standard input to standard output.

       You  can also compress or decompress files to the standard
       output by giving the -c flag.  You can decompress multiple
       files  like  this, but you may only compress a single file
       this way, since it would otherwise be difficult  to  sepa-
       rate  out  the  compressed representations of the original
       files.

       Compression is always performed, even  if  the  compressed
       file  is slightly larger than the original.  Files of less
       than about one hundred bytes tend to get larger, since the
       compression  mechanism  has  a  constant  overhead  in the
       region of 50 bytes.  Random data (including the output  of
       most  file  compressors)  is  coded at about 8.05 bits per
       byte, giving an expansion of around 0.5%.

       As a self-check for your  protection,  bzip2  uses  32-bit
       CRCs  to make sure that the decompressed version of a file
       is identical to the original.  This guards against corrup-
       tion  of  the compressed data, and against undetected bugs
       in bzip2 (hopefully very unlikely).  The chances of data
       corruption going undetected are microscopic, about one
       chance in four billion for each file processed.  Be aware,
       though, that the check occurs upon decompression, so it can
       only tell you that something is wrong.  It can't help you
       recover the original uncompressed data.  You can
       use bzip2recover to  try  to  recover  data  from  damaged
       files.

       Return  values:  0  for a normal exit, 1 for environmental
       problems (file not found, invalid flags, I/O errors,  &c),
       2 to indicate a corrupt compressed file, 3 for an internal
       consistency error (eg, bug) which caused bzip2 to panic.


MEMORY MANAGEMENT
       Bzip2 compresses large files in blocks.   The  block  size
       affects  both  the  compression  ratio  achieved,  and the
       amount of memory needed both for  compression  and  decom-
       pression.   The flags -1 through -9 specify the block size
       to be 100,000 bytes through 900,000  bytes  (the  default)
       respectively.   At decompression-time, the block size used
       for compression is read from the header of the  compressed
       file, and bunzip2 then allocates itself just enough memory
       to decompress the file.  Since block sizes are  stored  in
       compressed  files,  it follows that the flags -1 to -9 are
       irrelevant to and so ignored during  decompression.   Com-
       pression  and decompression requirements, in bytes, can be
       estimated as:

             Compression:   400k + ( 7 x block size )

             Decompression: 100k + ( 5 x block size ), or
                            100k + ( 2.5 x block size )
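
       For example, with the default block size of 900k:

             Compression:   400k + 7 x 900k   = 6700 kbytes
             Decompression: 100k + 5 x 900k   = 4600 kbytes, or
                            100k + 2.5 x 900k = 2350 kbytes (with -s)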

       Larger  block  sizes  give  rapidly  diminishing  marginal
       returns;  most of the compression comes from the first two
       or three hundred k of block size, a fact worth bearing  in
       mind  when  using  bzip2  on  small  machines.  It is also
       important to  appreciate  that  the  decompression  memory
       requirement  is  set  at compression-time by the choice of
       block size.

       For files compressed with the  default  900k  block  size,
       bunzip2  will require about 4600 kbytes to decompress.  To
       support decompression of any file on a 4 megabyte machine,
       bunzip2  has  an  option to decompress using approximately
       half this amount of memory, about 2300 kbytes.  Decompres-
       sion  speed  is also halved, so you should use this option
       only where necessary.  The relevant flag is -s.

       In general, try and use the largest block size memory con-
       straints  allow,  since  that  maximises  the  compression
       achieved.  Compression and decompression speed are  virtu-
       ally unaffected by block size.

       Another  significant point applies to files which fit in a
       single block -- that  means  most  files  you'd  encounter
       using  a  large  block  size.   The  amount of real memory
       touched is proportional to the size of the file, since the
       file  is smaller than a block.  For example, compressing a
       file 20,000 bytes long with the flag  -9  will  cause  the
       compressor  to  allocate  around 6700k of memory, but only
       touch 400k + 20000 * 7 = 540 kbytes of it.  Similarly, the
       decompressor  will  allocate  4600k  but only touch 100k +
       20000 * 5 = 200 kbytes.

       Here is a table which summarises the maximum memory  usage
       for  different  block  sizes.   Also recorded is the total
       compressed size for 14 files of the Calgary Text  Compres-
       sion  Corpus totalling 3,141,622 bytes.  This column gives
       some feel for how  compression  varies  with  block  size.
       These  figures  tend to understate the advantage of larger
       block sizes for larger files, since the  Corpus  is  domi-
       nated by smaller files.

                  Compress   Decompress   Decompress   Corpus
           Flag     usage      usage       -s usage     Size

            -1      1100k       600k         350k      914704
            -2      1800k      1100k         600k      877703
            -3      2500k      1600k         850k      860338
            -4      3200k      2100k        1100k      846899
            -5      3900k      2600k        1350k      845160
            -6      4600k      3100k        1600k      838626
            -7      5400k      3600k        1850k      834096
            -8      6000k      4100k        2100k      828642
            -9      6700k      4600k        2350k      828642


OPTIONS
       -c --stdout
              Compress or decompress to standard output.  -c will
              decompress multiple files to stdout, but will  only
              compress a single file to stdout.

       -d --decompress
              Force decompression.  Bzip2 and bunzip2 are really
              the same program; the decision whether to compress
              or decompress is made on the basis of which name is
              used.  This flag overrides that mechanism, and
              forces bzip2 to decompress.

       -f --compress
              The complement to -d: forces compression, regardless
              of the invocation name.

       -t --test
              Check integrity of the specified file(s), but don't
              decompress  them.   This  really  performs  a trial
              decompression and throws away the result, using the
              low-memory decompression algorithm (see -s).

       -k --keep
              Keep  (don't delete) input files during compression
              or decompression.

       -s --small
              Reduce  memory  usage,  both  for  compression  and
              decompression.  Files are decompressed using a mod-
              ified algorithm which only requires 2.5  bytes  per
              block  byte.   This  means  any  file can be decom-
              pressed in 2300k of memory,  albeit  somewhat  more
              slowly than usual.

              During  compression,  -s  selects  a  block size of
              200k, which limits memory use to  around  the  same
              figure,  at  the expense of your compression ratio.
              In short, if your  machine  is  low  on  memory  (8
              megabytes  or  less),  use  -s for everything.  See
              MEMORY MANAGEMENT above.

       -v --verbose
              Verbose mode -- show the compression ratio for each
              file  processed.   Further  -v's  increase the ver-
              bosity level, spewing out lots of information which
              is primarily of interest for diagnostic purposes.

       -L --license
              Display  the  software  version,  license terms and
              conditions.

       -V --version
              Same as -L.

       -1 to -9
              Set the block size to 100k, 200k ... 900k when
              compressing.  Has no effect when decompressing.
              See MEMORY MANAGEMENT above.

       --repetitive-fast
              bzip2 injects some small  pseudo-random  variations
              into  very  repetitive  blocks  to limit worst-case
              performance during compression.   If  sorting  runs
              into  difficulties,  the  block  is randomised, and
              sorting is restarted.  Very roughly, bzip2 persists
              for  three  times  as  long as a well-behaved input
              would take before resorting to randomisation.  This
              flag makes it give up much sooner.

       --repetitive-best
              Opposite  of  --repetitive-fast;  try  a lot harder
              before resorting to randomisation.


RECOVERING DATA FROM DAMAGED FILES
       bzip2 compresses files in blocks, usually 900kbytes  long.
       Each block is handled independently.  If a media or trans-
       mission error causes a multi-block  .bz2  file  to  become
       damaged,  it  may  be  possible  to  recover data from the
       undamaged blocks in the file.

       The compressed representation of each block  is  delimited
       by  a  48-bit pattern, which makes it possible to find the
       block boundaries with reasonable  certainty.   Each  block
       also  carries its own 32-bit CRC, so damaged blocks can be
       distinguished from undamaged ones.

       bzip2recover is a  simple  program  whose  purpose  is  to
       search  for blocks in .bz2 files, and write each block out
       into its own .bz2 file.  You can then use bzip2 -t to test
       the integrity of the resulting files, and decompress those
       which are undamaged.

       bzip2recover takes a single argument, the name of the dam-
       aged file, and writes a number of files "rec0001file.bz2",
       "rec0002file.bz2", etc, containing the  extracted  blocks.
       The output filenames are designed so that the use of wild-
       cards in subsequent processing -- for example, "bzip2  -dc
       rec*file.bz2  >  recovereddata" -- lists the files in the
       "right" order.

       bzip2recover should be of most use dealing with large .bz2
       files,  as  these will contain many blocks.  It is clearly
       futile to use it on damaged single-block  files,  since  a
       damaged  block  cannot  be recovered.  If you wish to min-
       imise any potential data loss through media  or  transmis-
       sion errors, you might consider compressing with a smaller
       block size.


PERFORMANCE NOTES
       The sorting phase of compression gathers together  similar
       strings  in  the  file.  Because of this, files containing
       very long runs of  repeated  symbols,  like  "aabaabaabaab
       ..."   (repeated   several  hundred  times)  may  compress
       extraordinarily slowly.  You can use the -vvvvv option  to
       monitor progress in great detail, if you want.  Decompres-
       sion speed is unaffected.

       Such pathological cases seem rare in  practice,  appearing
       mostly in artificially-constructed test files, and in low-
       level disk images.  It may be inadvisable to use bzip2  to
       compress  the  latter.   If you do get a file which causes
       severe slowness in compression, try making the block  size
       as small as possible, with flag -1.
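
       If you want to observe this behaviour for yourself, a few
       lines of C suffice to construct such a pathological input
       (the file name and repeat count are arbitrary):

          #include <stdio.h>

          int main(void)
          {
             FILE *f = fopen("nasty.dat", "w");
             int i;
             if (f == NULL) return 1;
             for (i = 0; i < 300000; i++)   /* 900,000 bytes of "aab" */
                fputs("aab", f);
             fclose(f);
             return 0;
          }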

       Incompressible or virtually-incompressible data may decom-
       press rather more slowly than one would hope.  This is due
       to a naive implementation of the move-to-front coder.

       bzip2  usually  allocates  several  megabytes of memory to
       operate in, and then charges all over it in a fairly  ran-
       dom  fashion.   This means that performance, both for com-
       pressing and decompressing, is largely determined  by  the
       speed  at  which  your  machine  can service cache misses.
       Because of this, small changes to the code to  reduce  the
       miss  rate  have  been observed to give disproportionately
       large performance improvements.  I imagine bzip2 will per-
       form best on machines with very large caches.

       Test mode (-t) uses the low-memory decompression algorithm
       (-s).  This means test mode does not run  as  fast  as  it
       could;  it  could  run as fast as the normal decompression
       machinery.  This could easily be fixed at the cost of some
       code bloat.

CAVEATS
       I/O  error  messages  are not as helpful as they could be.
       Bzip2 tries hard to detect I/O errors  and  exit  cleanly,
       but  the  details  of  what  the problem is sometimes seem
       rather misleading.

       This manual page pertains to version 0.1 of bzip2.  It may
       well  happen that some future version will use a different
       compressed file format.  If you try to  decompress,  using
       0.1,  a  .bz2  file created with some future version which
       uses a different compressed file format, 0.1 will complain
       that  your  file  "is not a bzip2 file".  If that happens,
       you should obtain a more recent version of bzip2  and  use
       that to decompress the file.

       Wildcard expansion for Windows 95 and NT is flaky.

       bzip2recover uses 32-bit integers to represent bit positions
       in compressed files, so it cannot handle compressed files
       longer than 512 megabytes (2^32 bits).  This could easily be
       fixed.

       bzip2recover sometimes reports a  very  small,  incomplete
       final  block.  This is spurious and can be safely ignored.


RELATIONSHIP TO bzip-0.21
       This program is a descendant of the bzip program,  version
       0.21,  which  I released in August 1996.  The primary dif-
       ference of bzip2 is its avoidance of the possibly patented
       algorithms  which  were  used  in 0.21.  bzip2 also brings
       various useful refinements (-s,  -t),  uses  less  memory,
       decompresses  significantly  faster,  and  has support for
       recovering data from damaged files.

       Because bzip2 uses Huffman coding to  construct  the  com-
       pressed  bitstream, rather than the arithmetic coding used
       in 0.21, the compressed representations generated  by  the
       two  programs are incompatible, and they will not interop-
       erate.  The change in suffix from  .bz  to  .bz2  reflects
       this.   It would have been helpful to at least allow bzip2
       to decompress files created by 0.21, but this would defeat
       the primary aim of having a patent-free compressor.

       For a more precise statement about patent issues in bzip2,
       please see the README file in the distribution.

       Huffman  coding  necessarily  involves some coding ineffi-
       ciency compared to arithmetic  coding.   This  means  that
       bzip2  compresses about 1% worse than 0.21, an unfortunate
       but unavoidable fact-of-life.  On the other  hand,  decom-
       pression  is approximately 50% faster for the same reason,
       and the change in file format gave an opportunity  to  add
       data-recovery features.  So it is not all bad.


AUTHOR
       Julian Seward, jseward@acm.org.

       The ideas embodied in bzip and bzip2 are due to (at least)
       the following people: Michael Burrows  and  David  Wheeler
       (for  the  block  sorting  transformation),  David Wheeler
       (again, for the Huffman coder),  Peter  Fenwick  (for  the
       structured  coding  model  in 0.21, and many refinements),
       and Alistair Moffat, Radford Neal and Ian Witten (for  the
       arithmetic  coder  in 0.21).  I am much indebted for their
       help, support and advice.  See the file ALGORITHMS in  the
       source  distribution for pointers to sources of documenta-
       tion.  Christian von Roques  encouraged  me  to  look  for
       faster  sorting algorithms, so as to speed up compression.
       Bela Lubkin encouraged me to improve the  worst-case  com-
       pression  performance.   Many  people sent patches, helped
       with portability problems, lent machines, gave advice  and
       were generally helpful.
=========================================================================
		    GNU GENERAL PUBLIC LICENSE
		       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.
                          675 Mass Ave, Cambridge, MA 02139, USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

			    Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.  This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it.  (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.)  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software.  If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

  Finally, any free program is threatened constantly by software
patents.  We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary.  To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and
modification follow.

		    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License.  The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language.  (Hereinafter, translation is included without limitation in
the term "modification".)  Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.

    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it.  For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable.  However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License.  Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

  5. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Program or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

  7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all.  For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded.  In such case, this License incorporates
the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

  10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

			    NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

		     END OF TERMS AND CONDITIONS

	Appendix: How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    
    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) 19yy  <name of author>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program; if not, write to the Free Software
    Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) 19yy name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
  `Gnomovision' (which makes passes at compilers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs.  If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library.  If this is what you want to do, use the GNU Library General
Public License instead of this License.



AFNI file: README.changes
***  This file is no longer maintained.  See the Web page      *** 
***                                                            *** 
***    http://afni.nimh.nih.gov/afni/afni_latest.html          *** 
***                                                            *** 
***  for information on the latest changes to the AFNI package *** 
***                                                            *** 
***  --- Bob Cox, January 2000                                 *** 



AFNI file: README.compression
Compressed Dataset .BRIK Files
==============================
AFNI now supports the use of compressed .BRIK files.  The routines
that open and read these files detect the compression mode using
the filename suffix, and will use the correct decompression program
to read them in from disk.  The character 'z' is added to the end
of a dataset's listing in the AFNI menus if the .BRIK is compressed;
for example, "elvis [epan]z".

No other files used by AFNI can be compressed and still be readable
by the software.  This includes the .HEAD files, timeseries (.1D)
files, etc.  Note also that the programs 2swap and 4swap don't
do compression or decompression, so that if you need to do byte
swapping on a compressed .BRIK file, you must manually decompress
it, swap the bytes, and (optionally) recompress the file.
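
For example, to byte-swap a gzip-ed .BRIK (dataset name illustrative):

  gzip -d elvis.BRIK.gz
  2swap elvis.BRIK        (or 4swap, depending on the datum size)
  gzip -1 elvis.BRIK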

How to Compress
===============
You can compress the .BRIK files manually.  The following 3 programs
are supported:

  Name      Suffix  Compression Command  Uncompress Command
  --------  ------  -------------------  -------------------
  compress   .Z     compress -v *.BRIK   uncompress *.BRIK.Z
  gzip       .gz    gzip -1v *.BRIK      gzip -d *.BRIK.gz
  bzip2      .bz2   bzip2 -1v *.BRIK     bzip2 -d *.BRIK.bz2

  "compress" is available on almost all Unix systems.
  "gzip" is available on many Unix systems, and can also be
     ftp-ed from the AFNI distribution site.
  "bzip2" is included in the AFNI distribution.  It generally
     compresses more than the other two programs, but is much
     slower at both compression and uncompression.  (See the
     file README.bzip2 for details about this program.)

For large MR image datasets, "compress" and "gzip" have about the
same compression factor and take about the same CPU time (at least
in the samples I've tried here.)

Do NOT compress the .HEAD files!  AFNI will not be able to read them.

Automatic Compression
=====================
If you set the environment variable AFNI_COMPRESSOR to one of
the strings "COMPRESS", "GZIP", or "BZIP2", then most programs
will automatically pass .BRIK data through the appropriate
compression program as it is written to disk.  Note that this
will slow down dataset write operations.
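
For example, to turn on automatic gzip compression:

  setenv AFNI_COMPRESSOR GZIP       (csh/tcsh)
  export AFNI_COMPRESSOR=GZIP       (sh/bash)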

Penalties for Using Compression
===============================
Datasets must be uncompressed when they are read into AFNI (or other
programs), which takes time.  In AFNI itself, a dataset .BRIK file
is only read into the program when its values are actually needed
-- when an image or graph window is opened.  When this happens, or
when you "Switch" to a compressed dataset, there can be a noticeable
delay.  For "compress" and "gzip", this may be a few seconds.  For
"bzip2", the delays will generally be longer.

The speed penalty means that it is probably best to keep the
datasets you are actively using in uncompressed form.  This can
be done by compressing datasets manually, and avoiding the use
of AFNI_COMPRESSOR (which will compress all .BRIKs).  Datasets
that you want to keep on disk, but don't think you will use
often, can be compressed.  They can still be viewed when the
need arises without manual decompression.

Large .BRIK files are normally directly mapped to memory.  This
technique saves system swap space, but isn't useful with compressed
files.  Compressed .BRIK files are read into "malloc" allocated
memory, which will take up swap space.  This may limit the number
of datasets that can be used at once.  AFNI will try to purge unused
datasets from memory if a problem arises, but it may not succeed.
If necessary, the "-purge" option can be used when starting AFNI.

Very large datasets (larger than the amount of RAM on your system)
should not be compressed, since it will be impossible to read such
an object into memory in its entirety.  It is better to rely on
the memory mapping facility in such cases.

Effect on Plugins and Other Programs
====================================
If you use the AFNI supplied routines to read in a dataset, then
everything should work well with compressed .BRIK files.  You can
tell if a dataset is compressed after you open it by using the
DSET_COMPRESSED(dset) macro -- it returns 1 if "dset" is compressed,
0 otherwise.
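
For example, a check along these lines might appear in a plugin
(a minimal sketch -- the function name and dataset name are up to
you, and error handling is omitted):

  #include "mrilib.h"                   /* AFNI library header */

  int brik_is_compressed( char *name )  /* 1 if .BRIK is compressed */
  {
     THD_3dim_dataset *dset = THD_open_dataset( name ) ;
     if( dset == NULL ) return 0 ;      /* no such dataset */
     return DSET_COMPRESSED(dset) ;
  }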

How it Works
============
Using Unix pipes.  Files are opened with COMPRESS_fopen_read or
COMPRESS_fopen_write, and closed with COMPRESS_fclose.  The code
is in files thd_compress.[ch], if you want to have fun.  If you
have a better compression utility that can operate as a filter,
let me know and I can easily include it in the AFNI package.
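
Stripped to its essentials, the idea is ordinary popen-style
filtering; the sketch below conveys the flavor (the real code in
thd_compress.c is more careful about buffering and error checks):

  #include <stdio.h>

  /* Read a compressed file by running a decompression filter and
     reading its standard output through a pipe.                   */
  FILE * open_via_filter( char *fname )
  {
     char cmd[1024] ;
     sprintf( cmd , "gzip -dc %s" , fname ) ;
     return popen( cmd , "r" ) ;    /* caller closes with pclose() */
  }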

=================================
| Robert W. Cox, PhD            |
| Biophysics Research Institute |
| Medical College of Wisconsin  |
=================================



AFNI file: README.copyright

  Major portions of this software are Copyright 1994-2000 by

            Medical College of Wisconsin
            8701 Watertown Plank Road
            Milwaukee, WI 53226

  Development of these portions was supported by MCW internal funds, and
  also in part by NIH grants MH51358 (PI: JS Hyde) and NS34798 (PI: RW Cox).

  *** This software was designed to be used only for research purposes. ***
  *** Clinical applications are not recommended, and this software has  ***
  *** NOT been evaluated by the United States FDA for any clinical use. ***

  Neither the Medical College of Wisconsin (MCW), the National Institutes
  of Health (NIH), nor any of the authors or their institutions make or
  imply any warranty of usefulness of this software for any particular
  purpose, and do not assume any liability for damages, incidental or
  otherwise, caused by the installation or use of this software.  If
  these conditions are not acceptable to you or your institution, or are
  not enforceable by the laws of your jurisdiction, you do not have the
  right to use this software.

  The MCW-copyrighted part of this software is released to the public under
  the GNU General Public License, Version 2.  A copy of this License is
  appended.  The final reference copy of the software that was fully derived
  from MCW is in the tar/gzip archive file afni98_lastmcw.tgz.  (This does
  NOT mean that later code is not copyrighted by MCW - that depends on the
  source file involved.  It simply means that some code developed later comes
  from the NIH, and is not copyrighted.  Other parts developed or contributed
  later are from MCW or other institutions that still maintain their copyright,
  but who release the code under the GPL.)

  The MCW-copyrighted part of the documentation is released to the public
  under the Open Content License (OCL).  A copy of this license is appended.

  These licensing conditions supersede any other conditions on licensing
  or distribution that may be found in the files or documents distributed
  with this software package.

  Other Components
  ----------------
  Components of this software and its documentation developed at the US
  National Institutes of Health (after 15 Jan 2001) are not copyrighted.
  Components of the software and documentation contributed by people at
  other institutions are released under the GPL and OCL (respectively),
  but copyright may be retained by them or their institutions.

  The Talairach Daemon data are incorporated with permission from
  the Research Imaging Center at the University of Texas Health Sciences
  Center at San Antonio.  Thanks go to Drs. Jack Lancaster and Peter Fox
  for sharing this database.

  The netCDF functions included are from the netCDF library from
  the Unidata Program at the University Corporation for Atmospheric
  Research (http://www.unidata.ucar.edu/) and distributed in accordance
  with their Copyright notice (which of course disavows any liability
  for anything).

  The CDF library routines were developed at the University of Texas
  M.D. Anderson Cancer Center, and have been placed into the public domain.
  See the file "cdflib.txt" for more details.

  The eis_*.c functions are C translations of the EISPACK library,
  distributed by Netlib: http://www.netlib.org

  Some of the routines in "mri_stats.c" are from the StatLib repository at
  Carnegie Mellon: http://lib.stat.cmu.edu

  Some of the routines in "mcw_glob.c" are derived from the Berkeley Unix
  distribution.  See that file for their copyright declaration.

  The popup hint functions in "LiteClue.c" are from Computer Generation, Inc.
  See that file for their copyright declaration.

  The volume rendering library "volpack" is by Phil Lacroute, and is copy-
  righted by The Board of Trustees of The Leland Stanford Junior University.
  See file "volpack.h" for their copyright declaration.

  The MD5 routines in thd_md5.c are adapted from the functions in RFC1321
  by R Rivest, and so are derived from the RSA Data Security, Inc MD5
  Message-Digest Algorithm.  See file "thd_md5.c" for the RSA Copyright
  notice.

  The SVM-light software included is by Thorsten Joachims of Cornell
  University, and is redistributed in the AFNI package by permission.
  If you use this software, please cite the paper
      T. Joachims, Making large-Scale SVM Learning Practical.
        Advances in Kernel Methods - Support Vector Learning,
        B. Scholkopf and C. Burges and A. Smola (ed.), MIT-Press, 1999. 
  The SVM-light software is free only for non-commercial use. It must not be
  distributed without prior permission of the author. The author is not
  responsible for implications from the use of this software.

  The sonnets of William Shakespeare are not copyrighted.  At that time --
  the most creative literary period in history -- there was no copyright.
  Whoever says that copyright is NECESSARY to ensure artistic and/or
  intellectual creativity should explain this historical fact.

  ============================================================================

                         GNU GENERAL PUBLIC LICENSE
                            Version 2, June 1991

  Copyright (C) 1989, 1991 Free Software Foundation, Inc. 675 Mass
  Ave, Cambridge, MA 02139, USA. Everyone is permitted to copy and
  distribute verbatim copies of this license document, but changing it
  is not allowed.

                                Preamble

  The licenses for most software are designed to take away your
  freedom to share and change it. By contrast, the GNU General Public
  License is intended to guarantee your freedom to share and change
  free software--to make sure the software is free for all its users.
  This General Public License applies to most of the Free Software
  Foundation's software and to any other program whose authors commit
  to using it. (Some other Free Software Foundation software is
  covered by the GNU Library General Public License instead.) You can
  apply it to your programs, too.

  When we speak of free software, we are referring to freedom, not
  price. Our General Public Licenses are designed to make sure that
  you have the freedom to distribute copies of free software (and
  charge for this service if you wish), that you receive source code
  or can get it if you want it, that you can change the software or
  use pieces of it in new free programs; and that you know you can do
  these things.

  To protect your rights, we need to make restrictions that forbid
  anyone to deny you these rights or to ask you to surrender the
  rights. These restrictions translate to certain responsibilities for
  you if you distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
  gratis or for a fee, you must give the recipients all the rights
  that you have. You must make sure that they, too, receive or can get
  the source code. And you must show them these terms so they know
  their rights.

  We protect your rights with two steps: (1) copyright the software,
  and (2) offer you this license which gives you legal permission to
  copy, distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
  that everyone understands that there is no warranty for this free
  software. If the software is modified by someone else and passed on,
  we want its recipients to know that what they have is not the
  original, so that any problems introduced by others will not reflect
  on the original authors' reputations.

  Finally, any free program is threatened constantly by software
  patents. We wish to avoid the danger that redistributors of a free
  program will individually obtain patent licenses, in effect making
  the program proprietary. To prevent this, we have made it clear that
  any patent must be licensed for everyone's free use or not licensed
  at all.

  The precise terms and conditions for copying, distribution and
  modification follow.

                       GNU GENERAL PUBLIC LICENSE
    TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
  a notice placed by the copyright holder saying it may be distributed
  under the terms of this General Public License. The "Program",
  below, refers to any such program or work, and a "work based on the
  Program" means either the Program or any derivative work under
  copyright law: that is to say, a work containing the Program or a
  portion of it, either verbatim or with modifications and/or
  translated into another language. (Hereinafter, translation is
  included without limitation in the term "modification".) Each
  licensee is addressed as "you".

  Activities other than copying, distribution and modification are not
  covered by this License; they are outside its scope. The act of
  running the Program is not restricted, and the output from the
  Program is covered only if its contents constitute a work based on
  the Program (independent of having been made by running the
  Program). Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
  source code as you receive it, in any medium, provided that you
  conspicuously and appropriately publish on each copy an appropriate
  copyright notice and disclaimer of warranty; keep intact all the
  notices that refer to this License and to the absence of any
  warranty; and give any other recipients of the Program a copy of
  this License along with the Program.

  You may charge a fee for the physical act of transferring a copy,
  and you may at your option offer warranty protection in exchange for
  a fee.

  2. You may modify your copy or copies of the Program or any portion
  of it, thus forming a work based on the Program, and copy and
  distribute such modifications or work under the terms of Section 1
  above, provided that you also meet all of these conditions:

  a) You must cause the modified files to carry prominent notices
  stating that you changed the files and the date of any change.

  b) You must cause any work that you distribute or publish, that in
  whole or in part contains or is derived from the Program or any part
  thereof, to be licensed as a whole at no charge to all third parties
  under the terms of this License.

  c) If the modified program normally reads commands interactively
  when run, you must cause it, when started running for such
  interactive use in the most ordinary way, to print or display an
  announcement including an appropriate copyright notice and a notice
  that there is no warranty (or else, saying that you provide a
  warranty) and that users may redistribute the program under these
  conditions, and telling the user how to view a copy of this License.
  (Exception: if the Program itself is interactive but does not
  normally print such an announcement, your work based on the Program
  is not required to print an announcement.)

  These requirements apply to the modified work as a whole. If
  identifiable sections of that work are not derived from the Program,
  and can be reasonably considered independent and separate works in
  themselves, then this License, and its terms, do not apply to those
  sections when you distribute them as separate works. But when you
  distribute the same sections as part of a whole which is a work
  based on the Program, the distribution of the whole must be on the
  terms of this License, whose permissions for other licensees extend
  to the entire whole, and thus to each and every part regardless of
  who wrote it.

  Thus, it is not the intent of this section to claim rights or
  contest your rights to work written entirely by you; rather, the
  intent is to exercise the right to control the distribution of
  derivative or collective works based on the Program.

  In addition, mere aggregation of another work not based on the
  Program with the Program (or with a work based on the Program) on a
  volume of a storage or distribution medium does not bring the other
  work under the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
  under Section 2) in object code or executable form under the terms
  of Sections 1 and 2 above provided that you also do one of the
  following:

  a) Accompany it with the complete corresponding machine-readable
  source code, which must be distributed under the terms of Sections 1
  and 2 above on a medium customarily used for software interchange;
  or,

  b) Accompany it with a written offer, valid for at least three
  years, to give any third party, for a charge no more than your cost
  of physically performing source distribution, a complete
  machine-readable copy of the corresponding source code, to be
  distributed under the terms of Sections 1 and 2 above on a medium
  customarily used for software interchange; or,

  c) Accompany it with the information you received as to the offer to
  distribute corresponding source code. (This alternative is allowed
  only for noncommercial distribution and only if you received the
  program in object code or executable form with such an offer, in
  accord with Subsection b above.)

  The source code for a work means the preferred form of the work for
  making modifications to it. For an executable work, complete source
  code means all the source code for all modules it contains, plus any
  associated interface definition files, plus the scripts used to
  control compilation and installation of the executable. However, as
  a special exception, the source code distributed need not include
  anything that is normally distributed (in either source or binary
  form) with the major components (compiler, kernel, and so on) of the
  operating system on which the executable runs, unless that component
  itself accompanies the executable.

  If distribution of executable or object code is made by offering
  access to copy from a designated place, then offering equivalent
  access to copy the source code from the same place counts as
  distribution of the source code, even though third parties are not
  compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
  except as expressly provided under this License. Any attempt
  otherwise to copy, modify, sublicense or distribute the Program is
  void, and will automatically terminate your rights under this
  License. However, parties who have received copies, or rights, from
  you under this License will not have their licenses terminated so
  long as such parties remain in full compliance.

  5. You are not required to accept this License, since you have not
  signed it. However, nothing else grants you permission to modify or
  distribute the Program or its derivative works. These actions are
  prohibited by law if you do not accept this License. Therefore, by
  modifying or distributing the Program (or any work based on the
  Program), you indicate your acceptance of this License to do so, and
  all its terms and conditions for copying, distributing or modifying
  the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
  Program), the recipient automatically receives a license from the
  original licensor to copy, distribute or modify the Program subject
  to these terms and conditions. You may not impose any further
  restrictions on the recipients' exercise of the rights granted
  herein. You are not responsible for enforcing compliance by third
  parties to this License.

  7. If, as a consequence of a court judgment or allegation of patent
  infringement or for any other reason (not limited to patent issues),
  conditions are imposed on you (whether by court order, agreement or
  otherwise) that contradict the conditions of this License, they do
  not excuse you from the conditions of this License. If you cannot
  distribute so as to satisfy simultaneously your obligations under
  this License and any other pertinent obligations, then as a
  consequence you may not distribute the Program at all. For example,
  if a patent license would not permit royalty-free redistribution of
  the Program by all those who receive copies directly or indirectly
  through you, then the only way you could satisfy both it and this
  License would be to refrain entirely from distribution of the
  Program.

  If any portion of this section is held invalid or unenforceable
  under any particular circumstance, the balance of the section is
  intended to apply and the section as a whole is intended to apply in
  other circumstances.

  It is not the purpose of this section to induce you to infringe any
  patents or other property right claims or to contest validity of any
  such claims; this section has the sole purpose of protecting the
  integrity of the free software distribution system, which is
  implemented by public license practices. Many people have made
  generous contributions to the wide range of software distributed
  through that system in reliance on consistent application of that
  system; it is up to the author/donor to decide if he or she is
  willing to distribute software through any other system and a
  licensee cannot impose that choice.

  This section is intended to make thoroughly clear what is believed
  to be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
  certain countries either by patents or by copyrighted interfaces,
  the original copyright holder who places the Program under this
  License may add an explicit geographical distribution limitation
  excluding those countries, so that distribution is permitted only in
  or among countries not thus excluded. In such case, this License
  incorporates the limitation as if written in the body of this
  License.

  9. The Free Software Foundation may publish revised and/or new
  versions of the General Public License from time to time. Such new
  versions will be similar in spirit to the present version, but may
  differ in detail to address new problems or concerns.

  Each version is given a distinguishing version number. If the
  Program specifies a version number of this License which applies to
  it and "any later version", you have the option of following the
  terms and conditions either of that version or of any later version
  published by the Free Software Foundation. If the Program does not
  specify a version number of this License, you may choose any version
  ever published by the Free Software Foundation.

  10. If you wish to incorporate parts of the Program into other free
  programs whose distribution conditions are different, write to the
  author to ask for permission. For software which is copyrighted by
  the Free Software Foundation, write to the Free Software Foundation;
  we sometimes make exceptions for this. Our decision will be guided
  by the two goals of preserving the free status of all derivatives of
  our free software and of promoting the sharing and reuse of software
  generally.

                              NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
  WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
  OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY
  KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
  THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
  PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
  PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
  DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
  CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
  WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
  AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU
  FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
  CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
  PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
  RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
  FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF
  SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
  SUCH DAMAGES.

                      END OF TERMS AND CONDITIONS

  ============================================================================

                           OpenContent License (OPL)
                           Version 1.0, July 14, 1998.

     This document outlines the principles underlying the OpenContent
     (OC) movement and may be redistributed provided it remains
     unaltered. For legal purposes, this document is the license under
     which OpenContent is made available for use.

     The original version of this document may be found at
     http://opencontent.org/opl.shtml

     LICENSE

     Terms and Conditions for Copying, Distributing, and Modifying

     Items other than copying, distributing, and modifying the Content
     with which this license was distributed (such as using, etc.) are
     outside the scope of this license.

     1. You may copy and distribute exact replicas of the OpenContent
     (OC) as you receive it, in any medium, provided that you
     conspicuously and appropriately publish on each copy an appropriate
     copyright notice and disclaimer of warranty; keep intact all the
     notices that refer to this License and to the absence of any
     warranty; and give any other recipients of the OC a copy of this
     License along with the OC. You may at your option charge a fee for
     the media and/or handling involved in creating a unique copy of the
     OC for use offline, you may at your option offer instructional
     support for the OC in exchange for a fee, or you may at your option
     offer warranty in exchange for a fee. You may not charge a fee for
     the OC itself. You may not charge a fee for the sole service of
     providing access to and/or use of the OC via a network (e.g. the
     Internet), whether it be via the world wide web, FTP, or any other
     method.

     2. You may modify your copy or copies of the OpenContent or any
     portion of it, thus forming works based on the Content, and
     distribute such modifications or work under the terms of Section 1
     above, provided that you also meet all of these conditions:

     a) You must cause the modified content to carry prominent notices
     stating that you changed it, the exact nature and content of the
     changes, and the date of any change.

     b) You must cause any work that you distribute or publish, that in
     whole or in part contains or is derived from the OC or any part
     thereof, to be licensed as a whole at no charge to all third
     parties under the terms of this License, unless otherwise permitted
     under applicable Fair Use law.

     These requirements apply to the modified work as a whole. If
     identifiable sections of that work are not derived from the OC, and
     can be reasonably considered independent and separate works in
     themselves, then this License, and its terms, do not apply to those
     sections when you distribute them as separate works. But when you
     distribute the same sections as part of a whole which is a work
     based on the OC, the distribution of the whole must be on the terms
     of this License, whose permissions for other licensees extend to
     the entire whole, and thus to each and every part regardless of who
     wrote it. Exceptions are made to this requirement to release
     modified works free of charge under this license only in compliance
     with Fair Use law where applicable.

     3. You are not required to accept this License, since you have not
     signed it. However, nothing else grants you permission to copy,
     distribute or modify the OC. These actions are prohibited by law if
     you do not accept this License. Therefore, by distributing or
     translating the OC, or by deriving works herefrom, you indicate
     your acceptance of this License to do so, and all its terms and
     conditions for copying, distributing or translating the OC.

     NO WARRANTY

     4. BECAUSE THE OPENCONTENT (OC) IS LICENSED FREE OF CHARGE, THERE
     IS NO WARRANTY FOR THE OC, TO THE EXTENT PERMITTED BY APPLICABLE
     LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS
     AND/OR OTHER PARTIES PROVIDE THE OC "AS IS" WITHOUT WARRANTY OF ANY
     KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
     THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
     PARTICULAR PURPOSE. THE ENTIRE RISK OF USE OF THE OC IS WITH YOU.
     SHOULD THE OC PROVE FAULTY, INACCURATE, OR OTHERWISE UNACCEPTABLE
     YOU ASSUME THE COST OF ALL NECESSARY REPAIR OR CORRECTION.

     5. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
     WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY
     MIRROR AND/OR REDISTRIBUTE THE OC AS PERMITTED ABOVE, BE LIABLE TO
     YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
     CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE
     THE OC, EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
     POSSIBILITY OF SUCH DAMAGES.




AFNI file: README.driver
       =====================================================
      *** Driving AFNI from a Plugout or a Startup Script ***
       =====================================================
An external program (i.e., a "plugout") can control some aspects of AFNI.
This functionality is invoked by passing a command line of the form

  DRIVE_AFNI command arguments ...

to AFNI (once the plugout connection is open, of course).  The commands
available are described below.  The sample plugout plugout_drive.c can
be used to test how things work (highly recommended before you start
writing your own code).
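
For example, once AFNI is listening for plugouts, a minimal sketch of
driving it from the shell with plugout_drive (check 'plugout_drive -help'
for the authoritative option list) might look like:

  plugout_drive -com 'OPEN_WINDOW A.axialimage' \
                -com 'SET_XHAIRS A.SINGLE'      \
                -quit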

A startup Script (file ".afni.startup_script") can also give a sequence
of commands to be run immediately after AFNI starts up.  The file consists
of a sequence of command lines (without the "DRIVE_AFNI" prefix).  It is
also possible to read in a Script file using the "Datamode->Misc->Run Script"
button from the AFNI controller.  Some of the current state of AFNI can
be saved to ".afni.startup_script" using the "Datamode->Misc->Save Layout"
button (by giving a blank as the filename -- or any filename containing the
string "script" -- other filenames produce a 'layout' description which
is intended to be included in your .afnirc file).
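
As an illustration, a small .afni.startup_script using only commands
documented below might contain:

  OPEN_WINDOW A.axialimage geom=+10+10
  SET_XHAIRS A.OFF
  SET_FUNC_VISIBLE A.+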

You can also give commands to AFNI on the 'afni' command line, using the
'-com' option.  For example:

  afni -com 'OPEN_WINDOW A.axialimage'       \
       -com 'SWITCH_UNDERLAY anat'           \
       -com 'SAVE_JPEG A.axialimage sss.jpg' \
       -com 'QUIT'                           \
       somedirectory

could be used to create an image file 'sss.jpg' automatically.  The AFNI GUI
would open up (and so X11 must be running), but no user interaction would
actually occur -- the image opens and gets saved, and then AFNI just ends.
N.B.: If the 'QUIT' command weren't included above, AFNI would remain open,
ready for user interaction after the image file was saved.

==============================================================================
A programmer of a plugin can register a command string and a callback function
to be called when that command string is 'driven' to AFNI.  For example:

  static int junkfun( char *cmd )      /* cmd = everything after "JUNK " */
  {
    fprintf(stderr,"junkfun('%s')\n",cmd) ; return 0 ;  /* 0 ==> no warning */
  }

  AFNI_driver_register( "JUNK" , junkfun ) ;  /* e.g., in PLUGIN_init() */

If the callback function's return value is negative, a warning message will be
printed to stderr; otherwise, the return value is ignored.  The string that
is passed to the callback function is everything AFTER the initial command
and the blank(s) that follows; in the above example, if "JUNK elvis lives"
were the driver command, then junkfun is called with the string "elvis lives".

In a plugin, the logical place to put the call to AFNI_driver_register() is
in the PLUGIN_init() function.

If you call AFNI_driver_register() with a new command name that duplicates
an existing one, then an error message is printed to stderr and this call
will be ignored.  For this reason, you may want to prefix your commands
with some identifier; for example, a hypothetical diffusion tensor analysis
plugin could give command names starting with "DTI_".  Or perhaps use your
institution's name or your initials as a prefix, as in "NIMH_" or "RWC_".

=============================================================================
COMMANDS (in no coherent order)
-------------------------------

ADD_OVERLAY_COLOR colordef colorlab
  Adds the color defined by the string "colordef" to the list of overlay
  colors.  It will appear in the menus with the label "colorlab".  Example:
    ADD_OVERLAY_COLOR #ff5599 pinkish

SET_THRESHOLD [c.]val [dec]
  Sets the threshold slider for controller index 'c' (default='A') to level
  ".val" (a number between .0000 and .9999, inclusive).  If the optional
  'dec' parameter is set, this is a number between 0 and 4 (inclusive) setting
  the power-of-ten factor for the slider.  Example:
    SET_THRESHOLD A.3000 2
  will set the '**' (decimal) level of the slider to 2 and the slider value to
  30 (=0.3000*100).
  ++ You can also use "SET_FUNC_THRESH" for the command name.

SET_THRESHNEW [c] val [flags]
  Sets the threshold slider for controller index 'c' (default='A') to the
  numerical value 'val', which must be in the range [0..9999].  If the
  optional 'flags' string contains the character '*', then the slider decimal
  offset (i.e., the '**' setting) will be changed to match the size of 'val'.
  If 'flags' contains the character 'p', then 'val' will be interpreted as
  a p-value (and so must be between 0.0 and 1.0).  Examples:
    SET_THRESHNEW A 9.2731
    SET_THRESHNEW B 0.3971 *p

SET_PBAR_NUMBER [c.]num
  Sets the number of panes in the color pbar to 'num' (currently must be between
  2 and 20, inclusive).  Example:
    SET_PBAR_NUMBER A.10

SET_PBAR_SIGN [c.]+ or [c.]-
  Sets the color pbar to be positive-only (+) or signed (-).  Example:
    SET_PBAR_SIGN A.+

SET_PBAR_ALL [c.]{+|-}num val=color val=color ...
  Sets all color pbar parameters at once;
  The initial string specifies the controller ('A', 'B', etc.), the sign
  condition of the pbar ('+' or '-') and the number of panes to setup.
  'num' equations of the form 'val=color' follow the initial string;
  these set up the top levels and colors of each pane.  Example:
    SET_PBAR_ALL A.+5 2.0=yellow 1.5=green 1.0=blue 0.5=red 0.2=none
  The top pane runs from 2.0-1.5 and is yellow; the second pane runs from
  1.5-1.0 and is green, etc.  The 'color' values must be legal color labels.

SET_PBAR_ALL [c.]{+|-}99 topval colorscale_name [options]
  Sets the color pbar for controller #c to be in "continuous" colorscale
  mode.  Again, '+' or '-' is used to specify if the colorscale should
  be positive-only or signed.  The special value of 99 panes is used
  to indicate colorscale mode.  The number 'topval' tells the scale
  value to go at the top of the colorscale.  The string 'colorscale_name'
  tells which colorscale to use.  For example:
    SET_PBAR_ALL A.+99 1.0 Color_circle_AJJ

  The 'options' available at this time only apply when in this "continuous"
  colorscale case.  They are
    ROTA=n => after loading the colorscale, rotate it by 'n' steps
    FLIP   => after loading the colorscale, flip it upside down
  These options are part of how the AFNI_PBAR_LOCK function works, and
  probably aren't relevant for manual use.

PBAR_ROTATE [c.]{+|-}
  Rotates the color pbar in the positive ('+') or negative ('-') direction:
    PBAR_ROTATE A.+

DEFINE_COLORSCALE name number=color number=color ...
or DEFINE_COLORSCALE name color color color
  Defines a new colorscale with the given name.  The format of the following
  arguments is either like "1.0=#ffff00" or like "#ff00ff" (all must be in the
  same format).  See http://afni.nimh.nih.gov/afni/afni_colorscale.html for
  more information about the format of color names and about how the colorscale
  definition works.
  ++ You can also use "DEFINE_COLOR_SCALE" for the command name.

SET_FUNC_AUTORANGE [c.]{+|-}
  Sets the function "autoRange" toggle to be on ('+') or off ('-'):
    SET_FUNC_AUTORANGE A.+

SET_FUNC_RANGE [c.]value
  Sets the functional range to 'value'.  If value is 0, this turns autoRange
  on; if value is positive, this turns autoRange off:
    SET_FUNC_RANGE A.0.3333

SET_FUNC_VISIBLE [c.]{+|-}
  Turns the "See Overlay" toggle on or off:
    SET_FUNC_VISIBLE A.+
  You can also use SEE_OVERLAY for this, which is closer to the label on
  the GUI button.

SEE_OVERLAY
  Same as SET_FUNC_VISIBLE.

SET_FUNC_RESAM [c.]{NN|Li|Cu|Bk}[.{NN|Li|Cu|Bk}]
  Sets the functional resampling mode:
    SET_FUNC_RESAM A.Li.Li
  sets the func and threshold resampling modes both to Linear interpolation.

OPEN_PANEL [c.]Panel_Name
  Opens the specified controller panel, where 'Panel_Name' is one of
  'Define_Overlay', 'Define_Datamode', or 'Define_Markers'.  At this time,
  there is no way to close a panel except from the GUI.

SYSTEM command string
  Executes "command string" using the system() library function; for
  example, "SYSTEM mkdir aaa".

CHDIR newdirectory
  Changes the current directory; for example, "CHDIR aaa".  This is the
  directory into which saved files (e.g., images) will be written.

RESCAN_THIS [c]
  Rescans the current session directory for controller 'c', where 'c'
  is one of 'A', 'B', 'C', 'D', or 'E'.  If 'c' is absent, the 'A'
  controller's current session is scanned.

SET_SESSION [c.]directoryname
  Switches controller 'c' to be looking at the named directory.  The
  match on directory names is done by a sub-string match - that is,
  directoryname = "fred" will match an AFNI session directory named
  "wilhelm/frederick/von/guttenstein".
  ++ You can also use "SWITCH_SESSION" or "SWITCH_DIRECTORY" for the command.

SET_VIEW [c.]view
  Switches controller 'c' to the named "view", which can be one of
  'orig', 'acpc' or 'tlrc'.  The underlay dataset must have an
  appropriate transformation.

SET_ANATOMY [c.]prefix [i]
  Switches controller 'c' to be looking at the anatomical dataset with
  the given prefix.  The prefix must be a perfect match - this is NOT
  a sub-string match.
  ++ If an optional integer is given (separated by a space) after the
     prefix, this is the sub-brick index to view.
  ++ You can also use "SWITCH_ANATOMY" or "SWITCH_UNDERLAY" for the command.
  ++ The 'prefix' can also be the dataset IDcode string, if you insist.

SET_FUNCTION [c.]prefix [j [k]]
  Same, but for the functional dataset in controller 'c'.
  ++ If an optional integer is given (separated by a space) after the
     prefix, this is the sub-brick index to view as the 'OLay'; if a second
     integer is given, this is the sub-brick index to use as the 'Thr'.
  ++ You can also use "SWITCH_FUNCTION" or "SWITCH_OVERLAY" for the command.

SET_SUBBRICKS [c] i j k
  Without switching underlay or overlay datasets, change the sub-bricks
  being viewed in the viewer specified by the initial letter.
    Index i = underlay sub-brick           (grayscale)
    Index j = overlay sub-brick for 'Olay' (color)
    Index k = overlay sub-brick for 'Thr'  (threshold)
  For example, "SET_SUBBRICKS B 33 -1 44" will set the underlay sub-brick
  to 33, the threshold sub-brick to 44, and will not change the color
  sub-brick (since -1 is not a legal value).
 ++ You can also use "SET_SUB_BRICKS" for the command name.

OPEN_WINDOW [c.]windowname [options]
  Opens a window from controller 'c'.  The window name can be one of
    axialimage sagittalimage coronalimage
    axialgraph sagittalgraph coronalgraph
  If the specified controller is not yet opened, it will be opened
  first (like pressing the 'New' button).  If the command is of the
  form "OPEN_WINDOW c", then only the controller itself will be opened.
  For all windows, one allowed option is:
    geom=PxQ+R+S or geom=PxQ or geom=+R+S
    to make the window be PxQ pixels in size and located at screen
    coordinates (R,S).
  Another option for both graph and image windows is
    keypress=c       -> where 'c' is a single character to send as if
                        the user pressed that key in the specified window
                        ++ multiple keypress= options can be used, but
                           each one can only send one keystroke;
                           example: "keypress=Z keypress=Z"
                           to zoom in twice in an image viewer.
  For image windows, other options available are:
    ifrac=number     -> set image fraction in window to number (<= 1.0)
    mont=PxQ:R       -> montage P across, Q down, every R'th slice
    opacity=X        -> where X is from 0..9
    crop=x1:x2,y1:y2 -> crop images from voxels x1 to x2, and y1 to y2
                        (inclusive) -- mostly for use in .afni.startup_script;
                        use x1=x2=0 and y1=y2=0 to turn cropping off.
  For graph windows, other options available are:
    matrix=number     -> use 'number' sub-graphs (<= 21)
    pinnum=number     -> pin the graph length to 'number' time points
    pinbot=a pintop=b -> pin the graph time window to run from 'a..b'
  You can also open plugin windows with a windowname like so:
    A.plugin.buttonlabel
  where buttonlabel is the plugin's button label with blanks replaced
  by underscores or hyphens (e.g., Render_Dataset).  You can also use
  the geom=+R+S option with this type of window opening, to position
  the plugin interface window.  There is no way to control any other
  settings in the plugin interface (e.g., pre-set some fields).
  If the specified image or graph window is already open, you can still
  use this command to alter some of its properties.
  ++ You can also use "ALTER_WINDOW" for the command name, which makes
     more sense if you are using this to apply some change to an already
     open viewer window.
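
  For example, a single command combining several of these options
  (all values here are arbitrary) might be:
    OPEN_WINDOW A.axialimage geom=400x400+10+10 ifrac=0.8 mont=3x2:5 opacity=6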

CLOSE_WINDOW [c.]windowname
  Closes a window from controller 'c'.  You can only close graph and image
  viewer windows this way, not plugin windows.

SAVE_JPEG [c.]windowname filename
  Save a JPEG dump of the given window to 'filename'.  The windowname can
  be one of 'axialimage', 'sagittalimage', 'coronalimage', 'axialgraph',
  'sagittalgraph', or 'coronalgraph'.  If the filename does not end in
  the string ".jpg" or ".JPG", then ".jpg" will be appended.
  ++ Saving is done via the cjpeg program, which must be in the path,
     and is included in the standard AFNI source and binary collections.
  ++ If the dataset has non-square voxels, then the default method of
     saving images will produce non-square pixels (as extracted from
     the dataset) -- this will make the images look peculiar when
     you open them later.  To avoid this peculiarity, set environment
     variable AFNI_IMAGE_SAVESQUARE to YES (cf. SETENV below).
     This comment applies to all image SAVE_* commands below, except
     for SAVE_RAW* (where AFNI_IMAGE_SAVESQUARE has no effect).
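
  For example (the output filename here is arbitrary):
    SAVE_JPEG A.sagittalimage sagview.jpg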

SAVE_PNG [c.]windowname filename
  Like SAVE_JPEG, but saves to the lossless PNG format.
  ++ Saving is done via the pnmtopng filter, which must be in the path.
     Unlike cjpeg, this program is NOT part of the AFNI collection, but
     must be installed separately (usually by getting the NETPBM package).

SAVE_FILTERED  [c.]windowname filtercommand
  Like SAVE_JPEG or SAVE_PNG, but instead of a filename, you
  give a Unix filter that processes a PPM file. For example
    SAVE_FILTERED axialimage 'pnmcut 10 20 120 240 | pnmtopng > zork.png'
  will crop the image and save it into a PNG file.  You'll need to become
  familiar with the NETPBM package if you want to use this command.
  ++ As indicated in the example, you'll need to put filtercommand
     in quotes if it contains blanks, which it almost surely will.
  Other filter examples:
     Save to a PPM file:  'cat > zork.ppm'
     Save to a TIFF file: 'ppm2tiff -c none > zork.tif'

SAVE_ALLJPEG [c.]imagewindowname filename
SAVE_ALLPNG  [c.]imagewindowname filename
SAVE_MPEG    [c.]imagewindowname filename
SAVE_AGIF    [c.]imagewindowname filename
  Save ALL the images in the given image sequence viewer (either as a
  series of JPEG/PNG files, or as one animation file).  The windowname can
  be one of 'axialimage', 'sagittalimage', or 'coronalimage'.  Do NOT
  put a suffix like '.jpg' or '.mpg' on the filename -- it will be added.
  ++ Unlike 'SAVE_JPEG', these commands do not work with graph windows.

SAVE_RAW [c.]imagewindowname filename
  Saves the raw data from the given image viewer to a file.  This data
  is the slice data extracted from the dataset, not further processed
  in any way (unlike the other SAVE_* image options, which convert the
  slice data to grayscale or colors).  This output file contains only
  the data, with no header of any sort indicating the dimensions of the
  image or the actual type of data stored therein.

SAVE_RAWMONT [c.]imagewindowname filename
  Saves the raw data from the given image viewer to a file, AS MONTAGED.
  (The montage gap is ignored.)  Same as 'SAVE_RAW' if the montage
  isn't on.

SET_DICOM_XYZ [c] x y z
SET_SPM_XYZ   [c] x y z
SET_IJK       [c] i j k
  Set the controller coordinates to the given triple; DICOM_XYZ has the
  coordinates in DICOM (RAI) order, SPM_XYZ has the coordinates in SPM
  (LPI) order, and IJK has voxel indexes instead of spatial coordinates.
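
  For example (coordinates and indexes here are arbitrary):
    SET_DICOM_XYZ A 23.5 -12.0 42.0
    SET_IJK B 30 40 20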

SET_XHAIRS [c.]code
  Set the crosshairs ('Xhairs') control to the specified value, where
  'code' is one of the following strings:
       OFF  SINGLE  MULTI   LR_AP   LR_IS  AP_IS  LR  AP  IS

READ_NIML_FILE fname
  Reads the NIML-formatted file 'fname' from disk and processes it as if
  the data in the file had been sent to AFNI through a TCP/IP socket.

PURGE_MEMORY [dataset_prefix]
  If no prefix is given, the sub-bricks of all datasets will be purged from
  memory, and when re-used, AFNI will re-read them from disk.  If a prefix
  is given, only that dataset (in all coordinate views) will be purged.
  ++ "Locked" datasets will not be purged -- a dataset will be locked
     into memory if it can't be re-read from disk (e.g., was sent from SUMA;
     is being drawn upon, nudged, or acquired in realtime; was loaded from
     a '3dcalc()' command line call; or was fetched across the Web).

QUIT
  AFNI will exit immediately.  Communication with the dead being difficult,
  this action forestalls all further attempts to send commands to AFNI.

SETENV name value
  Set the environment variable "name" to "value" in AFNI; for example
    SETENV AFNI_CROSSHAIR_LINES YES
    SETENV AFNI_IMAGE_SAVESQUARE YES
  Most of the time, when you set an environment variable inside AFNI,
  just changing the variable won't have any immediate visible effect.
  Only when you instigate something that this variable controls will
  anything change in AFNI.  Thus, you may want to 'REDISPLAY' afterwards.

GETENV name
  Get the value of environment variable "name", and print to the terminal.
  For example:
    GETENV AFNI_PLUGINPATH
  would show the directory that plugins were loaded from, if set:
    AFNI_PLUGINPATH = /home/elvis/abin
  If a variable is not set, the output says as much:
    AFNI_PLUGINPATH = 

REDISPLAY
  Forces all images and graphs to be redrawn.

SLEEP ms
  Causes AFNI to sleep for "ms" milliseconds.  The main use would be in
  a script file to provide a pause between some effects.

QUIET_PLUGOUTS
  Turns off the normal plugout communication messages.

NOISY_PLUGOUTS
  Turns on the normal plugout communication messages.

TRACE {YES | NO}
  Turns debug tracing on or off.  Mostly for AFNI developers.


==============================================================================
** GRAPHS **
============
The following commands are used to open graph windows and manipulate them.
These commands don't actually interact with the rest of AFNI - they are
really just using AFNI as a graph display server.  [This functionality
was added per the request of Jerzy Bodurka at the NIH, to provide a way
to graph physiological signals monitored while the subject is in the
scanner, at the same time the EPI images are being sent to the AFNI
realtime plugin.]

At present there are two similar kinds of graphs:

  XY = connected (x,y) pairs - you must supply (x,y) for each new point
  1D = x increments by 1 each time, so you only give y for each new point;
       when x overflows past the right boundary, it wraps back to x=0.

Each graph can have multiple sub-graphs, which are stacked up vertically
with separate y axes and a common x axis (sub-graph #1 at the bottom, etc.).

Label strings in the graphs are interpreted in a TeX-like fashion.  In
particular, an underscore means to start a subscript and a circumflex means
to start a superscript.  Subscript or superscripts that are more than one
character long can be grouped using curly {braces}.

Greek letters and other special characters can be included using TeX-like
escapes. For example, "time (\Delta t=0.1)" might be a good label for the
x-axis of a 1D graph.  The full list of such escapes is

  \Plus      \Cross      \Diamond        \Box
  \FDiamond  \FBox       \FPlus          \FCross    \Burst    \Octagon
  \alpha     \beta       \gamma          \delta     \epsilon  \zeta
  \eta       \theta      \iota           \kappa     \lambda   \mu
  \nu        \xi         \omicron        \pi        \rho      \sigma
  \tau       \upsilon    \phi            \chi       \psi      \omega
  \Alpha     \Beta       \Gamma          \Delta     \Epsilon  \Zeta
  \Eta       \Theta      \Iota           \Kappa     \Lambda   \Mu
  \Nu        \Xi         \Omicron        \Pi        \Rho      \Sigma
  \Tau       \Upsilon    \Phi            \Chi       \Psi      \Omega
  \propto    \int        \times          \div       \approx   \partial
  \cap       \langle     \rangle         \ddagger   \pm
  \leq       \S          \hbar           \lambar
  \cup       \degree     \nabla          \downarrow
  \leftarrow \rightarrow \leftrightarrow \oint
  \in        \notin      \surd           \cents
  \bar       \exists     \geq            \forall
  \subset    \oplus      \otimes         \dagger
  \neq       \supset     \infty          \uparrow
  \{         \}          \\              \_         \?

All characters are drawn with line strokes from an internal font; standard
fonts (e.g., Helvetica) are not available.  If you want classier looking
graphs, stop whining and find another program.

--------------------------

OPEN_GRAPH_XY gname toplab xbot xtop xlab ny ybot ytop ylab nam_1 .. nam_ny
  This opens a graph window for graphing non-MRI data.  Each graph window
  has a gname string; this lets you graph into more than one window.
  Other arguments are
    toplab = string to graph at top of graph              [empty]
    xbot   = numerical minimum of x-axis in graph         [0]
    xtop   = numerical maximum of x-axis in graph         [1]
    xlab   = string to graph below x-axis                 [empty]
    ny     = number of sub-graphs (all share same x-axis) [1]
    ybot   = numerical minimum of y-axis in graph         [0]
    ytop   = numerical maximum of y-axis in graph         [1]
    ylab   = string to graph to left of y-axis            [empty]
    nam_1  = name to plot at right of sub-graph 1, etc.   [not plotted]
  Arguments are separated by spaces.  If a label has a space in it, you can
  put the label inside "double" or 'single' quote characters.  If you don't
  want a particular label plotted, make it the empty string "" or ''.  If you
  don't want names plotted at the right of sub-graphs, stop the arguments at
  ylab.  Only the gname argument is strictly required - the other arguments
  have default values, which are given in [brackets] above.

CLOSE_GRAPH_XY gname
  Closes the graph window with the given name.

CLEAR_GRAPH_XY gname
  Clears the graph out of the given window (leaves the axes and labels).

ADDTO_GRAPH_XY gname x y_1 y_2 .. y_ny [repeat]
  Actually plots data into the given window.  In the i-th sub-graph, a line
  will be drawn from the previous point to the new point (x,y_i), for
  i=1..ny.  You can put many sets of points on one command line, subject
  to the limitation that a plugout command
  line cannot contain more than 64 Kbytes.
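
  Putting the XY commands together, a plugout might send (the graph name
  and all numbers below are arbitrary illustrations):
    OPEN_GRAPH_XY phys 'Physio' 0 10 'time (s)' 2 0 100 '' HR Resp
    ADDTO_GRAPH_XY phys 0.0 62 18
    ADDTO_GRAPH_XY phys 0.5 64 17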

--------------------------

OPEN_GRAPH_1D gname toplab nx dx xlab ny ybot ytop ylab nam_1 .. nam_ny
  Opens a graph window that is set up to plot nx points across with spacing dx,
  in ny separate sub-graphs.  When the graph is full, the graph recycles back
  to the beginning.  The meaning and [default] values of parameters are:
    toplab = string to graph at top of graph              [empty]
    nx     = number of points along the x-axis            [500]
    dx     = spacing between x-axis points                [1]
    xlab   = string to graph below x-axis                 [empty]
    ny     = number of sub-graphs (all share same x-axis) [1]
    ybot   = numerical minimum of y-axis in graph         [0]
    ytop   = numerical maximum of y-axis in graph         [1]
    ylab   = string to graph to left of y-axis            [empty]
    nam_1  = name to plot at right of sub-graph 1, etc.   [not plotted]

CLOSE_GRAPH_1D gname
  Closes the graph window with the given name.

CLEAR_GRAPH_1D gname
  Clears the graph out of the given window (leaves the axes and labels).

ADDTO_GRAPH_1D gname y_1 y_2 .. y_ny [repeat]
  Actually plots data into the given window.  You can put many sets of ny
  values at a time on the command line, subject to the limitation that a
  plugout command line cannot contain more than 64 Kbytes.  Also, if you
  put more than nx sets of values, only the first nx will be plotted, since
  that will fill up the graph through one full cycle.
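
  For example (again, the graph name and all numbers are arbitrary):
    OPEN_GRAPH_1D resp 'Respiration' 200 0.1 'time (s)' 1 -1 1 ''
    ADDTO_GRAPH_1D resp 0.10 0.23 0.31 0.27 0.12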

--------------------------

SET_GRAPH_GEOM gname geom=X-geometry-string
  This lets you move/resize a graph (1D or XY).  X-geometry-string is one
  of the forms:
    300x100         = set window size to 300 pixels wide, 100 high
    +50+90          = set window location to 50 pixels across, 90 down
    300x100+50+90   = set window size and location at the same time



AFNI file: README.environment
Unix Environment Variables Used by AFNI
=======================================
The AFNI program allows you to use several Unix environment variables
to influence its behavior.  The mechanics of setting an environment
variable depend on which shell you are using.  To set an environment
variable named "FRED" to the string "Elvis":

   csh or tcsh:  setenv FRED Elvis
   bash or ksh:  FRED=Elvis ; export FRED

Normally, these commands would go in your .cshrc or .profile files,
so that they would be invoked when you login.  If in doubt, consult
your local Unix guru.  If you don't have one, well....

You don't NEED to set any of these variables -- AFNI will still work
correctly.  But they are an easy way to set up certain defaults to
make AFNI a little easier on your neocortex and hippocampus.

N.B.: Changes to environment variables AFTER you start a program will
      not be seen by that program, since each running program gets
      a private copy of the entire set of environment variables when
      it starts.  This is a standard Unix feature, and is not specific
      to AFNI.  Some variables can be set internally in AFNI using
      the "Edit Environment" control from the "Datamode->Misc" menu
      or from the image window Button-3 popup menu.  Such variables
      are marked with "(editable)" in the descriptions below.

N.B.: Some variables below are described as being of "YES/NO" type.
      This means that they should either be set to the value "YES"
      or to the value "NO".

N.B.: You can now set environment variables on the 'afni' command
      line; for example:
        afni -DAFNI_EDGIZE_OVERLAY=YES -DAFNI_SESSTRAIL=3
      This may be useful for a 'one time' situation, or as an alias.
      You can also use this '-Dname=val' option in 1dplot and 3dDeconvolve.
      -- RWCox - 22 Mar 2005
      And now you can use this feature on most program command lines.
      -- RWCox - 13 Dec 2007

N.B.: At the end of this file is a list of several environment variables
      that affect the program 3dDeconvolve, rather than the interactive
      AFNI program itself.

N.B.: If you set an AFNI environment variable on the command line, or
      in a shell startup file (e.g., ~/.cshrc), and also have that
      variable in your ~/.afnirc file, you will get a warning telling
      you that the value in the ~/.afnirc file is being ignored.
      To turn off these warnings, set environment variable
      AFNI_ENVIRON_WARNINGS to NO.

********************************************************
June 1999: Setting environment variables in file .afnirc
********************************************************
You can now set environment variables for an interactive AFNI run in the
setup (.afnirc) file.  This is provided as a convenience.  An example:

***ENVIRONMENT
  AFNI_HINTS = YES
  AFNI_SESSTRAIL = 3

Note that the spaces around the "=" sign are required.  See README.setup
for more information about the possible contents of .afnirc besides the
environment variables.

A few other programs in the AFNI package also read the ***ENVIRONMENT
section of the .afnirc file.  This is needed so that environment settings
that affect those programs (e.g., AFNI_COMPRESSOR for auto-compression of
output datasets) can be properly initialized in .afnirc.

At the same time, the routine in AFNI that initializes certain internal
constants from X11 resources (usually in your .Xdefaults or .Xresources
file, and described in file AFNI.Xdefaults) has been modified to also
allow the same constants to be set from Unix environment variables.
For example, the gap (in pixels) between sub-graphs is set by the
X11 resource "AFNI*graph_ggap", and can now be set by the environment
variables "AFNI_graph_ggap" or "AFNI_GRAPH_GGAP", as in

  AFNI_graph_ggap = 6   // this is a comment

If an X11 resource is actually set, it will take priority over the
environment variable.  Some of the variables that can be set in this
way are:

 AFNI_ncolors             = number of gray levels to use
 AFNI_gamma               = gamma correction for image intensities
 AFNI_graph_boxes_thick   = 0=thin lines, 1=thick lines, for graph boxes
 AFNI_graph_grid_thick    = ditto for the graph vertical grid lines
 AFNI_graph_data_thick    = ditto for the data graphs
 AFNI_graph_ideal_thick   = ditto for the ideal graphs
 AFNI_graph_ort_thick     = ditto for the ort graphs
 AFNI_graph_dplot_thick   = ditto for the dplot graphs
 AFNI_graph_ggap          = initial spacing between graph boxes
 AFNI_graph_matrix        = initial number of sub-graphs
 AFNI_fim_polort          = polynomial detrending order for FIM
 AFNI_fim_ignore          = how many pts to ignore at start when doing FIM
 AFNI_montage_periodic    = True allows periodic montage wraparound
 AFNI_purge               = True allows automatic dataset memory purge
 AFNI_resam_vox           = dimension of voxel (mm) for resampled datasets
 AFNI_resam_anat          = One of NN, Li, Cu, Bk for Anat resampling mode
 AFNI_resam_func          = ditto for Func resampling mode
 AFNI_resam_thr           = ditto for Threshold resampling mode
 AFNI_pbar_posfunc        = True will start color pbar as all positive
 AFNI_pbar_sgn_pane_count = # of panes to start signed color pbar with
 AFNI_pbar_pos_pane_count = # of panes to start positive color pbar with

Some other such variables are described in file AFNI.Xdefaults.  Note that
values that actually affect the way the X11/Motif interface appears, such as
AFNI*troughColor, must be set via the X11 mechanism and cannot be set using
Unix environment variables.  This is because they are interpreted by the
Motif graphics library when it starts and not by any actual AFNI code.

The following example is from my own .afnirc file on the Linux system on
which I do most of the AFNI development.  The first ones (in lower case)
are described in AFNI.Xdefaults.  The later ones (all upper case) are
documented in this file.  (You can tell from this file that I like to
have things line up.  You would never be able to tell this from the
piles of paper in my office, though.)

 ***ENVIRONMENT

 AFNI_ncolors             = 60      // number of gray levels
 AFNI_gamma               = 1.5     // adjust for proper display
 AFNI_purge               = True    // purge datasets from memory when not used
 AFNI_chooser_doubleclick = Apply   // like Apply button; could also be Set
 AFNI_chooser_listmax     = 25      // max nonscrolling items in chooser lists
 AFNI_graph_width         = 512     // initial width of graph window (pixels)
 AFNI_graph_height        = 384     // initial height of graph window
 AFNI_graph_data_thick    = 1       // graph time series with thick lines
 AFNI_fim_ignore          = 2       // default value for FIM ignore
 AFNI_graph_ggap          = 7       // gap between sub-graphs (pixels)
 AFNI_pbar_hide           = True    // hide color pbar when it changes size
 AFNI_hotcolor            = Violet  // color to use on Done and Set buttons
 AFNI_SESSTRAIL           = 2       // see below for these ...
 AFNI_RENDER_ANGLE_DELTA  = 4.0     //                       |
 AFNI_RENDER_CUTOUT_DELTA = 4.0     //                       |
 AFNI_FIM_BKTHR           = 25.0    //                       |
 AFNI_SPLASHTIME          = 3.0     //                       v

------------------------------------
Variable: AFNI_DONT_SORT_ENVIRONMENT
------------------------------------
If this YES/NO variable is YES, then the Edit Environment controls will
NOT be sorted alphabetically.  The default action is to sort them
alphabetically.  If they are unsorted, the editable environment variables
will appear in the control panel in the order in which they were added to
the code (that is, in an order that makes no real sense).

---------------------
Variable: AFNI_ORIENT (editable)
---------------------
This is a string used to control the display of coordinates in the AFNI
main control window.  The string must be 3 letters, one each from the
pairs {R,L} {A,P} {I,S}.  The first letter in the string gives the
orientation of the x-axis, the second the orientation of the y-axis,
the third the z-axis:

   R = right-to-left           L = left-to-right
   A = anterior-to-posterior   P = posterior-to-anterior
   I = inferior-to-superior    S = superior-to-inferior

If AFNI_ORIENT is undefined, the default is RAI.  This is the order
used by DICOM, and means

   the -x axis is Right,    the +x axis is Left,
   the -y axis is Anterior, the +y axis is Posterior,
   the -z axis is Inferior, the +z axis is Superior.

As a special case, using the code 'flipped' is equivalent to 'LPI',
which is the orientation used in many neuroscience journals.

This variable is also recognized by program 3dclust, which will report
the cluster coordinates in the (x,y,z) order given by AFNI_ORIENT.
Both AFNI and 3dclust also recognize the command line switch
"-orient string", where string is a 3 letter code that can be used
to override the value of AFNI_ORIENT.
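
For example, to get coordinates displayed in the 'neuroscience' order:

   setenv AFNI_ORIENT LPI

or equivalently, on the command line:

   afni -orient LPI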

The plugin "Coord Order" (plug_coord.c) allows you to interactively
change the orientation of the variable display within AFNI.

-------------------------
Variable: AFNI_PLUGINPATH
-------------------------
This variable should be the directory in which AFNI should search
for plugins.  If there is more than one appropriate directory, they
can be separated by colons, as in

   setenv AFNI_PLUGINPATH /directory/one:/directory/two

If this variable is not set, then AFNI will use the PATH variable
instead.  This will waste time, since most directories in the PATH
will not have plugins.  On some systems, using the PATH has been
known to cause problems when AFNI starts.  I believe this is due to
bugs in the system library routines (e.g., dlopen) used to manage
dynamically loaded shared objects.

------------------------
Variable: AFNI_NOPLUGINS
------------------------
If this YES/NO variable is set to YES, then AFNI will not try to
read plugins when it starts up.  The command line switch "-noplugins"
will have the same effect.

--------------------------
Variable: AFNI_YESPLUGOUTS
--------------------------
If this YES/NO variable is set to YES, then AFNI will try to listen
for plugouts when it starts.  The command line switch "-yesplugouts"
will have the same effect.  (Plugouts are an experimental feature
that allow external programs to exchange data with AFNI.)  It is now
also possible to start plugout listening from the Datamode->Misc menu.

---------------------
Variable: AFNI_TSPATH
---------------------
This variable should be set to any directory that you want AFNI to
scan for timeseries files (*.1D -- see the AFNI manual).  If
more than one directory is desired, then colons can be used to
separate them, as in AFNI_PLUGINPATH.  Note that timeseries files
are read from all session directories, so directories provided by
AFNI_TSPATH are designed to contain extra timeseries files that
you want loaded no matter what AFNI sessions and datasets are being
viewed.
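
For example (the directory names here are just illustrations):

   setenv AFNI_TSPATH /home/elvis/ts_library:/data/group/ideals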

------------------------
Variable: AFNI_MODELPATH
------------------------
This variable should be set to the directory from which you want AFNI
timeseries models to be loaded.  These models are similar to plugins,
and are used by programs 3dNLfim, 3dTSgen, and the plugin plug_nlfit
(menu label "NLfit & NLerr") -- see documentation file 3dNLfim.ps.
If AFNI_MODELPATH is not given, then AFNI_PLUGINPATH will be used
instead.

-----------------------------------------
Variable: AFNI_IMSIZE_* (or MCW_IMSIZE_*)
-----------------------------------------
These variables (named AFNI_IMSIZE_1 to AFNI_IMSIZE_99) allow you
to control how the AFNI programs read binary image files.  The use of
these is somewhat complicated, and is explained in detail at the end
of the auxiliary programs manual (afni_aux.ps), in the section on "3D:"
file specifications, and is also explained in the AFNI FAQ list.

------------------------
Variable: AFNI_SESSTRAIL (editable)
------------------------
This variable controls the number of directory levels shown when
choosing between session directories with the "Switch Session"
button.  This variable should be set to a nonnegative integer.
If a session directory name were
   this/is/a/directory/name/
then the "Switch Session" chooser would display the following:

   AFNI_SESSTRAIL    Display
   --------------    -------
            0        name/
            1        directory/name/
            2        a/directory/name/
            3        is/a/directory/name/
            4        this/is/a/directory/name/

That is, AFNI_SESSTRAIL determines how many trailing levels of
the directory name are used for the display.  If AFNI_SESSTRAIL
is not set, then it is equivalent to setting it to 0 (which
was the old method).

--------------------
Variable: AFNI_HINTS
--------------------
This is a string controlling whether or not the popup "hints" are
displayed when AFNI starts.  If the string is "NO", then the hints
are disabled when AFNI starts, otherwise they are enabled.  In
either case, they can be turned off and on interactively from the
Define Datamode->Misc menu.

Hints can be permanently disabled by setting the C macro
DONT_USE_HINTS in machdep.h and recompiling AFNI.  They can also
be disabled at runtime by setting AFNI_HINTS to "KILL".

-------------------------
Variable: AFNI_COMPRESSOR (cf. AFNI_AUTOGZIP) (editable)
-------------------------
This variable is used to control automatic compression of .BRIK files on
output.  The legal values are "COMPRESS", "GZIP", and "BZIP2", which
respectively invoke programs "compress", "gzip", and "bzip2" (the program
must be in your path for compression to work).  If AFNI_COMPRESSOR is
equal to one of these, then all AFNI programs will automatically pass
.BRIK data through the appropriate compression program as it is written
to disk.  Note that this will slow down dataset write operations.  Note
also that compressed datasets cannot be mapped directly from disk into
memory ('mmap'), but must occupy actual memory (RAM) and swap space.  For
more details, see file README.compression.

Note that compressed (.BRIK.Z, .BRIK.gz, and .BRIK.bz2) datasets will
automatically be uncompressed on input, no matter what the setting of
this variable.  AFNI_COMPRESSOR only controls how the datasets are
written.
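
For example,

   setenv AFNI_COMPRESSOR GZIP

will cause AFNI programs to write .BRIK.gz files (assuming 'gzip' is
in your path).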

------------------------
Variable: AFNI_BYTEORDER
------------------------
This variable is used to control the byte order for output files.
If you use it, the two legal values are "LSB_FIRST" and "MSB_FIRST".
If you don't use it, the default order on your CPU will be used.
The main purpose of this would be if you were using a mixture of
CPU types reading shared disks (i.e., using NFS).  If the majority
of the systems were MSB_FIRST (e.g., SGI, HP, Sun), but there were
a few LSB_FIRST systems (e.g., Intel, DEC Alpha), then you might
want to do 'setenv AFNI_BYTEORDER MSB_FIRST' on all of the MSB_FIRST
systems to make sure that the datasets that they write out are
readable by the other computers.

Note that AFNI programs can now check the .HEAD file for the byte
order of a dataset, and will swap the bytes on input, if needed.
If you wish to mark all of the datasets on a given system as
being in a particular order, the following command should work:

 find /top/dir -name \*.HEAD -exec 3drefit -byteorder NATIVE_ORDER {} \;

Here, '/top/dir' is the name of the top level directory under
which you wish to search for AFNI datasets.  The string NATIVE_ORDER
means to set all datasets to the CPU default order, which is probably
what you are using now.  (You can use the program 'byteorder' to
find out the native byte ordering of your CPU.)

------------------------------
Variable: AFNI_BYTEORDER_INPUT
------------------------------
This variable is used to control the byte order for input files.
If you use it, the two legal values are "LSB_FIRST" and "MSB_FIRST".
The value of this variable is only used for old datasets that do
not have the byte order encoded in their headers.  If this variable
is not present, then the CPU native byte order is used.  If this
variable is present, and its value differs from the native byte
order, then 2 byte dataset BRIKs (short) are 2-swapped (as in
"ab" -> "ba") when they are read from disk, and 4 byte datasets
(float, complex) are 4-swapped ("abcd" -> "dcba").

[per the request of John Koger]

---------------------
Variable: AFNI_NOMMAP
---------------------
This YES/NO variable can be used to turn off the mmap feature by which
AFNI can load datasets into memory using the map-file-to-memory
functionality of Unix.  (Dataset .BRIK files will only be mmap-ed if
they are not compressed and are in the native byte order of the CPU.)
On some systems, mmap doesn't seem to work very well (e.g., Linux kernel
version 1.2.13).  You can disable mmap by 'setenv AFNI_NOMMAP YES'.

The penalty for disabling mmap is that all datasets must be loaded
into actual RAM.  AFNI does not have the ability to load a dataset
only partially, so if a 20 Megabyte .BRIK file is accessed, all of
it will be loaded into RAM.  With mmap, the Unix operating system will
decide how much of the file to load.  In this way, it is possible to
deal with more files than you have swap space on your computer
(since .BRIK files are mmap-ed in readonly mode).

The moral of the story: buy more memory, it's cheap.  At the time
I write this line [Aug 1998], I have a PC with 384 MB of RAM, and
it is great to use with AFNI.

[Feb 2004] I now have a Mac G5 with 8 GB of RAM, and it is even greater!

----------------------
Variable: AFNI_PSPRINT (editable)
----------------------
This variable is used to define a command that will print the
standard input (stdin) to a PostScript printer.  If it is defined,
the "->printer" button on the timeseries "Plot" windows will work.
For some Unix systems, the following should work:
  setenv AFNI_PSPRINT "lp -"
For others, this may work
  setenv AFNI_PSPRINT "lpr -"
It all depends on the printer software setup you have.  To send the
output into GhostView
  setenv AFNI_PSPRINT "ghostview -landscape -"

In the (very far distant) future, other windows (e.g., image and graph
displays) may get the ability to print to a PostScript file or printer.

---------------------------
Variable: AFNI_LEFT_IS_LEFT (editable)
---------------------------
Setting this YES/NO variable to YES tells AFNI to display images with
the left side of the subject on the left side of the window.  The default
mode is to display the right side of the subject on the left side of
the window - the radiology convention.  This "handedness" can also be
controlled with the "-flipim" and "-noflipim" command line options to AFNI.

--------------------------
Variable: AFNI_ALWAYS_LOCK
--------------------------
Setting this YES/NO variable to YES tells AFNI to start up with all
the controller windows locked together.  If you mostly use multiple
controllers to view datasets in unison, then this will be useful.
Notice that the Time Lock feature is not automatically enabled
by this -- you must still actuate it manually from the Lock menu
on the Define Datamode panel.

------------------------
Variables: AFNI_RENDER_* (editable)
------------------------
These variables set some defaults in the "Render Dataset" (volume
rendering) plugin.  The first two variables are

  AFNI_RENDER_ANGLE_DELTA  = stepsize for viewing angles, in degrees
  AFNI_RENDER_CUTOUT_DELTA = stepsize for cutout dimensions, in mm

These stepsizes control how much the control parameters change when
one of their up- or down-arrows is pressed.  Both of these stepsize
values default to 5.0.

The third variable is

  AFNI_RENDER_PRECALC_MODE = "Low", "Medium", or "High"

This is used to set the initial precalculation mode for the renderer
(this mode can be altered interactively, unlike the stepsizes).

The fourth variable is

  AFNI_RENDER_SHOWTHRU_FAC = some number between 0.0 and 1.0

This is used to control the way in which the "ShowThru" Color Opacity
option renders images.  See the rendering plugin Help window for more
information.
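
For example, to take finer angle steps and start in Medium precalculation
mode, you might use
   setenv AFNI_RENDER_ANGLE_DELTA  2.5
   setenv AFNI_RENDER_PRECALC_MODE Medium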

-------------------------
Variable: AFNI_NOREALPATH
-------------------------
Normally, when AFNI reads a list of session directories, it converts
their names to the "real path" form, which follows symbolic links, and
removes '/./' and '/../' components.  These converted names are used
for display purposes in the "Switch Session" chooser and in other
places.  If you wish to have the names NOT converted to the "real path"
format, set this YES/NO environment variable to YES, as in

   setenv AFNI_NOREALPATH YES

(For more information on the "real path" conversion, see the Unix
man page for the realpath() function.)  Note that if you use this
feature, then the effect of AFNI_SESSTRAIL will be limited to what
you type on the command line, since it is the realpath() function
that provides the higher level hierarchies of the session names.

----------------------------
Variable: AFNI_NO_MCW_MALLOC
----------------------------
AFNI uses a set of "wrapper" macros and functions to let itself keep
track of the memory allocated and freed by the C malloc() library.
This is useful for debugging purposes (see the last items on the 'Misc'
menu in the AFNI 'Define Datamode' control panel), but carries a small
overhead (both in memory and speed).  Setting this YES/NO environment
variable to YES provides one way to disable this facility, as in

   setenv AFNI_NO_MCW_MALLOC YES

Another way to permanently disable this capability (so that it isn't
even compiled) is outlined in the file machdep.h.  Also, the interactive
AFNI program takes the command line switch "-nomall", which will turn
off these functions for the given run.

N.B.: Setting this variable in the .afnirc file will have no effect,
      since the decision whether to use the routines in mcw_malloc.c
      is made at the very start of the program, before .afnirc is
      scanned.  Therefore, to use this variable, you must set it
      externally, perhaps in your .cshrc or .profile initialization
      file.

------------------------
Variable: AFNI_FIM_BKTHR
------------------------
This sets the threshold for the elimination of the background voxels
during the interactive FIM calculations.  The average intensity of
all voxels in the first 3D volume used in the correlation is calculated.
Voxels with intensity below 0.01 * AFNI_FIM_BKTHR * (this average)
will not have the correlation computed.  The default value is 10.0, but
values as large as 50.0 may be useful.  This parameter may be changed
interactively from the FIM->Edit Ideal submenu in a graph viewer.
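
For example, to use a more aggressive background cutoff:
   setenv AFNI_FIM_BKTHR 25.0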

------------------------
Variable: AFNI_FLOATSCAN (editable)
------------------------
If this YES/NO variable is set to YES, then floating point bricks
are checked for illegal values (NaN and Infinity) when they are
read into an AFNI program -- illegal values will be replaced by
zeros.  If a dataset brick contains such illegal values that go
undetected, AFNI programs will probably fail miserably, and have
been known to go into nearly-infinite loops.

Setting this variable implies setting AFNI_NOMMAP to YES, since
only in-memory bricks can be altered (mmap-ed bricks are readonly).

The command line program 'float_scan' can be used to check and
patch floating point files.

[14 Sep 1999] The program to3d will scan input float and complex
files for illegal values, and patch illegal input numbers with
zeros in the output dataset.  If this behavior is not desired for
some bizarre reason, the '-nofloatscan' command line option to
to3d must be used.

-----------------------
Variable: AFNI_NOSPLASH
-----------------------
If this YES/NO variable is set to YES, then the AFNI splash screen
will not be displayed when the program starts.  I'm not sure WHY
you would want to disable this thing of beauty (which is a joy
forever), but if your soul is thusly degraded, so be it.

------------------------
Variable: AFNI_SPLASH_XY
------------------------
If set, this variable should be in the form "100:37" (two integers
separated by a colon).  These values specify the (x,y) screen location
where the splash window's upper left corner will be placed.  If not
set, x will be set to center the splash window on the display and
y will be 100.

-------------------------
Variable: AFNI_SPLASHTIME
-------------------------
The value of this variable determines how long the AFNI splash screen
will stay popped up, in seconds (default value = 5.0).  The splash
screen will always stay up until the first AFNI controller window is
ready for use.  If the time from program start to this ready condition
is less than AFNI_SPLASHTIME, the splash screen will stay up until
AFNI_SPLASHTIME has elapsed; otherwise, the splash screen will be
removed as soon as AFNI is ready to go.  By setting AFNI_SPLASHTIME
to 0.0, you can have the splash screen removed as soon as possible
(and the fade-out feature will be disabled).

-----------------------------
Variable: AFNI_SPLASH_ANIMATE
-----------------------------
If this variable is NO, then the splash screen animation will be disabled.
Otherwise, it will run.

--------------------------------
Variable: AFNI_FIM_PERCENT_LIMIT (editable)
--------------------------------
This sets an upper limit on the % Change values that the FIM+ computation
will report.  For example

  setenv AFNI_FIM_PERCENT_LIMIT 50

means that computed values over 50% will be set to 50%, and values
below -50% will be set to -50%.  This can be useful to avoid scaling
problems that arise when some spurious voxels with tiny baselines have
huge percent changes.  This limit applies to all 3 possible percentages
that FIM and FIM+ can compute: % from baseline, % from average, and
% from top.

---------------------------
Variable: AFNI_NOTES_DLINES
---------------------------
This sets the upper limit on the number of lines displayed in the
Notes plugin, for each note.  If not present, the limit is 9 lines
shown per note at once.  To see a note longer than this limit, you'll
have to use the vertical scrollbar.
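
For example, to show up to 20 lines per note at once:
   setenv AFNI_NOTES_DLINES 20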

-----------------------
Variable: AFNI_FIM_MASK
-----------------------
This chooses the default subset of values computed with the FIM+
button in a graph window.  The mask should be the sum of the desired
values from this list:

    1 = Fit Coef
    2 = Best Index
    4 = % Change
    8 = Baseline
   16 = Correlation
   32 = % From Ave
   64 = Average
  128 = % From Top
  256 = Topline
  512 = Sigma Resid

If you don't set this variable, the default mask is 23 = 1+2+4+16.
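
For example, to compute only the Fit Coef, Correlation, and Sigma Resid
values (1+16+512):
   setenv AFNI_FIM_MASK 529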

-----------------------------------
Variable: AFNI_NO_BYTEORDER_WARNING
-----------------------------------
If this YES/NO variable is set to YES, then AFNI programs will not
warn you when reading in a dataset that does not contain a byte
order flag.  The default is to issue such a warning.  Only older
versions of AFNI create datasets that don't have the byte order
flag.  (See also the variable AFNI_BYTEORDER, described far above.)
The purpose of this warning is to alert you to possible problems
when you move datasets between computers with different CPU types.

--------------------------
Variable: AFNI_PCOR_DENEPS
--------------------------
The correlation coefficient in FIM is computed as the ratio of two
quantities.  If the denominator is negative or zero,
then this value is meaningless and may even cause the program to
crash.  Mathematically, the denominator cannot be zero or negative,
but this could arise due to finite precision arithmetic on the computer
(i.e., roundoff error accumulation).  To avoid this problem, the routine
that computes the correlation coefficient compares the denominator to a
value (called DENEPS) - if the denominator is less than DENEPS, then
the correlation coefficient for that voxel is set to zero.

The denominator that is being computed is proportional to the variance
of the time series.  If the voxel time series data is very small, then
the variance will be really small - so much so that the DENEPS test
will be failed, even though it shouldn't be.  This problem has arisen
when people input time series whose typical value is 0.001 or smaller.
It never occurred to me that people would input data this small to the
AFNI FIM routines.  To get around this difficulty, set this environment
variable to a value for DENEPS; for example
  setenv AFNI_PCOR_DENEPS 0.0
will turn off the checking entirely.  Or you could do
  setenv AFNI_PCOR_DENEPS 1.e-10

-----------------------------
Variable: AFNI_ENFORCE_ASPECT (editable)
-----------------------------
Some Linux window managers do not enforce the aspect ratio (width to height
proportion) request that the image display module makes.  This means that
image windows can become undesirably distorted when manually resized.
Setting this YES/NO variable to YES will make AFNI itself enforce the
aspect ratio whenever an image window is resized.

---------------------------------
Variables: AFNI_<plugin>_butcolor
---------------------------------
These variables (one for each AFNI plugin) let you set the menu button colors
for the Plugins menu item.  For example
  setenv AFNI_plug_power_butcolor red3
will make the "Power Spectrum" button appear in a dark red color.  The format
of the variable is exemplified above: the <plugin> part is replaced by the
filename of the plugin (after removing the suffix).  Note that it is possible
for the plugin author to hardcode the menu button for his/her plugin, in
which case the corresponding environment variable will have no effect.

Colors are specified as described in file README.setup.  If you are using
an X11 PseudoColor visual, then you should be economical with color usage!

The purpose of this feature is to let you highlight the plugins that you
use most frequently.  The plugin menu keeps growing, and it is easy to
lose track of the buttons you use most often.

-----------------------------
Variable: AFNI_MARKERS_NOQUAL (editable)
-----------------------------
If this YES/NO variable is set to YES, then the interactive AFNI program
behaves as if the "-noqual" command line option had been included.  This
feature was added at the request of Dr. Michael S. Beauchamp, who has a
very rare neurological disorder called "noqualagnosia".

----------------------
Variable: AFNI_OPTIONS
----------------------
In the spirit of the previous variable, this variable can be used to set
up command line options that will always be passed to the interactive
AFNI program.  If more than one option is needed, then they should be
separated by spaces, and the whole value of the variable will need to be
placed in quotes.  For example

   setenv AFNI_OPTIONS "-noqual -ncolors 60"

Note that the AFNI command line option "-nomall" cannot be specified this
way (cf. the discussion under variable AFNI_NO_MCW_MALLOC).

------------------------------
Variable: AFNI_NO_SIDES_LABELS (editable)
------------------------------
As of 01 Dec 1999, the interactive AFNI program now displays a label
beneath each image window showing which side of the image is on the left
edge of the window.  This label is based on the anatomical directions
encoded in the anatomical dataset .HEAD file, usually when to3d was used
to create the file.  If you do NOT want these labels displayed (why not?),
set this YES/NO environment variable to YES.

----------------------------------
Variable: AFNI_NO_ADOPTION_WARNING
----------------------------------
AFNI now can print a warning when it forces a dataset to have an anatomy
parent dataset (the "forced adoption" function).  This happens when
a dataset does not have an anatomy parent encoded into its .HEAD
file (either via to3d or 3drefit), and there is more than one anatomical
dataset in the directory that has Talairach transformation markers
attached.  If you wish to enable this warning, set this YES/NO variable
to NO.  For more information on this subject, please see
  http://afni.nimh.nih.gov/afni/afni_faq.shtml#AnatParent .

-----------------------------------
Variable: AFNI_NO_NEGATIVES_WARNING
-----------------------------------
If this YES/NO variable is set to YES, then to3d will skip the usual
warning that it pops up in a message window when it discovers negative
values in the input short images.  (The warning will still be printed
to stdout.)

---------------------------------
Variable: AFNI_NO_OBLIQUE_WARNING
---------------------------------
If this YES/NO variable is set to YES, then the AFNI GUI will skip the usual
warning that it pops up in a message window when an oblique dataset is selected.
(The warning will still be printed to stdout.)

----------------------
Variable: AFNI_NO_XDBE
----------------------
If this YES/NO variable is set to YES, then the X11 Double Buffer
Extension (XDBE) will not be used, even if the X11 server supports it.
This is needed when the X11 server says that it supports it, but actually
does not implement it correctly - this is a problem on the Xsgi server
running under IRIX 6.5.3 on R4400 machines.

------------------------------
Variable: AFNI_VIEW_ANAT_BRICK (editable)
          AFNI_VIEW_FUNC_BRICK (editable)
------------------------------
One of the (very few) confusing parts of AFNI is the "warp-on-demand"
viewing of transformed datasets (e.g., in the +tlrc coordinate system).
This allows you to look at slices taken from transformed volumes without
actually computing and storing the entire transformed dataset.  This
viewing mode is controlled from the "Define Datamode" control panel.
When an anatomical dataset has a +tlrc.BRIK file, then you can choose
between "View Anat Data Brick" and "Warp Anat on Demand"; when there
is no +tlrc.BRIK file for the dataset, then only "Warp Anat on Demand"
is possible.

If you switch to the Talairach view when the current anat dataset does
not have a +tlrc.BRIK file, then the "Warp Anat on Demand" mode will
be turned on.  If you then switch to a dataset that does have a
+tlrc.BRIK file, "Warp Anat on Demand" will still be turned on,
although the "View Anat Data Brick" option will be enabled.

If you set the YES/NO variable AFNI_VIEW_ANAT_BRICK to YES,
then "View Anat Data Brick" will be turned on whenever possible after
switching datasets.  Similarly, setting AFNI_VIEW_FUNC_BRICK to YES
will engage "View Func Data Brick" whenever possible (when the BRIK
file exists and its grid spacing matches the anatomical grid spacing).
Note that switching any dataset (func or anat) triggers the same
routine, and will set either or both "View Brick" modes on.  When
these environment variables are present, the only way to switch to
"Warp" mode when "View Brick" mode is possible is to do it manually
(by clicking on the toggle button).

When you use one of the drawing plugins ("Draw Dataset" or "Gyrus Finder"),
you must operate directly on the dataset BRIK.  For this reason, it is
important to be in "View Data Brick" mode on these occasions.  Setting
these variables is one way to ensure that this will happen whenever
possible.

When AFNI is in "Warp Anat on Demand" mode, the word "{warp}" will
appear in the windows' titlebars.  This provides a reminder of the
viewing mode you are using (warped from a brick, or data directly
extracted from a brick), since the "Define Datamode" control panel
will not always be open.

08 Aug 2003: I have modified the way these variables are treated in
AFNI so that they now default to the "YES" behavior.  If you don't
want this, you have to explicitly set them to "NO" from this day forth.

----------------
Variable: TMPDIR
----------------
This variable specifies the directory where temporary files are to be
written.  It is not unique to AFNI, but is used by many Unix programs.
You must have permission to write into this directory.  If you want to
use the current directory, setting TMPDIR to "." would work.  If TMPDIR
is not defined, directory /tmp will be used.  On some systems, this
directory may not have enough space for the creation of large temporary
datasets.  On most Unix systems, you can tell the size of various disk
partitions using a command like "df -k" (on HPUX, "bdf" works).

----------------------------
Variable: AFNI_GRAYSCALE_BOT
----------------------------
This variable sets the darkest level shown in a grayscale image window.
The default value is 55 (a leftover from Andrzej Jesmanowicz).  You can
set this value to anything from 0 to 254.
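
For example, to let grayscale images range all the way down to black:
   setenv AFNI_GRAYSCALE_BOT 0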

----------------------------
Variable: AFNI_SYSTEM_AFNIRC
----------------------------
If this variable is set, it is the name of a file to be read like the
user's .afnirc file (see README.setup).  The purpose is to allow a
system-wide setup file to be used.  To do this, you would create such
a file in a useful place - perhaps where you store the AFNI binaries.
Then each user account should have the equivalent of

   setenv AFNI_SYSTEM_AFNIRC /place/where/setup/is/stored/.afnirc

defined in its .cshrc (.bashrc, etc.) file.  Note that it doesn't make
sense to define this variable in the user's .afnirc file, since that
file won't be read until AFTER this file is read.  Also note that use
of the -skip_afnirc option will cause both the system and user setup
files to be skipped.

------------------------
Variable: AFNI_PBAR_IMXY (editable)
------------------------
This variable determines the size of the image saved when the
"Save to PPM" button is selected for a color pbar.  It should be
in the format
  setenv AFNI_PBAR_IMXY 20x256
which means to set the x-size (horizontal) to 20 pixels and the
y-size (vertical) to 256 pixels.  These values are the default,
by the way.

--------------------------
Variable: AFNI_LAYOUT_FILE
--------------------------
If defined, this variable is the name of a file to read at startup
to define the "layout" of AFNI windows at the program start.  If
this name starts with a '/' character, then it is an absolute path;
otherwise, it is taken to be a path relative to the user's home
directory ($HOME).  If the AFNI command line switch "-layout" is
used, it will override this specification.

The simplest way to produce a layout file is to use the "Save Layout"
button on the Datamode->Misc menu.  You can then edit this file;
the format should be fairly self-explanatory.  The structure of the
file is similar to the .afnirc file (cf.  README.setup).  In fact,
the layout file can be included into .afnirc (since it is just another
*** section) and then setting AFNI_LAYOUT_FILE = .afnirc in the
***ENVIRONMENT section should work.

A sample layout file:

***LAYOUT
 A               geom=+73+1106                 // start controller A
 A.sagittalimage geom=320x320+59+159 ifrac=0.8 // and Sagittal image
 A.sagittalgraph geom=570x440+490+147 matrix=9 // and Sagittal graph
 B                                             // start controller B
 B.plugin.ROI_Average                          // start a plugin

Each window to be opened has a separate command line in this file.
The "geom=" qualifiers specify the size and position of the windows.
For images, "ifrac=" can be used to specify the fraction of the window
occupied by the image (if "ifrac=1.0", then no control widgets will be
visible).  For graphs, "matrix=" can be used to control the initial
number of sub-graphs displayed.  For plugins, the label on the button
that starts the plugin is used after the ".plugin." string (blanks
should be filled with underscores "_").  In the example above, the last
two windows to be opened do not have a "geom=" qualifier, so their
placement will be chosen by the window manager.

If you add "slow" after the "***LAYOUT", then each window operation
will be paused for 1 second to let you watch the layout operations
proceed gradually.  Otherwise, they will be executed as fast as
possible (which still may not be all that fast).

Using layouts with a window manager that requires user placement
of new windows (e.g., twm) is a futile and frustrating exercise.

-------------------------
Variable: AFNI_tsplotgeom
-------------------------
Related to the above, if you set this environment variable (in the
***ENVIRONMENT section, not in the ***LAYOUT section), it is used
to set the geometry of the plotting windows used for time series
plots, histograms, etc. -- all the graphs except the dataset plots.
Its format should be something like "550x350"; this example sets
the width to 550 pixels and the height to 350 pixels.  If you don't
set this, the default is "200x200", which is quite small on a high
resolution display.
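
For example:
   setenv AFNI_tsplotgeom 550x350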

--------------------------
Variables: AFNI_REALTIME_*
--------------------------
This set of variables allows you to control the initial setup of the
realtime data acquisition plugin (menu item "RT Options").  Normally,
this plugin is active only if AFNI is started with the "-rt" command
line option.  (It will consume CPU time continually as it polls for
an incoming data connection, which is why you don't want it running
by default.)  The following variables can be used to initialize the
plugin's options:

AFNI_REALTIME_Activate = This is a YES/NO variable, and allows you
                         to have the realtime plugin active without
                         using the "-rt" command line option.  If
                         this variable is set to YES, then you can
                         disable the realtime plugin with "-nort".

The variables below are used to set the initial status of the widgets
in the realtime plugin's control window.  Each one has the same name as
the labels in the control window, with blanks replaced by underscores.
The values to set for these variables are exact copies of the inputs
you would specify interactively (again, with blanks replaced by
underscores).  For details as to the meaning of these options, see
the plugin's Help window.

AFNI_REALTIME_Images_Only  = "No" or "Yes"
AFNI_REALTIME_Root         = name for datasets to be created
AFNI_REALTIME_Update       = an integer from 0 to 19
AFNI_REALTIME_Function     = "None" or "FIM" (cf. AFNI_FIM_IDEAL below)
AFNI_REALTIME_Verbose      = "No", "Yes", or "Very"
AFNI_REALTIME_Registration = "None", "2D:_realtime", "2D:_at_end",
                             "3D:_realtime", "3D:_at_end",
                             or "3D:_estimate"
AFNI_REALTIME_Base_Image   = an integer from 0 to 59
AFNI_REALTIME_Resampling   = "Cubic", "Quintic", "Heptic", "Fourier",
                             or "Hept+Four"
AFNI_REALTIME_Graph        = "No", "Yes", or "Realtime"
AFNI_REALTIME_NR           = an integer from 5 to 9999
AFNI_REALTIME_YR           = a floating point number from 0.1 to 10.0
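
For example, a minimal setup that activates the plugin, registers volumes
in realtime, and graphs the motion parameters (all values taken from the
legal options listed above) might be
   setenv AFNI_REALTIME_Activate     YES
   setenv AFNI_REALTIME_Registration 3D:_realtime
   setenv AFNI_REALTIME_Graph        Realtime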

The following internal controls can only be set using these environment
variables (there is no GUI to set these values):

AFNI_REALTIME_volreg_maxite      = an integer >= 1 [default = 9]
AFNI_REALTIME_volreg_maxite_est  = an integer >= 1 [default = 1]
AFNI_REALTIME_volreg_graphgeom   = something like 320x320+59+159

AFNI_REALTIME_CHILDWAIT = max wait time (in sec) for child info
                          process [default = 66.6]; not needed if
                          child info process is not used

AFNI_REALTIME_WRITEWAIT = if the image data pauses for this number
                          of seconds, then the datasets being constructed
                          will be written to disk [default=37.954];
                          since this output may take several seconds,
                          you may need to adjust this if you are in
                          fact doing imaging with a very long TR.
                          Note that after this wait, the plugin can
                          still receive image data -- even if the image
                          source program is silent for a very long time,
                          AFNI will still be waiting patiently for data.

AFNI_GRAPH_AUTOGRID     = By default, if the number of time points in an
                          AFNI graph viewer changes, the density of
                          vertical grid lines changes.  If you don't
                          want this to happen, set this variable to NO.

AFNI_REALTIME_MP_HOST_PORT = HOST:PORT

        When this variable is set, the realtime plugin will attempt to open a
        tcp socket to the corresponding host and port, and will send the six
        registration correction parameters for each 3D volume received by the
        plugin.  This applies only to the case of graphing 3D registration.
        The socket will be opened at the start of each run, and will be closed
        at the end.  A simple example of what to set this variable to is
        localhost:53214.
        See 'serial_helper -help' for more details.

AFNI_REALTIME_SEND_VER   = Y/N

        If AFNI_REALTIME_MP_HOST_PORT is set, the RT plugin has 3 choices
        of what to send to that port (possibly to serial_helper):
            0. the motion parameters
            1. motion params, along with average EPI values over each ROI
               in the mask dataset (if set)
            2. motion params, along with all voxel values over the mask
               dataset (including index, i,j,k and x,y,z values)
        If AFNI_REALTIME_SEND_VER is set to YES, then the plugin will offset
        the last byte of the communication HELLO string by the version number
        (0, 1 or 2).  In the case of versions 1 or 2, the plugin will send
        the number of ROIs/voxels in a 4-byte int after the HELLO string.

AFNI_REALTIME_Mask_Vals  = String (one of the listed strings)

        This allows the user to set the "Vals to Send" field from the RT
        plugin's "Mask" line.  It determines what data are sent to the remote
        MP program (e.g. serial_helper).  Valid strings are:

            None        - send nothing
            Motion_Only - send only the 6 registration parameters
            ROI_means   - send the mean EPI value per mask ROI (value) per TR
            All_Data    - send each voxel value (in mask) per TR

AFNI_REALTIME_SHOW_TIMES = Y/N

        If set, the RT plugin will output CPU times whenever motion parameters
        are sent to the remote program, allowing evaluation of timing.  The
        times are modulo one hour, and are at a millisecond resolution.

For more information about how the realtime plugin works, read the file
README.realtime.

Also see "Dimon -help" (example E "for testing complete real-time system").
Also see "serial_helper -help".

------------------------
Variable: AFNI_FIM_IDEAL
------------------------
This variable specifies the filename of the initial FIM ideal timeseries.
The main use of this is to initialize the Realtime plugin without
direct user intervention.
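
For example (the filename here is just an illustration):
   setenv AFNI_FIM_IDEAL ~/ideals/ideal_response.1D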

--------------------------
Variable: AFNI_FIM_SAVEREF
--------------------------
When you run the interactive AFNI 'fim' (from the graph viewer FIM menu),
the program saves the reference time series (and ort time series, if any)
in the new functional dataset header, with the attribute name
AFNI_FIM_REF (or AFNI_FIM_ORT).  If you do NOT want this information saved,
then set this variable to NO.  Two sample ways to use this information is
the command below:
  1dplot "`3dAttribute -ssep ' ' AFNI_FIM_REF r1_time@1+orig`"
  1dcat  "`3dAttribute -ssep ' ' AFNI_FIM_REF r1_time@1+orig`" > ref.1D
The 3 different styles of Unix quotes must be used exactly as shown here!

----------------------------------
Variable: AFNI_PLUGINS_ALPHABETIZE
----------------------------------
If this YES/NO variable is set to NO, then the plugin buttons will
not be alphabetized on the menu, and they will appear in the
order that AFNI chooses.  Otherwise, the plugin menu buttons will
be alphabetized by default. Alphabetizing is done without regard to
case (using the C library routine strcasecmp).

----------------------------
Variable: AFNI_VOLREG_EDGING
----------------------------
This variable affects the operation of 3dvolreg, the volume registration
plugin, and the 3D registration code in the realtime acquisition plugin.
It determines the size of the region around the edges of the base volume
where the default weight will be set to zero.  Call the value of this
variable 'ee'.  If 'ee' is a plain number (e.g., 5), then it is a voxel
count, giving the thickness along each face of the 3D brick.  If 'ee' is
of the form '5%', then it is a fraction of each brick size.  For
example, '5%' of a 256x256x124 volume means that 13 voxels on each side
of the xy-axes will get zero weight, and 6 along the z-axis.  '5%' is
the default value used by the 3D registration routines (in mri_3dalign.c)
if no other value is specified.
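
For example, either of these would zero the weight in a 3 voxel rind or
a 10% rind, respectively:
   setenv AFNI_VOLREG_EDGING 3
   setenv AFNI_VOLREG_EDGING 10%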

--------------------
Variable: AFNI_TRACE
--------------------
This variable controls the initial setting of the tracing (debugging)
code when AFNI programs startup.  If it is set to 'y', then tracing
is turned on and set to the LOW mode (like -trace in AFNI).  If it is
set to 'Y', then tracing is turned on and set to the HIGH mode (like
-TRACE in AFNI).  Anything else, and tracing is turned off.

N.B.: You can't set this variable in .afnirc and expect it to have
      any effect (and why would you want to?), since it is read from
      the environment BEFORE the .afnirc file is read in.

N.B.: At this moment (26 Jan 2001), only the AFNI program itself is
      configured for tracing.  As time goes on, the entire AFNI
      programming library and suite of programs will be revamped for
      this purpose.  The goal is to make it easier to find bugs, and
      localize crashes.

-------------------------
Variable: AFNI_TRACE_FILE
-------------------------
If this variable is set, then the output from the AFNI function tracing
macros will be written to a file with that name, rather than to stdout.
This variable cannot be set in .afnirc; the intention is to provide a
way to get 'clean' tracing output (not mixed up with other program junk)
that can be fed to Ziad Saad's AnalyzeTrace function.

------------------------
Variable: AFNI_ROTA_ZPAD
------------------------
This variable controls the amount of zero-padding used during 3D rotations
in 3drotate, 3dvolreg, etc.  It provides a default value for the "-zpad"
options of these programs.  If zero-padding is used, then this many voxels
are padded out on each edge (all 6 faces) of a 3D brick before rotation.
After the rotation, these perimeter values (whatever they might be) will
be stripped off.  If "-zpad" is used on the command line, it overrides
this value.  Zero padding during rotation is useful to avoid edge effects,
the worst of which is the loss of data off the edge of the volume during
the 4 shearing stages.
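
For example, to pad each face with 4 planes of zeros before rotating:
   setenv AFNI_ROTA_ZPAD 4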

------------------------
Variable: AFNI_TO3D_ZPAD
------------------------
This variable sets the number of slices added on each z-face in datasets
created by program to3d.  It provides a default value for the "-zpad" option
of that program.  It can be set to an integer, meaning a slice count, or
a number of millimeters, meaning a minimum distance to pad:
   setenv AFNI_TO3D_ZPAD 2
   setenv AFNI_TO3D_ZPAD 16mm
If "-zpad" is used on the to3d command line, it overrides this value.
If neither is present, no zero padding is used.  Note well that this
padding is only in the z-direction, unlike that of AFNI_ROTA_ZPAD.

----------------------------
Variable: AFNI_IMAGE_MINFRAC (editable)
----------------------------
This variable sets the minimum size of an image window when it is first
opened, in terms of a fraction of the overall screen area.  By default,
this value is set to 0.02; you can override this by (for example)
   setenv AFNI_IMAGE_MINFRAC 0.05
If you set this value to 0.0, then there will be no minimum.  This is
the old behavior, where the initial window size is always 1 screen pixel
per data pixel, and can lead to image windows that are hard to resize or
otherwise use (when the dataset is small).  The largest value I recommend
for AFNI_IMAGE_MINFRAC is 0.1; however, you can set it to as large as 0.9
if you are completely crazy, but I'm not responsible for the results --
don't even think of complaining or commenting to me about problems that
arise if you try this!

----------------------------
Variable: AFNI_IMAGE_MAXFRAC
----------------------------
This variable sets the maximum size of an image window, as a fraction of
the width and height of the screen.  The default value is 0.9.  This lets
you prevent image windows from auto-resizing to be too big when you
change datasets.  Note that if you have turned on AFNI_ENFORCE_ASPECT, then
this feature will prevent you from resizing a window to be larger than
the AFNI_IMAGE_MAXFRAC fraction of the screen dimensions.
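
For example, to keep image windows to at most 70% of the screen dimensions:
   setenv AFNI_IMAGE_MAXFRAC 0.7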

-----------------------
Variable: AFNI_AUTOGZIP (cf. AFNI_COMPRESSOR) (editable)
-----------------------
If this YES/NO variable is set to YES, then when AFNI programs write a
dataset .BRIK file to disk, they will test to see if the data is easily
compressible (at least 80%).  If so, then the GZIP compression will be
used.  (For this to work, the gzip program must be in your path.) This
can be useful if you are dealing with mask datasets, which are usually
highly compressible, but don't want the overhead of trying to compress
and decompress arbitrary MRI datasets.

A command line method to carry out compression of datasets that will
compress well is to use a csh script like the following:

  #!/bin/csh
  foreach fred ( `find . -name \*.BRIK -print` )
    ent16 -%50 < $fred
    if( $status == 1 ) gzip -1v $fred
  end

This will only gzip .BRIK files that the program ent16 estimates will
compress by at least 50%.  Note that ent16's estimate of compression
may be high or low relative to what gzip actually does.

------------------------------
Variable: AFNI_DONT_MOVE_MENUS (editable)
------------------------------
If this YES/NO variable is set to YES, then the functions that try
to move popup menus to "good" locations on screens will be skipped.
This seems to be necessary on some Solaris systems, where the menus
can end up being moved to bizarre locations.

----------------------------
Variable: AFNI_MINC_DATASETS
----------------------------
If this YES/NO variable is not set to NO, then MINC-format files
with name suffix .mnc will be read into the interactive AFNI
program at startup, along with standard .HEAD/.BRIK datasets.
That is, you have to set this variable explicitly to NO if you
don't want MINC-format files to be automatically recognized by
the interactive AFNI program.  This variable does not affect
the ability of command line programs (3dSomething) to read
.mnc input files.

----------------------------
Variable: AFNI_MINC_FLOATIZE
----------------------------
If this YES/NO variable is set to YES, then when MINC-format files
are read in as datasets, their values will be scaled to floats.
Otherwise, their values will be scaled to the same data type as
stored in the file.  In some cases, the latter behavior is not
good; for example, if a byte-valued file (intrinsic range 0..255)
is scaled to the range 0..0.5 in the MINC header, then after
conversion back to bytes, the resulting AFNI dataset values will
all be zero.  Setting AFNI_MINC_FLOATIZE to YES will cause the
scaled values to be stored as floats.

------------------------------
Variable: AFNI_MINC_SLICESCALE
------------------------------
If this YES/NO variable is set to NO, then AFNI will not use the
image-min and image-max scaling when reading data from MINC files.
Normally, you want this scaling, since MINC files are scaled separately
in each slice.  However, if the image-min and image-max values in the
MINC file are damaged, then you can turn off the scaling this way.

----------------------------
Variable: AFNI_ANALYZE_SCALE
----------------------------
If this YES/NO variable is set to NO, then the "funused1" entry
in the Mayo Analyze .hdr file will not be used as a scale factor
for the images contained in the corresponding .img file.  Otherwise,
if funused1 is positive and not equal to 1.0, all the image data
in the .img file will be scaled by this value.

-------------------------------
Variable: AFNI_ANALYZE_FLOATIZE
-------------------------------
If this YES/NO variable is set to YES, then Mayo Analyze files
will be scaled to floats on input.  Otherwise, they will be read
in the format in which they are stored in the .img file.  Conversion
to floats can be useful if the scaling factor is such that the image
native format can't hold the scaled values; for example, if short
values in the image range from -1000..1000 and the scale factor
is 0.0001, then the scaled values range from -0.1..0.1, which would
be truncated to 0 in the scaled image if it is not "floatized".
(Conversion to floats will only be done to byte, short, and int
image types.)

---------------------------------
Variable: AFNI_ANALYZE_ORIGINATOR
---------------------------------
If this YES/NO variable is set to YES, then AFNI will attempt
to read and use the ORIGINATOR field in a Mayo Analyze file
to set the origin of the pixel space in AFNI.  This origin
can be used directly by several programs--the main AFNI viewer,
and all of the 3dxxxx programs, including especially 3dcopy,
which is the preferred way to convert an Analyze format file
to an AFNI native file.
This variable will also force 3dAFNItoANALYZE to write the
ORIGINATOR field into the output Analyze file based on the
input AFNI file's origin information.
The ORIGINATOR field should be compatible with SPM in most
cases, but please verify this.

--------------------------
Variable: AFNI_START_SMALL
--------------------------
If this YES/NO variable is set to YES, then when AFNI starts, it will
look for the smallest datasets in the first session, and choose these
as its starting point.  This can be useful if you also use the layout
feature to pop open an image window on startup; in that case, if the
default starting dataset (the first alphabetical) is huge, you won't
see anything while the program reads all of it into memory before displaying
the first image.

---------------------------
Variable: AFNI_MENU_COLSIZE
---------------------------
This numerical variable sets the maximum number of entries in a popup
menu column (e.g., like the sub-brick menus for bucket datasets).  The
default value is 20, but you may want to make this larger (say 40).  When
you have a menu with a huge number of entries, the menu can become so
wide that it doesn't fit on the screen.  Letting the columns be longer
will make the menus be narrower across the screen.

Another way to get a handle on such huge menus is to Button-3 (right)
click on the label to the left of the menu.  This will pop up a one-
column scrollable list chooser that is equivalent to the menu.  In this
way, it is still possible to use menus that have hundreds of entries.
The maximum number of entries shown at one time in a scrollable list
chooser is given by variable AFNI_chooser_listmax if it exists, otherwise
by AFNI_MENU_COLSIZE.
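
For example, to allow up to 40 entries per menu column:
   setenv AFNI_MENU_COLSIZE 40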

-----------------------------
Variable: AFNI_GLOBAL_SESSION
-----------------------------
This variable, if it exists, is the name of a directory that contains
"global" datasets - ones that you want to be visible in each "Switch Underlay"
or "Switch Overlay" menu.  Pointers to the datasets read from this directory
will be appended to the dataset list for each directory read from the
command line.  In the "Switch" choosers, these datasets are marked with
the character 'G' at the right, and they appear last in the list.

It really only makes sense to put +tlrc datasets in the global session
directory, since only they can be presumed to be aligned with other datasets.
Also, it is probably best if you make sure each global anatomical dataset
has itself as the anatomy parent; this can be enforced by issuing the command
  3drefit -apar SELF *.HEAD
in the global session directory.

When you Switch Sessions and are viewing a global dataset, it is likely that
you will NOT be viewing the same dataset after the Switch Session.  You will
have to then Switch Underlay and/or Switch Overlay to get back to the same
global dataset(s).

If you start AFNI and there are no datasets in the sessions given on the
command line, then the directory specified by this variable becomes the
default session.  If there are no datasets there, either, then AFNI makes
up a dummy dataset (AFNI cannot operate without at least one dataset).
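
For example (the directory name here is just an illustration):
   setenv AFNI_GLOBAL_SESSION /data/atlases/global_session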

------------------------------
Variable: AFNI_DISP_SCROLLBARS (editable)
------------------------------
If this YES/NO variable is set to YES, then the 'Disp' control window
(on the image viewers) will have scrollbars attached.  This window has
grown larger over the years, and for some people with pitifully small
displays (e.g., laptops), it is now taller than their screens.  If
activated, this option will prevent the Disp window from being so tall
and will attach scrollbars so you can access all of its contents.

Note: If you change this value interactively, via Edit Environment, the
change will only affect Disp windows opened after you 'Set' the variable.
That is, already opened Disp windows won't suddenly get scrollbars if
you change this to YES.

------------------------------
Variable: AFNI_GRAPH_TEXTLIMIT (editable)
------------------------------
This numerical variable sets the upper limit on the number of rows shown
in the Button-3 popup in a sub-graph.  If the number of rows in the popup
would be more than this value, a text window with scrollbars is used
instead of a "spring-loaded" menu pane.  If you set this value to 1, then
the text window will always be used.  Note that a text window does not
automatically popdown, but must be explicitly dismissed by the user
pressing the "Quit" button.

-----------------------------
Variable: AFNI_GRAPH_BASELINE
-----------------------------
This variable should be set to one of the strings "Individual", "Common",
or "Global", corresponding to the choices on the Opt->Baseline menu in
a graph window.  (Actually, only the first letter of the string will be
used.)  This variable will determine the initial setting of the Baseline
menu when a graph window opens.

-------------------------------
Variable: AFNI_GRAPH_GLOBALBASE
-------------------------------
Normally, the global baseline for a graph window is set to the smallest
value found in the entire 3D+time dataset.  This variable lets you specify
a numerical value to be used for this purpose instead.  Probably the most
common setting (for those who want to use this feature at all, which means
Mike Beauchamp) would be
  setenv AFNI_GRAPH_GLOBALBASE 0
Of course, you can always change the global baseline from the Opt->Baseline
menu.

--------------------------
Variable: AFNI_VALUE_LABEL (editable)
--------------------------
If this YES/NO variable is set to YES, then the data value label on the
Define Overlay control panel will be turned on when only 1 or 2 image
viewer windows are open.  This will consume more CPU time and redrawing
time than the default, which is that this label is only turned on when
all 3 image viewer windows are open.  If you are operating X11 remotely
over a slow connection, this option should not be turned on.

----------------------------
Variable: AFNI_SUMA_BOXCOLOR (editable)
----------------------------
This string defines the color used for overlaying surface nodes transmitted
from SUMA to AFNI.  The default is an orangish-yellow.  If you like yellow,
then do
   setenv AFNI_SUMA_BOXCOLOR yellow
If this is set to "none", then these node boxes won't be plotted.

-----------------------------
Variable: AFNI_SUMA_LINECOLOR (editable)
-----------------------------
This string defines the color used for overlaying the intersection of SUMA
surface triangles with image slice planes.  The default is white.  If this
is set to "none", then these lines won't be plotted.

---------------------------
Variable: AFNI_SUMA_BOXSIZE (editable)
---------------------------
This variable defines the size of the boxes drawn at each surface node
transmitted from SUMA.  The default is 0.25, which means that each box is
plus and minus 1/4 of a voxel size about the node location. If you want a
larger box, you could try
   setenv AFNI_SUMA_BOXSIZE 0.5

----------------------------
Variable: AFNI_SUMA_LINESIZE (editable)
----------------------------
This variable sets the thickness of the lines used when drawing a surface
intersection overlay.  The units are the width of the entire image;
reasonable values are in the range 0..0.01; 0 means to draw the thinnest
line possible.  Since this is editable, you can experiment with it to
see what looks good.

-------------------------
Variable: AFNI_NIML_START
-------------------------
If this YES/NO variable is set to YES, then NIML listening will be engaged
from when AFNI starts.  You can also enable NIML from the command line with
the option "-niml", and from the Datamode->Misc menu item "Start NIML".

NIML is the mechanism by which AFNI talks to other programs - it is the
successor to plugouts.  At this moment (Mar 2002), the only external NIML
program is SUMA - the surface mapper.

---------------------------
Variable: AFNI_KEEP_PANNING (editable)
---------------------------
If this YES/NO variable is set to YES, then when the Zoom pan gets turned
on in the AFNI image viewer, it will stay on until it is explicitly turned
off.  (The default behavior is to turn panning off after the user releases
the mouse button.)

-------------------------------
Variable: AFNI_IMAGE_LABEL_MODE
-------------------------------
This integer determines the placement of the image coordinate labels drawn
in the AFNI image viewer windows.  The possible values are
   0  =  Labels are off
   1  =  Labels in upper left
   2  =  Labels in upper right
   3  =  Labels in lower left
   4  =  Labels in lower right
   5  =  Labels in upper middle
   6  =  Labels in lower middle
You can also control the placement and size of the labels from the Button-3
(right-click) popup menu attached to the intensity bar to the right of the
image sub-window.
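
For example, to place the labels in the lower left corner:
   setenv AFNI_IMAGE_LABEL_MODE 3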

-------------------------------
Variable: AFNI_IMAGE_LABEL_SIZE
-------------------------------
This integer determines the size of the image coordinate labels:
   0  =  Small
   1  =  Medium
   2  =  Large
   3  =  Huge
   4  =  Enormous
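
For example, to get Large labels:
   setenv AFNI_IMAGE_LABEL_SIZE 2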

--------------------------------
Variable: AFNI_IMAGE_LABEL_COLOR (editable)
--------------------------------
This variable controls the color of the image coordinate labels.

----------------------------------
Variable: AFNI_IMAGE_LABEL_SETBACK (editable)
----------------------------------
This variable, a floating point value between 0 and 0.1, determines how
far from the edge an image coordinate label will be drawn.  The units are
fractions of the image width/height.

------------------------------
Variable: AFNI_CROSSHAIR_LINES (editable)
------------------------------
If this YES/NO variable is set to YES, then the image crosshairs will be
drawn using lines rather than pixels.  By default (this is the original
AFNI way), crosshair lines are drawn the same way as functional overlays:
by putting color pixels on top of the image.  The new way draws lines on
top of the image instead.  The difference is quite visible when the image
is zoomed; overlay by pixels shows the crosshair lines as fat blobs, while
lines are drawn as thin as possible, no matter what the image window
size and zoom factor are.

Good points about crosshairs drawn with lines:
 - They are less obtrusive than pixel overlay, especially if you zoom
     or enlarge the image a lot
 - When doing a montage with Spacing=1, they'll look better in the
     orthogonal slices.
Good points about crosshairs drawn with pixel overlay:
 - Pixel overlays can be rendered as translucent (on X11 TrueColor displays);
     geometrical overlays are always solid color.
So you have to decide what you need most.  You can change this item using
the "Edit Environment" pseudo-plugin on the Datamode->Misc menu, so you
can play with it interactively to get the features you want.

----------------------------
Variable: AFNI_CROP_ZOOMSAVE (editable)
----------------------------
When saving a zoomed image, the default is to save the entire zoomed image,
not just the part you see.  If this YES/NO variable is set to YES, then only
the visible part will be saved.

---------------------------
Variables: AFNI_TLRC_BBOX_*
---------------------------
These variables let you choose the size of the "Talairach Box", into which
+tlrc datasets are transformed.  If defined, they should be positive values,
in mm.  The 5 variables (any or all of which may be used) are:

  AFNI_TLRC_BBOX_LAT = distance from midline to maximum left/right position
                        [default=80]
  AFNI_TLRC_BBOX_ANT = distance from AC to most anterior point in box
                        [default=80]
  AFNI_TLRC_BBOX_POS = distance from AC to most posterior point in box
                        [default=110]
  AFNI_TLRC_BBOX_INF = distance from AC-PC line to most inferior point in box
                        [default=55 for small box, 65 for big box]
  AFNI_TLRC_BBOX_SUP = distance from AC-PC line to most superior point in box
                        [default=85]

For example, "setenv AFNI_TLRC_BBOX_INF 100" lets you define the +tlrc box
to extend 100 mm below the AC-PC line.  Please note that virtually all the
3d* analysis programs (3dANOVA, etc.) do voxel-by-voxel analyses.  This fact
means that you will be unable to compare datasets created in +tlrc coordinates
with different box sizes.  Also, you will be unable to overlay regions from
the Talairach Daemon database onto odd-sized +tlrc datasets.  Therefore, I
recommend that these variables be used only when strictly needed, and with
caution.

Lastly, try hard not to mix TLRC datasets created with various box sizes in
the same session. Strange things may happen.

---------------------------
Variables: AFNI_ACPC_BBOX_*
---------------------------
These variables let you choose the size of the "ACPC Box", into which
+acpc datasets are transformed. If defined, they should be positive values,
in mm.  The 5 variables (any or all of which may be used) are:

  AFNI_ACPC_BBOX_LAT = distance from midline to maximum left/right position
                        [default=95]
  AFNI_ACPC_BBOX_ANT = distance from AC to most anterior point in box
                        [default=95]
  AFNI_ACPC_BBOX_POS = distance from AC to most posterior point in box
                        [default=140]
  AFNI_ACPC_BBOX_INF = distance from AC-PC line to most inferior point in box
                        [default=70]
  AFNI_ACPC_BBOX_SUP = distance from AC-PC line to most superior point in box
                        [default=100]

Check the example and heed ALL the warnings for the AFNI_TLRC_BBOX_*
variables above.

-------------------------
Variable: AFNI_TTRR_SETUP
-------------------------
Name of a file to be loaded to define Talairach Atlas Colors, when the Atlas
Colors control panel is first created.  Format is the same as a file created
from this control panel's "Save" button.  This filename should be an absolute
path (e.g., /home/yourname/.afni_ttcolors), since otherwise it will be read
relative to the directory in which you start AFNI.

-----------------------------
Variable: AFNI_LOAD_PRINTSIZE
-----------------------------
AFNI will print (to stderr) a warning that it is loading a large dataset from
disk.  This value determines the meaning of "large".  For example, setting
this variable to "40M" means that loading a dataset larger than 40 Megabytes
will trigger the warning.  If not given, the default value is 100 Megabytes.
The purpose of the warning is just to let the user know that it may be several
seconds before the dataset is loaded (e.g., before the images appear).  If you
don't want this warning at all, set this variable to the string "0".

-------------------------------
Variable: AFNI_ANALYZE_DATASETS
-------------------------------
If this YES/NO variable is not set to NO, then ANALYZE-format files with name
suffix .hdr will be read into the interactive AFNI program at startup, along
with standard .HEAD/.BRIK datasets.  That is, you have to set this variable
explicitly to NO if you don't want ANALYZE-format files to be automatically
recognized by the interactive AFNI program.  This variable does not affect
the ability of command line programs (3dSomething) to read .hdr input files.

-----------------------------
Variable: AFNI_ANALYZE_ORIENT
-----------------------------
ANALYZE .hdr files do not contain reliable information about the orientation
of the data volumes.  By default, AFNI assumes that these datasets are
oriented in LPI order.  You can set this variable to a different default
order.  See AFNI_ORIENT for details on the 3 letter format for this.
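
For example, to assume that .hdr volumes are stored in RAI order:
   setenv AFNI_ANALYZE_ORIENT RAI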

---------------------------------
Variable: AFNI_ANALYZE_AUTOCENTER
---------------------------------
ANALYZE .hdr files do not contain information about the origin of coordinates.
The default AFNI approach mirrors that of FSL - the outermost corner of the
first voxel in the dataset is set to (0,0,0).  If you set this variable
(AFNI_ANALYZE_AUTOCENTER) to YES, then instead (0,0,0) will be set to the
center of the 3D ANALYZE array.  This is the default that would be applied
if you read the ANALYZE array into program to3d.

----------------------------
Variable: AFNI_VERSION_CHECK
----------------------------
If this YES/NO variable is set to NO, then AFNI will not try to check if its
version is up-to-date when it starts.  Otherwise, it will try to check the
program version with the AFNI web server.

-------------------------
Variable: AFNI_MOTD_CHECK
-------------------------
Similarly, if this YES/NO variable is set to NO, then AFNI will not display
and fetch the AFNI "Message of the Day" at startup.  You can always check
the MOTD by using the Datamode->Misc menu.

-----------------------------------
Variable: AFNI_SLICE_SPACING_IS_GAP
-----------------------------------
This YES/NO variable is designed to patch a flaw in some DICOM files, where
the "Spacing Between Slices" attribute is erroneously set to the gap between
the slices, rather than the center-to-center slice distance specified in
the DICOM standard.  If this variable is set to YES, then the "Slice Thickness"
attribute will always be added to "Spacing Between Slices" to get the z voxel
size (assuming both attributes are present in the DICOM file).

To check if a DICOM file has this problem, you can read it into to3d with
the command "to3d suspect_file_name".  A warning will be printed to the
terminal window if attribute "Spacing Between Slices" is less than
attribute "Slice Thickness".  Another way to check is with a command like so

  dicom_hdr suspect_file_name | grep "Slice"

then check if the "Spacing Between Slices" and "Slice Thickness" values are
correct for the given acquisition.  We have only seen this problem in
GE-generated DICOM files, but that doesn't mean it won't occur elsewhere.

If this variable is set to NO, then this patchup will never be made.
The z voxel size will be set to "Spacing Between Slices" if present,
otherwise to "Slice Thickness".  This may be needed for some Phillips pulse
sequences, which can report "Spacing Between Slices" < "Slice Thickness".
In such a case, if this variable is not set, the wrong z voxel size will
be assigned!

If this variable is not set at all, AND if "Spacing Between Slices" is
less than 0.99 times "Slice Thickness", then the spacing will be treated
as a gap; that is, the z voxel size will be set to "Spacing Between
Slices" + "Slice Thickness".  Otherwise, the z voxel size will be set
to the larger of "Spacing Between Slices" and "Slice Thickness".

N.B.: "YES", "NO", and "not set" have 3 different sets of behavior!
      In the summary below, if a DICOM attribute isn't present, treat
      its value as zero:

  YES     => dz = Thickness + Spacing
  NO      => dz = Spacing if present, otherwise Thickness
  not set => if( Spacing > 0 && Spacing < 0.99*Thickness )
               dz = Thickness + Spacing
             else
               dz = MAX( Thickness , Spacing )

If neither attribute is present, then dz=1 mm, which is probably wrong.

Sorry about this complexity, but the situation with various manufacturers
is complicated, murky, and confusingly maddening.

---------------------------------------------------
Variables: AFNI_DICOM_RESCALE and AFNI_DICOM_WINDOW
---------------------------------------------------
DICOM image files can contain rescaling and windowing "tags".  If present,
these tags indicate that the values stored in the file should be affinely modified.
As far as I can tell, "rescale" means that the values should always be
modified, whereas "window" means the values should be modified for display
purposes.  If both are present, the rescale comes before window.  These
two YES/NO environment variables control whether the AFNI image input
functions (used in to3d) should apply the rescale and window tags.

It is my impression from the laconic, terse, and opaque DICOM manual that
window tags are intended for display purposes only, and that they aren't
needed for signal processing.  But you'll have to examine your own data to
decide whether to use these options -- manufacturers seem to differ.
Plus, I don't have that much experience with DICOM data from many different
sources.

-----------------------
Variable: IDCODE_PREFIX
-----------------------
AFNI stores with each dataset a unique string, called an "idcode".  An example
is "XYZ_MoNLqdNOwMNEYmKSBytfJg".  You can alter the first three characters of
the idcode with this variable.  For example,
  setenv IDCODE_PREFIX RWC
sets the first 3 characters of newly generated idcodes to be the initials of
AFNI's author.  I find this a handy way to "brand" my datasets.  Of course,
there are only 17576 possible 3 letter combinations (140608 if you allow for
case), so you should claim your prefix soon!!!

Idcodes are used to store links between datasets.  For example, when SUMA
sends a surface to AFNI, it uses the dataset's idcode to identify the dataset
with which the surface is to be associated.  Similarly, when AFNI sends a
color overlay to SUMA, it uses the surface idcode to indicate which surface
family the overlay is to be mapped onto.

-------------------------
Variable: AFNI_AGIF_DELAY
-------------------------
This is the time delay between frames when writing an animated GIF file from
an image viewer window.  The units are 100ths of seconds (centi-seconds!);
the default value is 20 (= 5 frames per second).
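For example, to slow the animation down to 2 frames per second (the value
is just an illustration):
  setenv AFNI_AGIF_DELAY 50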

-----------------------------
Variable: AFNI_MPEG_FRAMERATE
-----------------------------
This value sets the frame rate (per second) of the MPEG-1 output animation
from the image viewer window.  The legal values allowed by MPEG-1 are
24, 25, 30, 50, and 60; 24 (the slowest) is the default.  Note that the
MPEG-1 standard does NOT allow arbitrary framerates, only these listed.
To further slow down an MPEG-1 animation in AFNI, use the AFNI_ANIM_DUP
variable, described below.

-----------------------
Variable: AFNI_ANIM_DUP (editable)
-----------------------
This value sets the frame duplication factor for AGIF or MPEG animation
output.  If this value 'd' is between 1 and 99, then each frame (image)
will be written out 'd' times before being incorporated into the
movie file.  Note that AFNI_AGIF_DELAY can be used to slow down an
AGIF file more efficiently, but that there is no other way (within AFNI)
to slow down an MPEG file.  (Some MPEG movie players will let you slow
down the animation, but that's outside of AFNI's control.)
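For example, to write each frame 4 times, and so slow an MPEG animation
down by a factor of 4 (the value is just an illustration):
  setenv AFNI_ANIM_DUP 4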

-----------------------------
Variable: AFNI_STARTUP_SCRIPT
-----------------------------
If this is set, this is the name of an AFNI Script to run when AFNI first
starts.  (See the file README.driver for information about AFNI Scripts.)
If this is not set, it defaults to ".afni.startup_script".  The program first
tries to read this filename from the current working directory; if that fails,
then it tries to read from your home directory.  No error message is given
if neither file can be read.

You can save a file ".afni.startup_script" that will recreate the window
layout you currently have.  Use the "Datamode->Misc->Save Layout" button
and press "Set" on the popup control without entering any filename.  Instead
of a Layout file (cf. AFNI_LAYOUT_FILE above), you'll get a Script file
if you leave the filename blank or enter any filename with the string
"script" included (e.g., "coolstuff.script").

The capabilities of Script files are being expanded.  Not all features of
the AFNI window setup are currently save-able this way.

You can load a Script file interactively during an AFNI run by using the
button "Datamode->Misc->Run Script".  As a 'secret' option, if you enter
a line containing a blank in the filename dialog, that line will be
executed as a single command, rather than being used as a script filename.

------------------------------
Variable: AFNI_DEFAULT_OPACITY
------------------------------
This should be set to an integer from 1..9, and controls the default opacity
setting for the color overlay in image viewer windows.

-----------------------------
Variable: AFNI_DEFAULT_IMSAVE
-----------------------------
This should be set to the suffix of the image format to which you want to save
from an image viewer.  The suffixes AFNI knows about (as of 23 Jan 2003) are
 ppm = Portable PixMap format                            = cat
 jpg = Joint Photographic Experts Group (JPEG) format    = cjpeg
 gif = CompuServe Graphics Interchange Format (GIF)      = ppmtogif
 tif = Tagged Image File Format (TIFF)                   = ppm2tiff or pnmtotiff
 bmp = Windows Bitmap (BMP) format                       = ppmtobmp
 eps = Encapsulated PostScript format                    = pnmtops
 pdf = Portable Document Format                          = epstopdf
 png = Portable Network Graphics format                  = pnmtopng
The third column is the name of the external filter program that AFNI uses to
write the format.  If a filter is not present on your system, then that option
is not available.  Most of these filters are part of the netpbm package.
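For example, to make JPEG the default save format (assuming the cjpeg
filter is installed on your system):
  setenv AFNI_DEFAULT_IMSAVE jpg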

-----------------------------------------------------
Variables: AFNI_COLORSCALE_xx  for xx=01, 02, ..., 99
-----------------------------------------------------
These variables let you name files to be read in at AFNI startup to define
"continuous" colorscales for the "**" mode of the color pbar.  These files
will be looked for in the current directory when you start AFNI, or in your
home directory (if they aren't in the current directory).  A sample file:

  Yellow-Red-Blue
  1.0 #ffff00
  0.7 #ffaa00
  0.5 #ff0000
  0.3 #aa00aa
  0.0 #0000ff

The first line is the name of this colorscale, to go in the colorscale popup
chooser.  The succeeding lines each have a number and a color definition.
The numbers should be decreasing, and indicate the location on the colorscale.
The largest number corresponds to the top of the colorscale and the smallest
to the bottom - intermediate numbers denote intermediate locations.  The colors
at each location are specified using X11 notation (cf. "man XParseColor").
In this example, I'm using hexadecimal colors, in the form #rrggbb, where each
hex pair ranges from 00 to ff.  Another color format is "rgbi:rf/gf/bf",
where each value rf,gf,bf is a number between 0.0 and 1.0 (inclusive); for
example, yellow would be "rgbi:1.0/1.0/0.0".

Colors are interpolated (linearly in RGB space) between the break locations
given in the file.  There are actually 128 color locations on a colorscale.

An alternative format for the file is to omit the numbers indicating the
break locations.  In this case, the break locations will be taken to be
equally spaced.  For example:

  Yellow-Red-Blue
   #ffff00
   #ffaa00
   #ff0000
   #aa00aa
   #0000ff

This example is not exactly the same as the other one, since the breakpoints
are evenly spaced now (as if they had been given as 1.0, 0.75, 0.5, 0.25,
and 0.0).  With this format, if you want to manually specify all 128 colors,
you can do so, 1 color per line, remembering that the first line of the file
is taken to be the colorscale title (no blanks allowed in the title!).

---------------------------------
Variable: AFNI_COLORSCALE_DEFAULT
---------------------------------
If set, this is the name of the default colorscale to use at startup.  As a
special case, if you DO NOT want a colorscale to be set up by default at all,
then set this variable to the string "NO".
N.B.: This variable only applies if you are using AFNI with a TrueColor X11
visual.  If you are using a PseudoColor visual, then this variable is ignored!

----------------------------
Variable: AFNI_RESCAN_METHOD
----------------------------
On 28 Dec 2002, I modified the way that the "Rescan" operation in AFNI works
when re-reading datasets from sessions.  The old way would purge and replace
all datasets; the new way just adds datasets that didn't exist before.
There are some differences between these methods:
  "Replace" will detect changes to a dataset, so if you add a brick using
    3dTcat -glueto (for example), this will be reflected in AFNI.
  "Replace" will cause troubles if you are using a dataset in a plugin;
    the two main examples are volume rendering and the drawing plugin.
    This problem will occur even if you didn't do anything to the dataset
    on disk, since the internal pointer to the dataset will have been
    changed by the rescan, but the plugins won't know that.
  "Add" will not detect changes to a dataset on disk, but it also won't
    affect the pointers to the existing datasets.
You can choose to use the "Replace" method (the old style) by setting
this environment variable to the string "REPLACE".
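That is:
  setenv AFNI_RESCAN_METHOD REPLACE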

---------------------------
Variable: AFNI_OLD_PPMTOBMP
---------------------------
The old (before 21 Feb 2003) usage of netpbm program "ppmtobmp" was to
write a color image quantized to 255 colors.  The new usage is to write
a 24-bit image, which is thus not color-quantized.  If you want the old
behavior, set this environment variable to YES.  This setting (YES) will
be necessary if you have an older version of ppmtobmp in your path, which
doesn't support the "-bpp" option.

------------------------------
Variable: AFNI_1DPLOT_COLOR_xx
------------------------------
This variable lets you set the colors used in the 1dplot program (and other
similar graphs).  Here, "xx" is a number from "01" to "19".  The value of
the environment variable must be in the form "rgbi:rf/gf/bf", where each
color intensity (rf, gf, bf) is a number between 0.0 and 1.0.  For example,
"rgbi:1.0/1.0/0.0" is yellow.  By default, the first 4 colors are defined
as the equivalents of
  setenv AFNI_1DPLOT_COLOR_01 rgbi:0.0/0.0/0.0
  setenv AFNI_1DPLOT_COLOR_02 rgbi:0.9/0.0/0.0
  setenv AFNI_1DPLOT_COLOR_03 rgbi:0.0/0.7/0.0
  setenv AFNI_1DPLOT_COLOR_04 rgbi:0.0/0.0/0.9
which are black, red, green, and blue, respectively.  You can alter these
colors, or leave them unchanged and start defining colors at 05.  The largest
color number you define will be the last color index used; if more line colors
are needed, they will cycle back to color 01.  If you leave a gap in the
numbering (e.g., you define color 07 but not 05 or 06), then the undefined
colors will be fuliginous.

[Dec 2007] You can now specify the colors by using the special names 'green',
'red', 'blue', 'gold', and 'purple'.  Also, by using 3 or 6 digit hexadecimal
notation as in '#8888aa' for a blueish-gray color (6 digits) or '#0ac' for a
cyanish color (3 digits).  These are intended to make life a little simpler.

--------------------------
Variable: AFNI_1DPLOT_THIK (editable)
--------------------------
This numeric variable lets you control the thickness of lines drawn in the
1dplot-style windows.  The units are in terms of the width of the entire
plot, so that a value of 0.005 is 'reasonable'; 0.01 will be fairly thick
lines, and 0.02 will be too thick for most purposes.

----------------------------
Variable: AFNI_1DPLOT_IMSIZE
----------------------------
This numeric variable sets the image size (in pixels across the screen) of
images saved via the '-png' or '-jpg' options of 1dplot, or images saved
when giving a '.png' or '.jpg' filename from 1dplot-style graphs.  The default
value is 1024.  Values over 2048 may give odd looking results, and will
be palpably slower to render.

---------------------------------
Variable: AFNI_SIEMENS_INTERLEAVE
---------------------------------
The old (pre-DICOM) Siemens .ima image mosaic format sometimes stores the
multi-slice EPI data in correct spatial order and sometimes in correct
time acquisition order.  In the latter case, the images are stored in
a spatially-interleaved fashion.  As far as I know, there is no way to
tell this from the .ima file header itself.  Therefore, if you have a
problem with such files, set this variable to YES to un-interleave the
images when to3d reads them.  One way to tell if the images need to be
un-interleaved is to do
  afni -im fred.ima
then look at the images in an Axial image viewer.  If the slices make up
a single coherent volume, then they are NOT interleaved.  If the slices
look like they make up 2 separate brain volumes, then they need to be
un-interleaved, and you need to set this variable to YES.

-----------------------------
Variable: AFNI_TRY_DICOM_LAST
-----------------------------
When to3d tries to read an image file, it guesses the format from the
filename.  However, this doesn't always work.  In particular, DICOM files
don't have any fixed filename suffix or prefix.  If all else fails, to3d
normally tries to read a file as a DICOM file, and as a last resort, as
a flat binary file.  However, if a file is NOT a DICOM file, the DICOM
reading function will print out a lot of error messages, since there is
also no standard internal marker in all DICOM files that identifies them.
Most people don't like all these messages (perhaps hundreds per file),
even if the program then successfully reads their flat binary files.

If this YES/NO variable is set to YES, then the normal last-resort order
of reading described above is reversed.  If to3d can't read the file any
other way, it will try it as a flat binary file.  If that fails, then
DICOM will be the ultimate resort, instead of being the penultimate
resort that it is by default.  This may help elide some error messages.
However, if you have a DICOM file that is exactly 131072 bytes long
(for example), then it will be interpreted as a headerless 256x256 image
of shorts, instead of whatever it really is.  So only set this variable
to YES if necessary!

-----------------------------
Variable: AFNI_THRESH_BIGSTEP
-----------------------------
The AFNI threshold sliders (in the Define Overlay control panels and the
Render Dataset plugins) are divided into 10000 steps from bottom to top.
If you click in the trough or use the PageUp/PageDown keys, the default
action is to move the slider 10 of the steps at once.  (The up and down
arrow keys move 1 step at a time.)  You can change this big step from the
default of 10 to any value between 1 and 1000 by setting this environment
variable; for example
  setenv AFNI_THRESH_BIGSTEP 100
will move the slider 1% of its height per PageUp/PageDown key or mouse click.

--------------------------
Variable: AFNI_THRESH_AUTO (editable)
--------------------------
If this YES/NO variable is set to YES, then whenever you switch overlay datasets,
the function threshold slider will automatically change to some value that may
be appropriate for the values in the new dataset.  [This is for Ziad!]

------------------------------
Variable: AFNI_SNAPFILE_PREFIX
------------------------------
Image files saved with the "snapfile" (or "record to file") by default have
filenames of the form "S_000001.ppm".  The prefix "S" can be altered by
setting this environment variable; for example,
  setenv AFNI_SNAPFILE_PREFIX Elvis
will save snapfiles with names like "Elvis_000666.ppm".  You can view snapfiles
with the "aiv" ("AFNI Image Viewer") utility, the "xv" program, or many other
Unix utilities.

-------------------------------
Variable: AFNI_STARTUP_WARNINGS
-------------------------------
When the interactive AFNI program starts, it may pop up warnings about the
programming environment for which it was compiled.  At this time, there are
two such warning messages possible:
  LessTif:  AFNI will work with LessTif, but works better with Motif.
  Button-3: On Solaris 2.8, Button-3 popup menus don't work quite properly.
If you are tired of seeing these messages, set AFNI_STARTUP_WARNINGS to NO.

----------------------
Variable: AFNI_1D_TIME
----------------------
If this YES/NO variable is set to YES, then when a multicolumn .1D file is
read in as an AFNI dataset, the column variable is taken to be time, and
a time-dependent dataset is created.  The default is to create a bucket
dataset.  Note that each row is taken to be a separate 'voxel'.

-------------------------
Variable: AFNI_1D_TRANOUT
-------------------------
If this variable is set to YES, it affects the way 1D datasets are
written out from 3d* programs that are being used to process 1D files
as AFNI datasets.  If this variable is YES, AND if the output dataset
prefix ends in '.1D' or is the string '-' (meaning standard output),
then the output 1D file will be transposed and written so that the
time axis goes down the columns instead of across them.  If this
variable is NO, then the standard AFNI 1D-to-3D dataset convention is
followed: each row is a single voxel time series.  Example:
  3dDetrend -polort 1 -prefix - 1D:'3 4 5 4 3'\'
will write to the screen
           -0.8
            0.2
            1.2
            0.2
           -0.8
if AFNI_1D_TRANOUT is YES, but will write
 -0.8 0.2 1.2 0.2 -0.8
to stdout if AFNI_1D_TRANOUT is NO.

-------------------------
Variable: AFNI_1D_TIME_TR
-------------------------
If this is set, and AFNI_1D_TIME is YES, then this determines the TR (in
seconds) of a .1D file read in as an AFNI dataset.
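For example, to read multicolumn .1D files as time-dependent datasets with
a TR of 2.5 seconds (the TR value is just an illustration):
  setenv AFNI_1D_TIME    YES
  setenv AFNI_1D_TIME_TR 2.5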

------------------------
Variable: AFNI_3D_BINARY
------------------------
If this is set to YES, then .3D files are written by AFNI programs in
binary, rather than the default text mode.  Binary files will be more
compact (usually) and faster to read in.

--------------------------
Variable: AFNI_MAX_OPTMENU (editable)
--------------------------
This variable (default=255) sets the maximum number of entries allowed
in an AFNI "option menu" -- these are the buttons that popup a menu
of values from which to choose, and which also let you popup a text
list chooser by right-clicking in the menu's label.  (Example: the
sub-brick option menus "Anat", "Func", "Thr" on the "Define Overlay"
control panel.)

Some computer systems may crash when an option menu gets too big.
That's why there is a default limit in AFNI of 255 entries.  However,
if you have a bucket dataset with more than 255 sub-bricks, this makes
it impossible to view the later data volumes.  If this problem arises,
you can try setting this environment variable to a larger limit (e.g.,
99999 would take care of all currently imaginable cases).
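That is:
  setenv AFNI_MAX_OPTMENU 99999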

---------------------------------
Variable: AFNI_VALUE_LABEL_DTABLE
---------------------------------
This variable sets a filename that holds a default value-label table
for the Draw Dataset plugin.  A sample file is shown below:

   <VALUE_LABEL_DTABLE
    ni_type="2*String"
    ni_dimen="3" >
    "1" "elvis"
    "2" "presley"
    "3" "memphis"
   </VALUE_LABEL_DTABLE>

The 'ni_dimen' attribute is the number of value-label pairs; in the
  above example it is 3.
Each value-label pair is shown on a separate line.  The values and
  labels are strings, enclosed in quote characters.  There should be
  exactly as many value-label pairs as specified in 'ni_dimen'.
If you really want to put a double quote character " in a label,
  you can enclose the label in single forward quotes ' instead.
When you 'Save' a drawn dataset from the Draw Dataset plugin, the
  .HEAD file attribute VALUE_LABEL_DTABLE will contain a table in
  exactly this XML-based format.
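For example, if you keep such a table in a file (the filename below is
hypothetical), you can make it the default this way:
  setenv AFNI_VALUE_LABEL_DTABLE ~/my_labels.niml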

-------------------------------
Variable: AFNI_STROKE_THRESHOLD (editable)
-------------------------------
If you press Button-1 in an image window, and then move it left or
right ("stroke it") before releasing the button, the grayscale mapping
changes in the same way as if you pressed the 'c' button up and the 'b'
button down.  This variable sets the threshold for the stroking movement
size in pixels; a movement of this number of pixels rightwards corresponds
to one press of 'c' up and 'b' down, while a leftwards movement is like
one press of 'c' down and 'b' up.  Larger movements make larger adjustments.

A larger threshold makes the stroking less sensitive; a smaller threshold
makes it more sensitive.  The value you choose will depend on your personal
taste.  The default is 32 pixels, which is the flavor I prefer.  If you set
this variable to 0, then the stroking function is disabled.

-------------------------------
Variable: AFNI_STROKE_AUTOPLOT (editable)
-------------------------------
If this variable is set to YES, then the graymap-versus-data value plot
(manually controlled by "Display Graymap Plot") is automatically popped
up when the grayscale mapping is altered by using the stroking feature
described above.  When the stroke is finished, the plot will pop down.
N.B.: when the 'Draw Dataset' plugin is active, this option is disabled
temporarily.

-----------------------------
Variable: AFNI_IMAGE_MINTOMAX (editable)
-----------------------------
If this variable is set to YES, then image viewer windows will be set
to the "Min-to-Max" state rather than the default "2%-to-98%" state
when they are opened.  If you set this in the "Edit Environment"
control, it only affects image viewer windows opened after that point.

----------------------------
Variable: AFNI_IMAGE_CLIPPED (editable)
----------------------------
If this variable is set to YES, then image viewer windows will be set
to the "Clipped" state rather than the default "2%-to-98%" state
when they are opened.  If you set this in the "Edit Environment"
control, it only affects image viewer windows opened after that point.

----------------------------
Variable: AFNI_IMAGE_CLIPBOT (editable)
----------------------------
In the "Clipped" mode, the top level of the grayscale image is computed
as 3.11 times the 'cliplevel' as computed by the 3dClipLevel algorithm.
The bottom level is then a fraction of this top level -- by default, the
fraction is 0.25, but you can change this default by setting this variable
to a value between 0.0 and 0.5 (inclusive).  You can also use variable
AFNI_IMAGE_CLIPTOP to scale the default top level -- this variable can take
values between 0.6 and 1.9 (inclusive) -- the default is 1.0.
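For example, to lower the bottom level and raise the top level a little
(the values are just illustrations, within the allowed ranges):
  setenv AFNI_IMAGE_CLIPBOT 0.1
  setenv AFNI_IMAGE_CLIPTOP 1.2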

--------------------------------
Variable: AFNI_IMAGE_GLOBALRANGE (editable)
--------------------------------
If this variable is set to YES, then the image viewer windows will be
set to scale the bottom gray level to the minimum value in the 3D
volume and the top gray level to the maximum value in the 3D volume.
This setting overrides the "Min-to-Max" and "2%-to-98%" settings in
the "Disp" control panel.  This setting also applies to all image
viewers.  If you set this in the "Edit Environment" control, it will
apply to all open image viewers immediately, as well as to any image
viewers opened later.
  It is important to realize that if you use the 'Display Range'
popup to set the bot-top range for the grayscale, these settings
will override the global range UNTIL you switch datasets or switch
sub-bricks within a dataset.  At that point, the global range for
the new volume will be enforced.  This change can be confusing.
Therefore, the info label beneath the slider shows the source of
the bot-top grayscale values:
  [2%-98%]  = from the 2% to 98% points on the slice histogram
  [Min2Max] = from the 0% to 100% points on the slice histogram
  [Glob]    = set from the entire volume min and max values
  [User]    = set by the user from 'Display Range'
  absent    = not applicable (e.g., underlay image is RGB)
The popup 'hint' for the grayscale bar shows the current values
of the bot-top range, if you want to know what numbers correspond
to the image at which you are gazing so fondly.
  Finally, note that when a montage is built, the number-to-grayscale
algorithm is applied to each slice separately, and then the montage
is assembled.  For [2%-98%] and [Min2Max], this fact means that each
slice will (probably) have a separate grayscale conversion range.

----------------------------
Variable: AFNI_DRAW_UNDOSIZE (editable)
----------------------------
This variable sets the size (in units of Megabytes) of the Undo/Redo
buffer in the Draw Dataset plugin.  The default value is 6.  If you
are short on memory, you could set this to 1.  If you are running out
of undo levels, you could set this to a larger value; however, this
would only be needed if you are drawing huge 3D swaths of data at a
time (e.g., using the 3D sphere option with a large radius).

---------------------
Variable: AFNI_SPEECH (editable)
---------------------
If this YES/NO variable is set to NO, then the AFNI speech synthesis
is disabled.  At the current time (Nov 2003), only the Mac OS X 10.3
version of AFNI uses speech synthesis in any way.  And that's just
for fun.

------------------------------
Variable: AFNI_IMAGE_ZEROCOLOR
------------------------------
This variable, if set to the string name of one of the colors in the
color chooser menus (e.g., "Black"), will result in voxels whose value
is 0 being set to this color in the slice viewing windows (except when
viewing RGB images).  The main function is to avoid having to use the
"Choose Zero Color" menu all the time, especially when you use the "Swap"
feature to invert the grayscale map (e.g., to make a T2 weighted image
look sort of like a T1 weighted image).

----------------------------
Variable: AFNI_MPEG_DATASETS
----------------------------
This variable can be used to allow MPEG files to be read in as AFNI datasets.
Such datasets are inherently 3 dimensional.  How they will be organized inside
AFNI depends on the setting of this variable.  The options are:
  SPACE = the frame sequence number will be the z-axis
  TIME  = the frame sequence number will be the time axis
  NO    = MPEG files won't be read as AFNI datasets
          (they can still be read as images into to3d, aiv, etc.)
If this variable is NOT set to anything, then it is the same as SPACE.

MPEG filenames input to AFNI programs (as sources of images or as datasets)
must end in ".mpg", ".MPG", ".mpeg", or ".MPEG".  MPEG datasets will be read
so that the individual images are displayed in an Axial image window.

---------------------------
Variable: AFNI_MPEG_GRAYIZE
---------------------------
If this YES/NO variable is set to YES, then MPEG files read into AFNI,
to3d, or aiv will be converted to grayscale, even if the images in
the movie are in color.

--------------------------
Variable: AFNI_VIDEO_DELAY (editable)
--------------------------
This is the number of milliseconds that AFNI waits between drawing new
images when the 'V' or 'v' keys are pressed in an image (or graph)
window.  The default value is 1, which is faster than video can be
displayed anyway.  Set this to a larger value (e.g., 100) to slow down
the image redraw rate.
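For example,
  setenv AFNI_VIDEO_DELAY 100
will redraw a new image about 10 times per second.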

----------------------------
Variable: AFNI_IMAGE_ENTROPY (editable)
----------------------------
If this numeric variable is set, this is the entropy of an image below
which the 2%-98% image scaling will be disabled, and min-to-max will
be used instead.  The units are bits/byte; a useful threshold seems to
be in the range (0.2,0.5).  For images that only have a few values
different from 0, the 2%-98% scaling can produce weird artifacts.  Such
images will also have a very low entropy.  Since this variable can be
changed interactively from the Edit Environment controls, you can play
with it to see how it affects your images.
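For example, a value in the middle of that useful range:
  setenv AFNI_IMAGE_ENTROPY 0.35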

----------------------------
Variable: AFNI_LOGO16 (etc.)
----------------------------
If this variable is set to YES, then the 'AFNI' background logo used in
the controller and image windows will be enabled.  By default, it is off.
You can control the colors of this logo by the following variables:
  AFNI_LOGO16_FOREGROUND_x
  AFNI_LOGO16_BACKGROUND_x
where 'x' is 'A', 'B', 'C', etc., for the various controller labels.
If AFNI_LOGO16_BACKGROUND_x isn't set, then AFNI_LOGO16_BACKGROUND
(with no suffix) is checked as an alternate.  The values of these
variables should be the names of one of the labels on the color chooser
menus (e.g., the "Xhairs Color" menu).  You can use these variables to
make the windows for the various controllers somewhat distinct in
appearance.  If these color variables are not set at all, then AFNI
uses some colors of my choosing for this purpose.

----------------------------------
Variable: AFNI_COLORIZE_CONTROLLER
----------------------------------
If this variable is set to YES, then the background of the AFNI controllers
and image viewers will be colorized.  The default state is that they are not
colorized.

--------------------------
Variable: AFNI_THRESH_LOCK (editable)
--------------------------
This variable can be used to lock the Define Overlay threshold sliders
together.  There are three possibilities:
  NO (the default) => each controller's slider is independent
  VALUE            => the numerical value on each slider will be the same
  P-VALUE          => the p-value for each slider will be the same
This locking only applies to AFNI controllers that are Lock-ed together
(cf. AFNI_ALWAYS_LOCK and the Define Datamode->Lock menu).  If p-values
are locked, this lock will also only apply to controllers whose current
Threshold sub-brick has a known statistical distribution.

When you drag a locked threshold slider, the other locked sliders will only
change when you release the mouse button -- they won't slide in tandem, but
will just jump to the final value.

------------------------
Variable: AFNI_PBAR_LOCK (editable)
------------------------
If this variable is set to YES, then the Define Overlay color bars
(the "pbars") of AFNI controllers that are Lock-ed together will be
coordinated.  Changes to one locked pbar will be reflected in the
others immediately.

----------------------------
Variable: AFNI_IMAGE_ZOOM_NN (editable)
----------------------------
If this variable is set to YES, then image viewer windows will use
nearest neighbor interpolation for zooming.  The default is linear
interpolation, which produces smoother-looking images.  However, some
people want to see the actual data values represented in the window,
not some fancy-schmancy interpolated values designed to look good but
in fact making a mockery of a sham of a mockery of a travesty of two
mockeries of a sham of reality.

------------------------------
Variable: AFNI_DISABLE_CURSORS
------------------------------
If this variable is set to YES, then AFNI will not try to change the X11
cursor shape.  This feature is available because some X11 installations'
choices of cursor and AFNI's choices don't work well together.  If you
have unpleasant cursors in AFNI (e.g., an X), try setting this variable
to YES.

-----------------------------
Variable: AFNI_SLAVE_FUNCTIME (editable)
-----------------------------
When the underlay and overlay datasets both are time-dependent, switching
the time index will change both the underlay and overlay sub-bricks.  If
you want the time index control to change ONLY the underlay sub-brick,
then set this variable to NO.

----------------------------
Variable: AFNI_SLAVE_THRTIME (editable)
----------------------------
When the underlay and overlay datasets both are time-dependent, switching
the time index will change both the underlay and overlay sub-bricks, but
NOT the threshold sub-brick.  If you want the time index control to change
the threshold sub-brick, then set this variable to YES.

--------------------------------
Variable: AFNI_SLAVE_BUCKETS_TOO
--------------------------------
Set this to YES if you want to make changing the time index in the underlay
dataset change the sub-brick index in the overlay dataset even when the
overlay is a 'bucket' dataset without a time axis.

----------------------------
Variable: AFNI_CLICK_MESSAGE
----------------------------
If this variable is set to NO, then the string
  [---------------]
  [ Click in Text ]
  [ to Pop Down!! ]
will NOT be appended to the very first popup message window that AFNI
creates.  This message was added because some people do not realize
that the way to get rid of these popups (before they vanish on their
own after 30 seconds) is to click in them.  You know who you are.
However, if you are advanced enough to read this file, then you probably
aren't one of THEM.

-----------------------------
Variable: AFNI_X11_REDECORATE (editable)
-----------------------------
By default, AFNI tries to change some of the "decorations" (control buttons)
on some of the windows it creates (e.g., removing resize handles).  If you
don't want this to happen, set this variable to NO.  This variable only
has an effect on windows created AFTER it is set, so if you change this
interactively in the Edit Environment plugin, it will not affect existing
windows.  Normally, you would want to set this in your .afnirc file.

-------------------------------
Variable: AFNI_IMAGE_SAVESQUARE
-------------------------------
YES/NO: Forces images (from the image view "Save" button) to be saved with
square pixels, even if they are stored with nonsquare pixels.

-------------------------------
Variable: AFNI_BUCKET_LABELSIZE
-------------------------------
THIS VARIABLE HAS BEEN REMOVED FROM AFNI.

Formerly, it was used to set the width of the "ULay", "OLay", and "Thr" menu
choosers on the "Define Overlay" control panel.  As of 03 May 2005, AFNI now
calculates the default width based on the longest sub-brick label input
for each dataset.

-------------------------
Variable: AFNI_MAX_1DSIZE
-------------------------
Sets the maximum size (in bytes) of each 1D file that will be automatically
loaded when AFNI starts.  The default is 123 Kbytes.  The intention is to
prevent loading of very large files that are not intended to be used for
graphing/FIMming purposes.

---------------------------
Variable: AFNI_TITLE_LABEL2 (editable)
---------------------------
If this YES/NO variable is YES, then the AFNI window titlebars will show
the 'label2' field from the AFNI dataset .HEAD file, rather than the
dataset filename.  If the label2 field is set to a nontrivial value,
that is.  You can set the label2 field with the 3drefit command.
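For example (the label string and dataset name are just illustrations):
  3drefit -label2 'Subject 17 EPI' fred+orig
  setenv AFNI_TITLE_LABEL2 YES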

-------------------------------
Variable: AFNI_SHOW_SURF_POPUPS
-------------------------------
If this YES/NO variable is set to YES, then when AFNI receives surface
nodes, triangles, or normals from SUMA, a popup message will be displayed.
Otherwise, the message will be sent to stderr (on the terminal window).

-------------------------------
Variable: AFNI_KILL_SURF_POPUPS
-------------------------------
If this YES/NO variable is set to YES, then when AFNI receives surface
nodes, triangles, or normals from SUMA, no messages will be displayed,
either in a popup or stderr.  Note that if errors occur, popups will
still be shown; this just turns off the normal information messages.
N.B.: If AFNI_SHOW_SURF_POPUPS is YES, then it wins over
      AFNI_KILL_SURF_POPUPS being YES.  If neither is set, then
      messages are displayed to stderr.

-----------------------------
Variable: AFNI_EDGIZE_OVERLAY (editable)
-----------------------------
If set to YES, then the color overlays in the image windows will only
have their edge pixels displayed.  That is, each 'blob' will be hollowed
out, leaving only its edges.  If you do this, you probably want to make
the color overlay opaque, so that the results are easily seen.

--------------------------
Variable: AFNI_NIFTI_DEBUG (editable)
--------------------------
This integral variable determines the debug level used by the nifti_io
library functions.  If set to 0, only errors are reported by the library.
The maximum debug level used is currently 4.  Note that if this is changed
from within AFNI, a 'Rescan: This' operation should probably be performed,
which will force a re-reading of the datasets and so force an elicitation
of the NIfTI debug messages (for .nii files, that is).

--------------------------
Variable: AFNI_NIFTI_NOEXT
--------------------------
When writing a '.nii' (or '.nii.gz') file from an AFNI program, normally
a NIfTI-1.1 extension field with some extra AFNI header information is
written into the output file.  If you set this variable to YES, then
this extension is not written, which will make the output be a 'pure'
NIfTI-1.1 file.  Only use this if absolutely necessary.  You can also
use the 'nifti_tool' program to strip extension data from a NIfTI-1.1
dataset file.
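For example, using the '-Dname=value' command line trick (the dataset
name here is hypothetical):
  3dAFNItoNIFTI -DAFNI_NIFTI_NOEXT=YES fred+orig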

---------------------------
Variable: AFNI_OVERLAY_ZERO (editable)
---------------------------
If set to YES, this variable indicates that voxels in the overlay dataset
that have the numerical value of 0 will get colored when the Inten color
scale on the Define Overlay panel indicates that 0 has a color that isn't
"none".  The default way that AFNI works is NOT to colorize voxels that
are 0, even if they should otherwise get a color.

---------------------------
Variable: NIML_TRUSTHOST_xx
---------------------------
These environment variables ('xx' = '01', '02', ..., '99') set the names
and/or addresses of external computer hosts to trust with NIML TCP/IP
connections, which are how AFNI and SUMA communicate.  It should only be
necessary to use these if you are using AFNI and SUMA on different
machines.  Connections from machines not on the trusted list will be
rejected, for the sake of security.  The 'localhost' (127.0.0.1) address
and the private network addresses 192.168.0.* are always trusted.
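For example (the hostname is hypothetical):
  setenv NIML_TRUSTHOST_01 suma.host.example.edu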

---------------------------
Variable: AFNI_DONT_LOGFILE
---------------------------
Most AFNI programs write a copy of their command line to a file in your
home directory named ".afni.log".  If you do NOT want the log to be
kept, set this environment variable to YES.  The purpose of the log
is for you to be able to look back and see what AFNI commands you used
in the past.  However, if you are doing a vast number of commands inside
a script, the log file might eventually become gigantic (the Kevin Murphy
effect).

-------------------------
Variable: AFNI_WRITE_NIML
-------------------------
If this variable is set to YES, then AFNI .HEAD files will be written in
the new NIML (XML subset) format, rather than the 'classic' format.  The
volumetric image data is still in the pure binary .BRIK file, not XML-ified
in any way.  At present (Jun 2005) this format is experimental, but will
someday soon become the default.

---------------------------------
Variable: AFNI_ALLOW_MILLISECONDS
---------------------------------
The TR value (time step) in 3D+time datasets created with to3d can be flagged
as being in units of milliseconds (ms) or seconds (s).  This situation is
unfortunate, as some AFNI programs assume that the units are always s, which
doesn't work well when the TR is actually in ms.  On 15 Aug 2005, AFNI dataset
I/O was modified to only write out TR in s units, and to convert ms units
to s units on input.  If you absolutely need to store TR in ms, then you
must set this environment variable to YES.  I strongly recommend against
such a setting, but recall the AFNI philosophy: "provide mechanism, not
policy" -- in other words, if you want to shoot yourself in the foot, go
right ahead.  This variable is just the safety on the revolver.

------------------------------
Variable: AFNI_TTATLAS_CAUTION (editable)
------------------------------
If this YES/NO variable is set to NO, then the warning about the potential
errors in the "Where am I?" popup will not appear.  This is purely for
cosmetic purposes, Ziad.

--------------------------
Variable: AFNI_AUTO_RESCAN
--------------------------
If this YES/NO variable is set to YES, then the interactive AFNI program
will rescan all session directories every 15 seconds for new datasets.
Basically, this is just a way for you to avoid pressing the 'Rescan'
buttons.  Note that if AFNI_AUTO_RESCAN is enabled, then the rescan
method will be 'Add', not 'Replace', no matter what you set variable
AFNI_RESCAN_METHOD to.

-------------------------------
Variable: AFNI_RESCAN_AT_SWITCH
-------------------------------
If this YES/NO variable is set to YES, then the interactive AFNI program
will rescan all session directories every time you click on either of the
'Overlay' or 'Underlay' buttons. Basically, this is just another way for
you to avoid pressing the 'Rescan' buttons.  (Unlike with AFNI_AUTO_RESCAN,
the AFNI_RESCAN_METHOD settings are respected.)

--------------------------
Variable: AFNI_WEB_BROWSER
--------------------------
This variable should be set to the full executable path to a Web browser,
as in
  setenv AFNI_WEB_BROWSER /usr/bin/mozilla
If it is not set, AFNI will scan your path to see if it can find a browser,
looking for "firefox", "mozilla", "netscape", and "opera" (in that order).
If a browser is found, or set, then the 'hidden' popup menu (in the blank
square to the right of the 'done' button) will have a menu item to open the browser.

----------------------------
Variable: AFNI_JPEG_COMPRESS
----------------------------
This variable determines the compression quality of JPEG files saved in
the AFNI GUI and 3dDeconvolve. Its value can be set to an integer from
1 to 100.  If not set, the default value is 95.
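For example, to trade a little image quality for smaller files (the value
is just an illustration):
  setenv AFNI_JPEG_COMPRESS 80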

---------------------------
Variable: AFNI_NLFIM_METHOD
---------------------------
Can be used to set the optimization method used in the NLfit plugin
(not in 3dNLfim).  The methods available are
  SIMPLEX (the default)
  POWELL  (the NEWUOA method)
  BOTH    (use both methods, choose the 'best' result)

----------------------------
Variable: AFNI_OVERLAY_ONTOP
----------------------------
If this variable is set to YES, then the 'Overlay' button will be above
the 'Underlay' button on the AFNI control panel.  The default, from the
olden days, is to have the 'Underlay' button above the 'Overlay' button,
which some people find confusing.

-----------------------------
Variable: AFNI_DATASET_BROWSE (editable)
-----------------------------
If this variable is set to YES, then when you 'browse' through a dataset
chooser ('Overlay' or 'Underlay' list) with the mouse or arrow keys, then
as a dataset is selected in the list, AFNI will immediately switch to
viewing that dataset.  This can be convenient for scrolling through datasets,
but can also consume memory and CPU time very quickly.

------------------------------
Variable: AFNI_DISABLE_TEAROFF
------------------------------
If this variable is set to YES, then the AFNI GUI will not allow popup or
popdown menus to be 'torn off'.  The default is to enable tear off for most
menus, but this may cause bad things on some platforms (like program death).

-------------------------------
Variable: AFNI_PLUGOUT_TCP_BASE
-------------------------------
This integer will override the base TCP port used by afni to listen for
plugouts.  This allows multiple instances of afni on one machine, where
each can listen for plugouts.  Valid port numbers are 1024..65535.
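For example (the port number is arbitrary, within the valid range):
  setenv AFNI_PLUGOUT_TCP_BASE 8000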

-----------------------------------
Variable: AFNI_IMAGE_TICK_DIV_IN_MM (editable)
-----------------------------------
If this YES/NO variable is set to YES, then the Tick Div. value in an
image window will be interpreted as a separation distance, in mm, as
opposed to the number of tick divisions along each edge.  In the YES case,
a larger value would produce fewer ticks, as they would be farther apart.
In the NO case, a larger value will produce more tick marks.  Tick marks
are controlled from the Button 3 popup menu attached to the grayscale
intensity bar in an image viewer.

----------------------------
Variable: AFNI_IMAGRA_CLOSER
----------------------------
If this YES/NO variable is set to YES, then when you click in an 'Image'
or 'Graph' button for a viewer window that is already open (so the button
is displayed in inverted colors), then the corresponding viewer window
will close.  The default action is to try to raise the viewer window to
the front, but some window managers (I'm looking at you, FC5) don't allow
this action.  So this provides a way to kill the window, at least, if you've
lost it in desktop hell somewhere.

-------------------------
Variable: AFNI_DECONFLICT
-------------------------
When AFNI programs write datasets to disk, they will check whether the
output filename already exists.  If it does, the AFNI programs will act
based on the possible values of AFNI_DECONFLICT as follows:
    NO/<unset>  : do not modify the name or overwrite the file, but
                  inform the user of the conflict, and exit
    YES         : modify the filename, as stated below
    OVERWRITE   : do not modify the filename, overwrite the dataset
If AFNI_DECONFLICT is YES, then the filename will be changed to one that
does not conflict with any existing file.  For example 'fred+orig' could
be changed to 'fred_AA1+orig'.

The default behavior is 'NO': not to deconflict, but to exit.
Some programs supply their own default.
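For example, to force overwriting for a single run of a program (the
command here is just an illustration):
  3dcalc -DAFNI_DECONFLICT=OVERWRITE -a fred+orig -expr 'a*2' -prefix fred2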

--------------------------
Variable: AFNI_SEE_OVERLAY
--------------------------
If this variable is set to YES, then the 'See Overlay' button will be turned
on when a new AFNI controller is opened.

------------------------------
Variable: AFNI_INDEX_SCROLLREV
------------------------------
If this variable is set to YES, then the default direction of image slice
and time index scrolling will be reversed in the image and graph viewers,
respectively.

-----------------------------
Variable: AFNI_CLUSTER_PREFIX
-----------------------------
This variable sets the prefix for 'Save' timeseries 1D files from the
'Clusterize' report panel.  The default string is "Clust".  The value
of this variable will be loaded into the cluster Rpt window text entry
field, and the prefix can be edited there by the user when it comes
time to save files.

-----------------------------
Variable: AFNI_CLUSTER_SCROLL
-----------------------------
If this variable is NO, then the 'Clusterize' report will not be given
scrollbars.  The default is to give it scroll bars (i.e., YES).

-----------------------------
Variable: AFNI_CLUSTER_REPMAX  (editable)
-----------------------------
This numeric variable (between 10 and 9999, inclusive) sets the maximum
number of clusters that will be reported in a 'Clusterize' report panel,
if scroll bars are turned off by AFNI_CLUSTER_SCROLL. The default value
is 15.  If scroll bars are turned on, then the maximum number of clusters
shown defaults to 999, but can be increased to 9999 if you are completely
mad, or are named Shruti.  If scroll bars are turned off, then you probably
don't want to make this very big, since the report window would become
taller than your monitor, and that would be hard to deal with.

----------------------------
Variable: AFNI_STRLIST_INDEX
----------------------------
If this variable is set to NO, then the new [12 Oct 2007] 'Index' selector
at the bottom of a string-list chooser (e.g., the 'Overlay' button popup
window) will NOT be shown.

-----------------------------
Variable: AFNI_HISTOG_MAXDSET
-----------------------------
If this variable is set to a numeric value between 4 and 9 (inclusive),
then the number of Source datasets in the 'Histogram: Multi' plugin
will be set to this value.  The default number of Source datasets is 3 --
this variable allows you to increase that setting.

----------------------------
Variable: AFNI_SIGQUIT_DELAY
----------------------------
This numeric variable (between 1 and 30) sets the number of seconds AFNI
will delay before exiting after a SIGQUIT signal is delivered to the process.
The default delay is 5 seconds.  If you deliver a SIGALRM signal, AFNI will
exit immediately.  If you don't know what Unix signals are, then don't pay
any attention to this subject!

--------------------------------
Variable: AFNI_NEVER_SAY_GOODBYE
--------------------------------
If this variable is set to YES, then the AFNI 'goodbye' messages won't
be printed when the program exits.  For the grumpy people out there
(you know who I'm talking about, don't you, Daniel?).

--------------------------------
Variable: AFNI_NEWSESSION_SWITCH
--------------------------------
If this variable is set to NO, then AFNI will not automatically switch
to a new session after that session is read in using the 'Read Sess'
button on the Datamode control panel.

-------------------------------
Variable: AFNI_FLASH_VIEWSWITCH
-------------------------------
If you switch sessions, underlay, or overlay, it can happen that the
coordinate system might be forced to switch from +orig to +tlrc
(for example) because there is no dataset to view in the +orig system.
Normally, AFNI flashes the view switch buttons on and off a few times
to let you know this is happening (this is the Adam Thomas feature).
You can turn this feature off, by setting this variable to NO.

-------------------------
Variable: AFNI_SHELL_GLOB
-------------------------
'Globbing' is the Unix jargon for filename wildcard expansion.  AFNI programs
do globbing at various points, using an adaptation of a function from the
csh shell.  This function has been reported to fail on Mac OS X Server 10.5
on network mounted directories.  If you set this variable to YES, then globbing
will instead be done using the shell directly (via popen and ls).  You should
only set this variable if you really need it, and understand the issue!
[For Graham Wideman]

----------------------------------
Variable: AFNI_IGNORE_BRICK_FLTFAC
----------------------------------
Under some very rare circumstances, you might want to ignore the brick scaling
factors.  Set this variable to YES to do so.  WARNING: this is dangerous, so
be sure to unset this variable when you are done.  Sample usage:
  3dBrickStat -DAFNI_IGNORE_BRICK_FLTFAC=YES -max fred+orig

----------------------------------------------------------------------
--- variables specific to NIML I/O
----------------------------------------------------------------------

-----------------------------------
Variable: AFNI_NIML_DEBUG
-----------------------------------
This integer sets the debugging level in some niml I/O routines, particularly
those in thd_niml.c.  Currently used values range from 0 to 3.

-----------------------------------
Variable: AFNI_NSD_ADD_NODES
-----------------------------------
If this YES/NO variable is set to YES, then when a NI_SURF_DSET dataset
is written to disk, if it has no node list attribute, a default list will
be created.

-----------------------------------
Variable: AFNI_NSD_TO_FLOAT
-----------------------------------
If this YES/NO variable is set to NO, then any necessary conversion of
NI_SURF_DSET datasets to type float will be blocked.  Otherwise, all such
datasets will be written as float.

-----------------------------------
Variable: AFNI_NIML_TEXT_DATA
-----------------------------------
If this YES/NO variable is set to YES, then NI_SURF_DSET datasets will be
written with data in text format.  Otherwise, data will be in binary.

-----------------------------
Variable: AFNI_SIMPLE_HISTORY
-----------------------------
A few programs (particularly 3dcalc) create a complicated history note
in the output dataset header, by including the history of all inputs.
This history can become inordinately long and pointless when 3dcalc is
run in a long chain of calculations.  Setting this variable to YES will
turn off this accumulation of all histories, and may make your dataset
headers more manageable.

-------------------------------------------
Variable: AFNI_NIML_BUFSIZE or NIML_BUFSIZE
-------------------------------------------
This variable sets the number of bytes used as a memory buffer for
NIML dataset input.  If you are inputting gigantic headers or gigantic
String data components (I'm looking at YOU, Ziad), then you may want
to increase this past its default size of 255*1024=261120.
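For example, to quadruple the default buffer size:
  setenv AFNI_NIML_BUFSIZE 1044480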

----------------------------------------------------------------------
--- END: variables specific to NIML I/O
----------------------------------------------------------------------

----------------------------
Variable: AFNI_GIFTI_VERB
----------------------------
This integer sets the verbose level in the gifti I/O library routines.
Level 1 is the default, 0 is "quiet", and values go up to 7.

----------------------------
Variable: AFNI_DATASETN_NMAX
----------------------------
This numeric variable, if set, lets you expand the number of dataset
lines in the 'Dataset#N' plugin from the default of 9 up to a max of 49.
(This one is for Shruti.)

---------------------------------
Variable: AFNI_WRITE_1D_AS_PREFIX
---------------------------------
If this variable is set to YES, then 1D formatted files will be written
to the file based on the given prefix, rather than to an automatic 1D file.
This allows writing surface files to NIfTI format, for example.

=============================================
| Robert W Cox, PhD                         |
| Scientific and Statistical Computing Core |
| National Institute of Mental Health       |
| National Institutes of Health             |
| Department of Health & Human Services     |
| United States of America                  |
| Earth, United Federation of Planets       |
=============================================

############################################################################
##### Variables that Specifically Affect the Operation of 3dDeconvolve #####
############################################################################

-----------------------------------
Variable: AFNI_3dDeconvolve_GOFORIT
-----------------------------------
If this variable is set to YES, then 3dDeconvolve behaves as if you used the
'-GOFORIT' option on the command line -- that is, it will continue to run
even if it detects serious (but non-fatal) problems with the analysis setup.

--------------------------------
Variable: AFNI_3dDeconvolve_NIML
--------------------------------
3dDeconvolve outputs the regression matrix 'X' into a file formatted in
the 'NIML' .1D format -- with an XML-style header in '#' comments at the
start of the file.  If you DON'T want this format, just plain numbers,
set this variable to NO.

----------------------------------
Variable: AFNI_3dDeconvolve_extend
----------------------------------
If you input a stimulus time series (via the -stim_file option) to
3dDeconvolve that is shorter than needed for the regression analysis, the
program will normally print a warning message and extend the time series
with zero values to the needed length.  If you would rather have the program
stop if it detects this problem (the behavior before 22 Oct 2003), then set
this environment variable to NO.

---------------------------------
Variable: AFNI_3dDeconvolve_nodup
---------------------------------
If this variable is set to YES, then if the 3dDeconvolve program detects
duplicate input stimulus filenames or duplicate regressors, the program
will fail (with an error message) rather than attempt to continue.

-----------------------------------------
Variable: AFNI_3dDeconvolve_nodata_extras
-----------------------------------------
When using the -nodata option in 3dDeconvolve, the default printout gives
the 'normalized standard deviation' for each stimulus parameter.  If you
set this variable to YES, then the printout will include the -polort
baseline parameters as well, and also the L2 norm of each column in
the regression matrix.

-----------------------------------
Variable: AFNI_3dDeconvolve_oneline
-----------------------------------
3dDeconvolve outputs a command line for running the cognate 3dREMLfit
program.  By default, this command line is line broken with '\'
characters for printing beauty.  If you want this command line
to be all on one physical output line, for convenience in automatic
extraction (e.g., via grep), then set this variable to YES before
running the program.

--------------------------
Variable: AFNI_XJPEG_COLOR
--------------------------
Determines the color of the lines drawn between the column boxes in the
output from the -xjpeg option to 3dDeconvolve.  The color format is
"rgbi:rf/gf/bf", where each value rf,gf,bf is a number between 0.0 and 1.0
(inclusive); for example, yellow would be "rgbi:1.0/1.0/0.0".  As a
special case, if this value is the string "none" or "NONE", then these
lines will not be drawn.
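For example, to draw blue lines between the boxes (the color value is
just an illustration):
  setenv AFNI_XJPEG_COLOR rgbi:0.0/0.0/1.0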

-------------------------
Variable: AFNI_XJPEG_IMXY
-------------------------
This variable determines the size of the image saved via the -xjpeg
option to 3dDeconvolve.  It should be in the format AxB, where 'A' is the
number of pixels the image is to be wide (across the matrix rows) and 'B'
is the number of pixels high (down the columns); for example:
  setenv AFNI_XJPEG_IMXY 768x1024
which means to set the x-size (horizontal) to 768 pixels and the y-size
(vertical) to 1024 pixels.  These values are the default, by the way.

If the first value 'A' is negative and less than -1, its absolute value
is the number of pixels across PER COLUMN.  If the second value 'B' is
negative, its absolute value is the number of pixels down PER ROW.
(Usually there are many fewer columns than rows.)

-------------------------
Variable: AFNI_XSAVE_TEXT
-------------------------
If this YES/NO variable is set to YES, then the .xsave file created by
the "-xsave" option to 3dDeconvolve will be saved in text format.  The
default is a binary format, which preserves the full accuracy of the
matrices stored therein.  However, if you want to look at the .xsave
file yourself, the binary format is hard to grok.  Note that the two
forms are not quite equivalent, since the binary format stores the
exact matrices used internally in the program, whereas the ASCII format
stores only a decimal approximation of these matrices.

---------------------------
Variable: AFNI_GLTSYM_PRINT
---------------------------
If this YES/NO variable is set to YES, then the GLT matrices generated
in 3dDeconvolve by the "-gltsym" option will be printed to the screen
when the program starts up.

-----------------------
Variable: AFNI_FLOATIZE
-----------------------
If this YES/NO variable is set to YES, then 3dDeconvolve and 3dcalc
will write their outputs in floating point format (unless they are
forced to do otherwise with the '-datum short' type of option).  In
the future, other programs may also be affected by this variable.
Later [18 Nov 2008]: Now 3dANOVA, 3dANOVA2, and 3dANOVA3 will also
use this flag to determine if their outputs should be written in
float format.  For example:
  3dANOVA -DAFNI_FLOATIZE=YES ... other options ...

----------------------------
Variable: AFNI_AUTOMATIC_FDR
----------------------------
If this variable is set to NO, then the automatic computation of FDR
curves into headers output by 3dDeconvolve, 3dANOVA, 3dttest, and
3dNLfim will NOT be done.  Otherwise, the automatic FDR-ization of
these datasets will be performed when the datasets are written to disk.
(You can always use '3drefit -addFDR' to add FDR curves to a dataset
header, for those sub-bricks marked as statistical parameters.)
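
For example, to skip the FDR computation during the 3dDeconvolve run and
add the curves to the output afterwards (the dataset name below is just
illustrative):

  3dDeconvolve -DAFNI_AUTOMATIC_FDR=NO ... other options ...
  3drefit -addFDR Elvis+orig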



AFNI file: README.func_types
Anatomical Dataset Types
========================
First, you must realize that I (and therefore AFNI) consider
the raw functional image time series to be "anatomical" data.
Only after processing does it show functional information.
For this reason you should create your 3D+time datasets as
one of the anatomical types.

No AFNI program (at this time) uses the actual anatomical
dataset type (e.g., SPGR or EPI) for any purpose.  This type
information is only for your convenience.

Functional Dataset Types
========================
In contrast, the functional dataset type is very meaningful
to the AFNI software.  At present (23 July 1997), there are 11
functional dataset types.  (The first five are documented in
"afni_plugins.ps".)

The first type ("fim") stores a single number per voxel.  All the
others store 2 numbers per voxel.  The second type ("fith") is
obsolescent, and will not be discussed further here.

The remaining types differ in the interpretation given to their
second sub-brick values.  In each case, the second value is
used as a threshold for functional color overlay.  The main
difference is the statistical interpretation given to each
functional type.  The types are

 Name  Type Index     Distribution        Auxiliary Parameters [stataux]
 ----  -------------  -----------------  -----------------------------------
 fico  FUNC_COR_TYPE  Correlation Coeff. # Samples, # Fit Param, # Ort Param
 fitt  FUNC_TT_TYPE   Student t          Degrees-of-Freedom (DOF)
 fift  FUNC_FT_TYPE   F ratio            Numerator DOF, Denominator DOF
 fizt  FUNC_ZT_TYPE   Standard Normal    -- none --
 fict  FUNC_CT_TYPE   Chi-Squared        DOF
 fibt  FUNC_BT_TYPE   Incomplete Beta    Parameters "a" and "b"
 fibn  FUNC_BN_TYPE   Binomial           # Trials, Probability per trial
 figt  FUNC_GT_TYPE   Gamma              Shape, Scale
 fipt  FUNC_PT_TYPE   Poisson            Mean

These were chosen because the needed CDF and inverse CDF routines
are found in the "cdf" library from the University of Texas.

When creating a dataset of these types, you will probably want to
store the threshold sub-brick as shorts, to save disk space.  You then
need to attach a scale factor to that sub-brick so that AFNI programs
will deal with it properly.  If you store it as shorts, but do not
supply a scale factor, AFNI will supply one.

 Name  Short Scale  Slider Top
 ----  -----------  ----------
 fico    0.0001          1.0
 fitt    0.001          10.0
 fift    0.01          100.0
 fizt    0.001          10.0
 fict    0.01          100.0
 fibt    0.0001          1.0
 fibn    0.01          100.0
 figt    0.001          10.0
 fipt    0.01          100.0
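
For example, with the default fico scale factor, a threshold stored as
the short value 5000 corresponds to a correlation of 5000 * 0.0001 = 0.5.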

The default scale factor is useful for some types, such as the fico and
fibt datasets, where the natural ranges of these thresholds are fixed to
[-1,1] and [0,1], respectively.  For other types, the default scale factor
may not always be useful.  It is a good practice to create an explicit
scale factor for threshold sub-bricks, even if the default is acceptable.

The table above also gives the default value that AFNI will use for the
range of the threshold slider.  AFNI now allows the user to set the range
of this slider to be from 0 to 10**N, where N=0, 1, 2, or 3.  This is to
allow for dataset types where the range of the threshold may vary
substantially, depending on the auxiliary parameters.  The user can now
switch the range of the threshold slider to encompass the threshold range
shown to the right of the overlay color selector/slider.  At this time
there is no way to have the range of the threshold slider set automatically
to match the values in the dataset -- the user must make the switch
manually.

Distributional Notes
====================
fico: (Correlation coefficient)**2 is incomplete-beta distributed, so
      the fibt type is somewhat redundant, but was included since the
      "cdf" library had the needed function just lying there.

fizt: This is N(0,1) distributed, so there are no parameters.

fibn: The "p-value" computed and displayed by AFNI is the probability
      that a binomial deviate will be larger than the threshold value.

figt: The PDF of the gamma distribution is proportional to
         x**(Shape-1) * exp(-Scale * x)
      (for x >= 0).

fipt: The "p-value" is the probability that a Poisson deviate is larger
      than the threshold value.

For more details, see Abramowitz and Stegun (the sacred book for
applied mathematicians), or other books on classical probability
distributions.

The "p-values" for fico, fitt, and fizt datasets are 2-sided: that is,
the value displayed by AFNI (below the slider) is the probability that
the absolute value of such a deviate will exceed the threshold value
on the slider.  The "p-values" for the other types are 1-sided: that is,
the value displayed by AFNI is the probability that the value of the
deviate will exceed the threshold value.  (Of course, these probabilities
are computed under the appropriate null hypothesis, and assuming that
the distributional model holds exactly.  The latter assumption, in
particular, is fairly dubious.)

Finally, only the fico, fitt, fift, fizt, and fict types have actually
been tested.  The others remain to be verified.

Bucket Dataset Types (new in Dec 1997)
======================================
The new bucket dataset types (`abuc' == ANAT_BUCK_TYPE, and
`fbuc' == FUNC_BUCK_TYPE) can contain an arbitrary number of sub-bricks.
In an fbuc dataset, each sub-brick can have one of the statistical types
described above attached to it.

================================
Robert W. Cox, PhD
Biophysics Research Institute
Medical College of Wisconsin



AFNI file: README.notes
Programming Information for Notes and History
=============================================
The Notes and History attributes in dataset headers are manipulated by
the following routines in file thd_notes.c (which is compiled into the
AFNI library libmri.a).  All functions that return a string (char *)
return a copy of the information requested.  This string will have
been malloc()-ed and should be free()-ed when it is no longer needed.

Notes are numbered 1, 2, ..., up to the value returned by
tross_Get_Notecount().  Notes are always numbered contiguously.
The maximum number of Notes per dataset is 999.

Programs and plugins that create new datasets should also create a
History for the dataset, using one of the methods described below.
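
For example, here is a minimal sketch (not taken from the AFNI sources)
of a routine that prints every Note in a dataset, illustrating the
1-based numbering and the free() convention described above:

   #include "mrilib.h"   /* main AFNI header (assumed to be in the path) */

   void print_all_notes( THD_3dim_dataset *dset )
   {
      int ii , nn = tross_Get_Notecount( dset ) ;
      for( ii=1 ; ii <= nn ; ii++ ){          /* Notes are numbered from 1 */
         char *txt = tross_Get_Note( dset , ii ) ;
         if( txt != NULL ){
            printf("Note %d: %s\n" , ii , txt ) ;
            free(txt) ;                       /* caller must free the copy */
         }
      }
   }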
----------------------------------------------------------------------
int tross_Get_Notecount( THD_3dim_dataset * dset );

This routine returns the number of Notes stored in dataset "dset".
If -1 is returned, dset is not a valid dataset pointer.  If 0 is
returned, the dataset has no Notes at this time.
----------------------------------------------------------------------
char * tross_Get_Note( THD_3dim_dataset * dset, int inote );

This routine returns a copy of the "inote"-th Note in dataset "dset".
If NULL is returned, some error occurred (e.g., you asked for a non-
existent Note).
----------------------------------------------------------------------
char * tross_Get_Notedate( THD_3dim_dataset * dset, int inote );

This routine returns a string with the date that the "inote"-th Note
in dataset "dset" was created.  If NULL is returned, an error
occurred.
----------------------------------------------------------------------
void tross_Add_Note( THD_3dim_dataset *dset, char *cn );

This routine adds the string stored in "cn" to the dataset "dset".
A new Note is created at the end of all existing Notes.
----------------------------------------------------------------------
void tross_Store_Note( THD_3dim_dataset * dset, int inote, char * cn );

This routine stores string "cn" into dataset "dset" as Note number
"inote".  If this Note already exists, then it is replaced by the new
text.  If this Note number does not exist, then the new Note is
created by calling tross_Add_Note(), which means that its number may
not end up as "inote".
----------------------------------------------------------------------
void tross_Delete_Note(THD_3dim_dataset *dset, int inote);

This routine removes the "inote"-th Note from dataset "dset".  Any
notes above this Note are renumbered downwards by 1.
----------------------------------------------------------------------
char * tross_Get_History( THD_3dim_dataset *dset );

This function returns a copy of the History Note for dataset "dset".
----------------------------------------------------------------------
void tross_Make_History( char * pname, int argc, char ** argv,
                                       THD_3dim_dataset *dset );

This routine uses tross_commandline() to make an entry in the History
Note for dataset "dset".  If no History Note currently exists for
this dataset, one is created; otherwise, the command line is appended
to the History Note.
----------------------------------------------------------------------
void tross_Copy_History( THD_3dim_dataset * old_dset,
                         THD_3dim_dataset * new_dset );

This routine erases the History Note in dataset "new_dset" and
replaces it with the History Note in dataset "old_dset".  By combining
this routine with tross_Make_History(), a cumulative history of the
commands that led up to a dataset can be maintained.  The existing
AFNI programs use this function when creating a dataset from a single
input dataset (e.g., 3dmerge with one input), but do NOT use this
function when a dataset is created from many inputs (e.g., 3dmerge
with several input datasets being averaged).
----------------------------------------------------------------------
void tross_Append_History( THD_3dim_dataset *dset, char *cn );

This function appends the string "cn" to the History Note in dataset
"dset".  If you use tross_Make_History(), you don't need to use this
routine - it is only necessary if you have some custom history to add.
This routine adds the "[date time] " string to the front of "cn"
before storing it into the History Note.
----------------------------------------------------------------------
void tross_multi_Append_History( THD_3dim_dataset *dset, ... );

This function is like the previous one, but takes an arbitrary number
of strings as input.  Its usage is something like
  tross_multi_Append_History(dset,str1,str2,str3,NULL);
where each 'str' variable is of type char *.  The last input must
be NULL.  The strings are concatenated and then tross_Append_History
is invoked on the result.
----------------------------------------------------------------------
char * tross_commandline( char * pname, int argc, char ** argv );

This routine is designed to produce an approximate copy of the command
line used to invoke a program.
  pname = Program name
  argc  = argc from main()
  argv  = argv from main()
This function is invoked by tross_Make_History() and so doesn't often
need to be called directly by an AFNI program.
----------------------------------------------------------------------
char * tross_datetime(void);

This routine produces a string with the current date and time.  It
does not usually need to be called directly by an AFNI program.
----------------------------------------------------------------------
char * PLUTO_commandstring( PLUGIN_interface * plint );

This function (in afni_plugin.c) is used from within a plugin to
create a History string for storage in a dataset.  It is something
like tross_commandline(), in that it will produce a line that will
summarize how the plugin was run.  PLUTO_commandstring() can only
be invoked from plugins using standard (AFNI-generated) interfaces -
plugins that create their own interfaces must create their own
History as well.  A sample use of this function:

    char * his ;
    his = PLUTO_commandstring(plint) ;
    tross_Copy_History( old_dset , new_dset ) ;
    tross_Append_History( new_dset , his ) ;
    free(his) ;

This is for a plugin that is manipulating the input "old_dset" to
create the output "new_dset".  This example is drawn directly from
plug_power.c (the Power Spectrum plugin).
----------------------------------------------------------------------



AFNI file: README.permtest
The following is the README file for the permutation test plugins written
by Matthew Belmonte.  This code has been released under the GPL.
------------------------------------------------------------------------------
This directory contains plug_permtest.c and plug_threshold.c, source modules
for the AFNI Permutation Test and Threshold plugins, respectively.  The
threshold plugin separates brain from non-brain (with touch-up work being
handled by the Draw Dataset plugin), and the Permutation Test plugin evaluates
activations for statistical significance using a sensitive, nonparametric
algorithm.  To build both modules, place them in your AFNI source code
directory and type "make plug_permtest.so" and "make plug_threshold.so".
If you use this software in your research, please take a moment to send mail to
the author, belmonte@mit.edu, and cite the following paper in your report:

Matthew Belmonte and Deborah Yurgelun-Todd, `Permutation Testing Made Practical
for Functional Magnetic Resonance Image Analysis', IEEE Transactions on Medical
Imaging 20(3):243-248 (2001).

The permutation test takes a lot of memory and a lot of CPU.  You'll want to
use the fastest processor and system bus that you can lay your hands on, and at
least 256MB of memory.  If you're using Digital UNIX, you may find that the
plugin will be unable to allocate all the memory that it needs unless you
increase the values of the following kernel parameters:
per-proc-data-size, max-per-proc-data-size, per-proc-address-space,
max-per-proc-address-space.  To change these parameters, use the
command-line tool "sysconfig", or the graphical interface
"dxkerneltuner".



AFNI file: README.plugouts
Plugout Instructions
--------------------
A "plugout" is a external program that communicates with AFNI
using IPC shared memory or TCP/IP sockets.  There are 3 sample
plugouts distributed with AFNI; the filenames all start with
"plugout_".  At present, I don't have the energy to document
the plugout protocol for talking to AFNI, so the sample programs
will have to do.

Bob Cox
Biophysics Research Institute / Medical College of Wisconsin
Voice: 414-456-4038 / Fax: 414-266-8515 / rwcox@mcw.edu
http://www.biophysics.mcw.edu/BRI-people/rwcox/cox.html



AFNI file: README.realtime

================================================
Realtime AFNI control information: What it needs
================================================
AFNI needs some information about the acquisition in order to properly
construct a dataset from the images.  This information is sent to AFNI
as a series of command strings.  A sample set of command strings is
given below:

   ACQUISITION_TYPE 2D+zt
   TR 5.0
   XYFOV 240.0 240.0 112.0
   ZNUM 16
   XYZAXES S-I A-P L-R
   DATUM short
   XYMATRIX 64 64

The commands can be given in any order.  Each command takes up a single
line of input (i.e., commands are separated by the '\n' character in the
input buffer, and the whole set of commands is terminated by the usual '\0').
Each command line has one or more arguments.  The full list of possible
command strings and their arguments is:

ACQUISITION_TYPE arg
  This command tells AFNI how the image data will be formatted:
    arg = 2D+z   -> a single 3D volume, one slice at a time
          2D+zt  -> multiple 3D volumes, one slice at a time [the default]
          3D     -> a single 3D volume, all at once
          3D+t   -> multiple 3D volumes, one full volume at a time
 *This command is not required, since there is a default.

NAME arg
or
PREFIX arg
  This command tells AFNI what name to use for the new dataset.
 *It is not required, since AFNI will generate a name if none is given.

TR arg
  This command tells AFNI what the imaging TR is, in seconds. The default
  value, if this command is not given, is 1.0.
 *It is recommended that this command be used, so that the dataset has
  the correct header information.  But this command is not required.

ZDELTA dz
  This command tells AFNI the slice thickness, in mm.
 *This command, or the next one, MUST be used, so that the correct
  size of the dataset along the z-axis is known.

XYFOV xx yy [zz]
  This command tells AFNI the size of the images, in mm.  The first
  value ('xx') is the x-axis dimension, and the second value ('yy') is
  the y-axis dimension.  If the third value ('zz') is present, then it
  is the z-axis dimension (slab thickness of all slices).
 *This command MUST be used, at least to give the sizes of the dataset
  along the x- and y-axes.  If 'zz' is not given, then the ZDELTA command
  is also required.
 *If 'yy'==0, then it is taken to be the same as 'xx' (square images).

ZFIRST zz[d]
  Specifies the location of the first slice, along the z-axis, in mm.
  The value 'zz' gives the offset.  The optional code 'd' gives the
  direction in which the distance 'zz' applies.  The values allowed for the
  single character 'd' are
    I = inferior
    S = superior
    A = anterior
    P = posterior
    R = right
    L = left
 *This command is optional - if not given, then the volume will be
  centered about z=0 (which is what always happens for the x- and
  y-axes).  If the direction code 'd' is given, then it must agree
  with the sense of the z-axis given in the XYZAXES command.
  When more than one dataset is being acquired in a scanning session,
  then getting ZFIRST correct is important so that the AFNI datasets
  will be properly positioned relative to each other (e.g., so you
  can overlay SPGR and EPI data correctly).
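 *For example, 'ZFIRST 40I' places the first slice 40 mm in the
  inferior direction.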

XYZFIRST xx[d] yy[d] zz[d]
  This new option (10 Dec 2002) lets you set the offsets of the dataset
  volume on all 3 axes.  It is very similar to ZFIRST above, but you
  give values for all axes.  For example:
    XYZAXES  S-I A-P L-R
    XYZFIRST 30 20A 50R
  sets the x-origin to 30S (since no direction code was given for x),
       the y-origin to 20A, and
       the z-origin to 50R.  Since the z-axis is L-R and starts in the
  R hemisphere, these sagittal slices are all in the R hemisphere.  If
  the 'R' code had been left off the '50R', then the z-origin would have
  been set to 50L.  Note that the origin is the CENTER of the first voxel.
 *This command is optional.  If it is given along with ZFIRST (why?), then
  whichever one comes last wins (for the z-axis).

XYMATRIX nx ny [nz]
  Specifies the size of the images to come, in pixels:
    nx = number of pixels along x-axis
    ny = number of pixels along y-axis
    nz = number of pixels along z-axis (optional here)
 *This command is required.  If 'nz' is not given here, then it must
  be given using the ZNUM command.

ZNUM nz
  Specifies the number of pixels along the z-axis (slice direction).
 *This value must be given, either with XYMATRIX or ZNUM.
 *Note that AFNI cannot handle single-slice datasets!

DATUM typ
  Specifies the type of data in the images:
    typ = short   -> 16 bit signed integers [the default]
          float   -> 32 bit IEEE floats
          byte    -> 8 bit unsigned integers
          complex -> 64 bit IEEE complex values (real/imag pairs)
 *This command is not required, as long as the data are really shorts.
  The amount of data read for each image will be determined by this
  command, the XYMATRIX dimensions, and the ACQUISITION_TYPE (whether
  2D or 3D data is being sent).

BYTEORDER order
  This new command string (27 Jun 2003) tells the realtime plugin the
  byte order (endian) that the image data is in.  If the byte order is
  different from that of the machine afni is running on, the realtime
  plugin will perform byte swapping on the images as they are read in.
    order = LSB_FIRST  -> least significant byte first (little endian)
          = MSB_FIRST  -> most significant byte first (big endian)
 *This command is not required.  Without this command, image bytes will
  not be swapped.
 *This command works for DATUM type of short, int, float or complex.

ZORDER arg
  Specifies the order in which the slices will be read.
    arg = alt -> alternating order (e.g., slices are presented
                   to AFNI in order 1 3 5 7 9 2 4 6 8, when nz=9).
        = seq -> sequential order (e.g., slices are presented
                   to AFNI in order 1 2 3 4 5 6 7 8 9, when nz=9).
 *This command is not required, since 'alt' is the default.  It will
  be ignored if a 3D ACQUISITION_TYPE is used.

XYZAXES xcode ycode zcode
  Specifies the orientation of the 3D volume data being sent to AFNI.
  Each of the 3 codes specifies one axis orientation, along which the
  corresponding pixel coordinate increases.  The possible codes are:
    I-S (or IS) -> inferior-to-superior
    S-I (or SI) -> superior-to-inferior
    A-P (or AP) -> anterior-to-posterior
    P-A (or PA) -> posterior-to-anterior
    R-L (or RL) -> right-to-left
    L-R (or LR) -> left-to-right
  For example, "XYZAXES S-I A-P L-R" specifies a sagittal set of slices,
  with the slice acquisition order being left-to-right.  (In this example,
  if ZFIRST is used, the 'd' code in that command must be either 'L' or 'R'.)
  The 3 different axes codes must point in different spatial directions
  (e.g., you can't say "XYZAXES S-I A-P I-S").
 *This command is required, so that AFNI knows the orientation of the
  slices in space.

GRAPH_XRANGE x_range
  Specifies the bounding range of the horizontal axis on the 3D motion
  correction graph window (which is measured in repetitions).  The actual
  range will be [0, x_range].  E.g. "GRAPH_XRANGE 120".
  
GRAPH_YRANGE y_range
  Specifies the bounding range of the vertical axis on the 3D motion
  correction graph window (the units will vary).  The actual range will
  be [-y_range, +y_range].  E.g. "GRAPH_YRANGE 2.3".

  If both GRAPH_XRANGE and GRAPH_YRANGE are given, then no final (scaled)
  motion correction graph will appear.
  
GRAPH_EXPR expression
  Allows the user to replace the 6 default 3D motion correction graphs with a
  single graph, where the 'expression' is evaluated at each step based on the
  6 motion parameters at that step.  The variables 'a' through 'f' are used
  to represent dx, dy, dz, roll, pitch and yaw, respectively.

  E.g. GRAPH_EXPR sqrt((a*a+b*b+c*c+d*d+e*e+f*f)/6)
  
  See '3dcalc -help' for more information on expressions.

  ** Note that spaces should NOT be used in the expression.

NUM_CHAN nc
  Specifies the number of independent image "channels" that will be
  sent to AFNI.  Each channel goes into a separate dataset.  Channel
  images are interleaved; for example, if nc=3, then
    image #1 -> dataset #1
    image #2 -> dataset #2
    image #3 -> dataset #3
    image #4 -> dataset #1
    image #5 -> dataset #2
    et cetera.
  For 2D acquisitions, each slice is one "image" in the list above.
  For 3D acquisitions, each volume is one "image".
  All channels will have the same datum type, the same xyz dimensions,
  and so on.
 * This command is optional, since the default value of nc is 1.

DRIVE_AFNI command
  You can also pass commands to control AFNI (e.g., open windows) in the
  image prolog.  See README.driver for the list of command strings.
  More than one DRIVE_AFNI command can be used in the realtime prolog.
 * This command is optional.

DRIVE_WAIT command
  This command works exactly like DRIVE_AFNI, except that the real-time
  plugin waits for the next complete volume to execute the command.  The
  purpose is to execute the command after the relevant data has arrived.

NOTE text to attach to dataset
  This command lets you attach text notes to the dataset(s) being created
  by the realtime plugin.  All the text after "NOTE ", up to (not including)
  the next '\n', will be attached as a text note.  More than one NOTE can
  be given.  If you want to send a multiline note, then you have to convert
  the '\n' characters in the note text to '\a' or '\f' characters (ASCII
  7 and 12 (decimal), respectively).  Any '\a' or '\f' characters in the
  text will be converted to '\n' characters before the note is processed.
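  For example, a command string containing
     NOTE line one\aline two
  (where '\a' is the single ASCII BEL character, not a backslash followed
  by 'a') will be stored as the two-line note "line one" / "line two".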

OBLIQUE_XFORM m0 m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 m11 m12 m13 m14 m15
  This command is to send an IJK_TO_DICOM_REAL oblique transformation
  matrix, consisting of 16 floats in row-major order, to be applied to
  all resulting datasets (i.e. stored in the daxes->ijk_to_dicom_real
  structure).
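  For example, sending the identity matrix (i.e., no obliquity at all):
     OBLIQUE_XFORM 1 0 0 0  0 1 0 0  0 0 1 0  0 0 0 1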


==============================================
How AFNI reads realtime command and image data
==============================================
This stuff is all carried out in the image source program (e.g., Rx_xpi).
Most of the current source code is in file ep_afni.c, for operation at
the MCW Bruker 3 Tesla scanner.  Also see the sample program rtfeedme.c.

Step 1: The image source program opens a TCP/IP socket to the system
        running AFNI, on port 7954 - the realtime AFNI plugin is listening
        there.  AFNI checks if the host that opened the connection is on
        its "trust list".  When this socket is ready then ...

Step 2: The image source program tells AFNI from where it should really
        get its data.  A control string is written to the 7954 socket.
        The first line of this control string specifies whether to use
        a TCP/IP socket for the data channel, or to use shared memory.

        If there is a second line on the control string, then it is the
        name of an "info program" that AFNI should run to get the command
        information described above.  At the Bruker 3 Tesla scanner,
        these commands are generated by the program 3T_toafni.c, which
        runs a script on the 3T60 console computer to get values from
        ParaVision, and then takes that information and formats most of
        the control commands for realtime AFNI.  In the ep_afni.c
        routines, the name of the info program is stored in string variable
        AFNI_infocom, which is initialized in ep_afni.h to be "3T_toafni".

        When AFNI reads the control string from the 7954 socket, it then
        closes down the 7954 socket and opens the data channel (TCP/IP or
        shared memory) that the first line of the control string specified.
        If the second line of the control string specified an info program
        to get the command strings, this program will not be run until the
        first image data arrives at AFNI.

        There are 2 reasons for separating the data channel from the control
        socket.  First, if the image source program is on the same system
        as AFNI, then shared memory can be used for the data channel.
        However, I wanted AFNI to be able to be on a separate system from
        the image source program, so I also wanted to allow for transport of
        image data via a socket.  At the beginning, AFNI doesn't know where
        it will get the data from, so the initial connection must be via a
        socket, but later it might want to switch to shared memory.  Second,
        in principle AFNI could acquire data from more than one image source
        at a time.  This is not yet implemented, but keeping the initial
        control socket separated from the actual data stream makes this a
        possibility.  (The control socket is only used briefly, since only
        a few bytes are transmitted along it.)

Step 3: Once the data channel to AFNI is open, the image source program
        can send image data to AFNI (this is done in AFNI_send_image()
        in ep_afni.c).  Before the first image is sent, there must be
        at least one AFNI command string sent along the data channel.
        In the way I've set up ep_afni.c for the Bruker, two commands
        are actually sent here just before the first image:
           DATUM short
           XYMATRIX nx ny
        All the rest of the commands come from 3T_toafni.  The reason
        for this separation is that 3T_toafni doesn't actually know how
        the user chose to reconstruct the images (e.g., 64x64 acquisition
        could be reconstructed to 128x128 image).  The information given
        here is the minimal amount needed for AFNI to compute how many
        bytes in the data channel go with each image.  This MUST be
        present here so that AFNI can read and buffer image data from
        the data channel.

        If the image source program knows ALL the information that AFNI
        needs, then there is no need for the info program.  In such a
        case, all the command strings for AFNI can be collected into
        one big string (with '\n' line separators and the usual '\0'
        terminator) and sent to AFNI just before the first image data.

        This "Do it all at once" approach (much simpler than using an
        info program to get the command strings) would require some
        small changes to routine AFNI_send_image() in ep_afni.c.

        "Do it all at once" is the approach taken by the realtime
        simulation program rtfeedme.c, which will take an AFNI dataset
        apart and transmit it to the realtime plugin.

        If the "Do it all at once" option is not practical, then an
        alternative info program to 3T_toafni must be developed for each
        new scanner+computer setup.  Note that the info program writes its
        output command strings to stdout, which will be captured by AFNI.

        After the initial command information is sent down the data
        channel, everything that follows down the data channel must be
        raw image information - no more commands and no headers.  For
        example, if you have 64x64 images of shorts, then each set of
        8192 bytes (after the terminal '\0' of the initial command
        string) is taken as an image.

        If an info program was specified on the 7954 socket, then
        it will be run by AFNI (in a forked sub-process) at this time.
        Until it completes, AFNI will just buffer the image data it
        receives, since it doesn't know how to assemble the images into
        3D volumes (e.g., it doesn't know the number of slices).

        When the data channel connection is closed (usually because the
        image source program exits), then AFNI will write the new dataset
        to disk.  This is why there is no command to AFNI to tell it how
        many volumes to acquire - it will just add them to the dataset
        until there is no more data. AFNI will then start to listen on the
        TCP/IP 7954 port for another control connection, so it can acquire
        another dataset.

======================
Hosts that AFNI trusts
======================
AFNI checks the incoming IP address of socket connections to see if the
host is on the "trust list".  The default trust list is

    141.106.106  = any MCW Biophysics computer (we're very trustworthy)
    127.0.0.1    = localhost
    192.168      = private class C networks (this is a reserved set of
                   addresses that should not be visible to the Internet)

You can add to this list by defining the environment variable as in the
example below (before starting AFNI):

    setenv AFNI_TRUSTHOST 123.45.67

This means that any IP address starting with the above string will be
acceptable.  If you want to add more than one possibility, then you can
also use environment variables AFNI_TRUSTHOST_1, AFNI_TRUSTHOST_2, up to
AFNI_TRUSTHOST_99.  (That should be enough - how trusting do you really
want to be?)  If you want to remove the builtin trust for MCW Biophysics,
you'll have to edit file thd_trusthost.c.
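
For example:

    setenv AFNI_TRUSTHOST_1 123.45.68
    setenv AFNI_TRUSTHOST_2 98.76.54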

You cannot use hostnames for this purpose - only actual IP addresses in
the dotted form, as shown above.  (What I'll do when IPv6 becomes widely
used, I don't know.  Yet.)



AFNI file: README.registration
====================================================
Notes on Image and Volume Registration in AFNI 2.21+
====================================================
Two basic methods are supplied.  The first does 2D (in-plane) alignment
on each slice separately.  There is no attempt to correct for out-of-slice
movements.  The second does 3D (volumetric) alignment on each 3D sub-brick
in a dataset.  Both methods compute the alignment parameters by an iterative
weighted least squares fit to a base image or volume (which can be selected
from another dataset).  The AFNI package registration programs are designed
to find movements that are small -- 1-2 voxels and 1-2 degrees, at most.
They may not work well at realigning datasets with larger motion (as would
occur between imaging sessions) -- however, this issue is discussed later.

2D registration is implemented in programs
 * imreg:      operates on slice data files, outside of the AFNI framework
 * 2dImReg:    same as imreg, but takes data from an AFNI dataset
 * plug_imreg: same as 2dImReg, but interactively within AFNI

3D registration is implemented in programs
 * 3dvolreg:    operates on 3D+time datasets
 * plug_volreg: same as 3dvolreg, but interactively within AFNI

2D image rotation/translation can be done with program imrotate.  3D and
3D+time AFNI dataset rotation/translation can be done with program 3drotate.

Each realignment method has its good and bad points.  The bad point about
2D registration is the obvious lack of correction for out-of-slice movement.
The bad point about 3D registration is that there is no ability to compensate
for movements that occur during the time that the volume is acquired --
usually several seconds.  A better approach would be to merge the two
methods.  This may be done in the future, but is not available now.

Several data resampling schemes are implemented in the registration
programs.  Generally, the most accurate resampling is obtained with
the Fourier method, but this is also the slowest.  A polynomial
interpolation method can be used instead if speed is vital.  The
registration and rotation routines in 3dvolreg (and plug_volreg)
have been carefully written for efficiency.  As a result, 3dvolreg
is several times faster than AIR 3.08 (available from Roger Woods
at http://bishopw.loni.ucla.edu/AIR3/index.html ).  Using Fourier
interpolation in 3dvolreg and trilinear interpolation in AIR, 3dvolreg
was 2-3 times faster on some typical FMRI datasets (128x128x30x80).
Dropping to 7th order (heptic) polynomial interpolation speeds up
3dvolreg by another factor of 2.  The two programs (AIR and 3dvolreg)
produce nearly identical estimates of the movement parameters.

-----------------------------------
Robert W. Cox, PhD -- November 1998
Medical College of Wisconsin
-----------------------------------

The following words can be used as the basis for a concise description of
the registration algorithm, if you need such a thing for a paper.  A paper
on the algorithm has been published:
     RW Cox and A Jesmanowicz.
     Real-time 3D image registration for functional MRI.
     Magnetic Resonance in Medicine, 42:1014-1018, 1999.
------------------------------------------------------------------------------
The algorithm used for 3D volume registration is designed to be efficient
at fixing motions of a few mm and rotations of a few degrees.  Using this
limitation, the basic technique is to align each volume in a time series
to a fiducial volume (usually an early volume from the first imaging run
in the scanning session).  The fiducial volume is expanded in a 1st order
Taylor series at each point in the six motion parameters (3 shifts, 3 angles).
This expansion is used to compute an approximation to a weighted linear
least squares fit of the target volume to the fiducial volume.  The target
volume is then moved according to the fit, and the new target volume
is re-fit to the fiducial.  This iteration proceeds until the movement
is small.  Effectively, this is gradient descent in the nonlinear least
squares estimation of the movement parameters that best make the target
volume fit the fiducial volume.  This iteration is rapid (usually only
2-4 iterations are needed), since the motion parameters are small.  It is
efficient, based on a new method using a 4-way 3D shear matrix factorization
of the rotation matrix.  It is accurate, since Fourier interpolation is used
in the resampling process.  On the SGI and Intel workstations used for this
project, a 64x64x16 volume can be aligned to a fiducial in less than 1 second.
------------------------------------------------------------------------------
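
Stated schematically (a restatement of the preceding paragraph, not a
quotation from the paper): if B(x) is the fiducial (base) volume, T(x)
the target volume, and M[p] the rigid motion with parameter vector
p = (3 shifts, 3 angles), then each iteration approximately minimizes

   E(p) = SUM over x of { w(x) * [ T(x) - B(M[p](x)) ]**2 }

by replacing B(M[p](x)) with its 1st order Taylor expansion in p about
the current estimate, which turns the minimization into a weighted
linear least squares problem for the parameter increment.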

===============================================================================
Using 3dvolreg/3drotate to Align Intrasubject/Intersession Datasets: AFNI 2.29+
===============================================================================
When you study the same subject on different days, to compare the datasets
gathered in different sessions, it is first necessary to align the volume
images.  If you do not want to do this in the +acpc or +tlrc coordinate
systems (which may not be accurate enough), then you need to use 3dvolreg
to compute and apply the correct rotation+shift to register the datasets.
This note discusses the practical difficulties posed by this problem, and
the AFNI solution.

----------------------
The Discursive Section
----------------------
The difficulties include:
 (A) Subject's head will be positioned differently in the scanner -- both
     in location and orientation.
 (B) Low resolution, low contrast echo-planar images are harder to realign
     accurately than high resolution, high contrast SPGR images, when the
     subject's head is rotated.
 (C) Anatomical coverage of the EPI slices will be different, meaning that
     exact overlap of the functional data from two sessions may not be
     possible.
 (D) The geometrical relationship between the EPI and SPGR (MPRAGE, etc.)
     images may be different on different days.
 (E) The coordinates in the scanner used for the two scanning sessions
     may be different (e.g., slice coverage from 40I to 50S on one day,
     and from 30I to 60S on another), even if the anatomical coverage
     is the same.
 (F) The resolution (in-plane and/or slice thickness) may vary between
     scanning sessions.

(B-D) imply that simply using 3dvolreg to align the EPI data from session 2
with EPI data from session 1 won't work well.  3dvolreg's calculations are
based on matching voxel data, but if the images don't cover the same
part of the brain fully, they won't register well.

** Note well: 3dvolreg cannot deal with problem (F) -- if you want to **
**            compare data on different days, be sure to use the same **
**            image acquisition parameters! [See 3dZregrid below.]    **

The AFNI solution is to register the SPGR images from session 2 to session 1,
to use this transformation to move the EPI data (or functional datasets
derived from the EPI data) from session 2 in the same way.  The use of the
SPGR images as the "parents" gets around difficulty (B), and is consistent
with the extant AFNI processing philosophy.  The SPGR alignment procedure
specifically ignores the data at the edges of the bricks, so that small (5%)
mismatches in anatomical coverage shouldn't be important.  (This also helps
eliminate problems with various unpleasant artifacts that occur at the edges
of images.)

Problem (C) is addressed by zero-padding the EPI datasets in the slice-
direction.  In this way, if the EPI data from session 2 covers a somewhat
different patch of brain than from session 1, the bricks can still be made
to overlap, as long as the zero-padding is large enough to accommodate the
required data shifts.  Zero-padding can be done in one of 3 ways:
 (1) At dataset assembly time, in to3d (using the -zpad option); or
 (2) At any later time, using the program 3dZeropad; or
 (3) By 3drotate (using -gridparent with a previously zero-padded dataset).

Suppose that you have the following 4 datasets:
  S1 = SPGR from session 1    F1 = functional dataset from session 1
  S2 = SPGR from session 2    F2 = functional dataset from session 2

Then the following commands will create datasets registered from session 2
into alignment with session 1:

  3dvolreg -twopass -twodup -clipit -base S1+orig -prefix S2reg S2+orig

  3drotate -clipit -rotparent S2reg+orig -gridparent F1+orig \
           -prefix F2reg F2+orig

The first command writes the rotation+shift transformation used to align
S2 with S1 into the header of S2reg.  The "-rotparent" option in the
second command tells 3drotate to take the transformation from the
.HEAD file of S2reg, rather than from the command line.  The "-gridparent"
option tells the program to make sure the output dataset (F2reg) is in the
same geometrical relationship to S1 as dataset F1.

When you are creating EPI datasets, you may want to use the -zpad option
to to3d, so that they have some buffer space on either side to allow for
mismatches in anatomical coverage in the slice direction.  Note that
the use of the "-gridparent" option to 3drotate implies that the output
dataset F2reg will be sampled to the same grid as dataset F1.  If needed,
F2reg will be zeropadded in the slice-direction to make it have the same
size as F1.

If you want to zeropad a dataset after creation, this can be done using
a command line like:

  3dZeropad -z 2 -prefix F1pad F1+orig

which will add 2 slices of zeros to each slice-direction face of each
sub-brick of dataset F1, and write the results to dataset F1pad.

The above 3dvolreg+3drotate combination is reasonable for rotating functional
datasets derived from EPI time series in session 2 to be aligned with data
from session 1.  If you want to align the actual EPI time series between
sessions, the technique above requires two interpolation steps on the EPI
data.  This is because you want to register all the session 2 EPI data
together internally, and then later rotate+shift these registered datasets
to be aligned with session 1.

In general, it is bad to interpolate data twice, since each interpolation
step corrupts the data a little.  (One visible manifestation of this effect
is image blurring.)  To avoid this problem, program 3dvolreg also can use the
"-rotparent -gridparent" options to specify the transform to the final output
coordinate system.  When these options are used, the EPI time series is
registered internally as usual, but after each sub-brick has its own
registration transformation computed, the extra transformation (from the
-rotparent dataset) that aligns to session 1 is multiplied in.  This means
that the final output of such a 3dvolreg run will be directly realigned to
the session 1 coordinate system.  For example:

  3dvolreg -twopass -twodup -clipit -base S1+orig -prefix S2reg S2+orig

  3dvolreg -clipit -base 4 -prefix E1reg E1+orig

  3dvolreg -clipit -rotparent S2reg+orig -gridparent E1reg+orig \
           -base 4 -prefix E2reg E2+orig

The first command is exactly as before, and provides the anatomical transform
from session 2 to session 1.  The second command is for registering the sub-
bricks from session 1's EPI scans.  The third command is for registering the
sub-bricks from session 2's EPI scans, and simultaneously transforming them
to session 1's frame of reference.  After this is done, the functional
activation program of your choice could be applied to E1reg and E2reg (etc.).

Which is better: to analyze each session and then rotate the derived
functional maps to the master session, OR to rotate the EPI time series to
the master session, and then analyze?  There is no good answer to this
question, because there are good points and bad points to each method.

------------------------------------------------------------------------------
Analyze then Rotate                   | Rotate then Analyze
------------------------------------- | --------------------------------------
GOOD: the time-offsets of each slice  | BAD: large inter-session out-of-slice
      are still accurate after small  |      rotations will make the concept
      intra-session out-of-slice      |      of slicewise time-offsets useless
      rotations                       |
BAD: rotating statistical maps (SPMs) | GOOD: EPI values are linear (about) in
     requires interpolating values    |       the raw MRI data; interpolating
     that are not linearly dependent  |       them (linear combinations) is
     on the data                      |       perfectly reasonable
------------------------------------------------------------------------------

[No doubt I'll think of more good/bad tradeoffs someday.]

A third method is to time shift all 3D+time datasets to the same origin, prior
to registration.  This has the drawback that it deals with aliased higher
frequency signals (e.g., the heartbeat) improperly.  It has the positive feature
that it eliminates the annoying time-offsets as soon as possible, so you don't
have to think about them any more.

------------------------------------------------------------------------
Dealing with Variable Slice Thicknesses in Different Sessions: 3dZregrid
------------------------------------------------------------------------
When comparing data from different sessions, it would be best to gather these
data in the same fashion on each day, insofar as practicable.  The difficulty
of getting the subject's head in the same orientation/position is what these
notes are all about.  It isn't difficult to make sure that the slice thickness
is the same on each day.  However, it may occasionally happen that your SPGR
(or other anatomical) datasets will have slightly different slice thicknesses.
3dvolreg will NOT accept base and input datasets that don't have the same
grid spacings in all 3 dimensions.

So what to do?  (Dramatic pause here.)  The answer is program 3dZregrid.
It can resample -- interpolate -- a dataset to a new slice thickness in the
z-direction ONLY.  For example, suppose that on day 1 the SPGR for subject
Elvis had slice thickness 1.2 mm and on day 2 you accidentally used 1.3 mm.
Then this command would fail:

 3dvolreg -twopass -twodup -clipit -base Elvis1+orig \
          -prefix Elvis2reg Elvis2+orig

with a rather snide message like the following:

** Input Elvis2+orig.HEAD and base Elvis1+orig.HEAD don't have same grid spacing!
   Input: dx= 0.938  dy=-0.938  dz=-1.300
   Base:  dx= 0.938  dy=-0.938  dz=-1.200
** FATAL ERROR: perhaps you could make your datasets match?

In this case, you should do the following:

  3dZregrid -dz 1.2 -prefix Elvis2ZZ Elvis2+orig
  3dvolreg -twopass -twodup -clipit -base Elvis1+orig \
           -prefix Elvis2reg Elvis2ZZ+orig

The intermediate dataset (Elvis2ZZ+orig) will be linearly interpolated in
the slice (z) direction to 1.2 mm.  The same number of slices will be used
in the output dataset as are in the input dataset, which means that the output
dataset will be slightly thinner.  In this case, that is good, since the
Elvis1+orig dataset actually covers a smaller volume than the Elvis2+orig
dataset.

In principle, you could use 3dZregrid to help compare/combine functional
datasets that were acquired with different slice thicknesses.  However, I
do NOT recommend this.  There has been little or no research on this kind
of operation, and the meaningfulness of the results would be open to
serious question.  (Not that this will stop some people, of course.)

-------------------------------
Summary of Tools and Techniques
-------------------------------
(1) Zero pad the functional data before doing inter-session rotations.  This
    will allow for imperfect overlap in the acquisitions of the EPI slices.
    At dataset assembly time, you can zero pad with

      to3d -zpad 2 ....

    which will insert 2 slices of zeros at each slice-direction face of the
    dataset.  If you use this method for zero padding, note the following:
    * If the geometry parent dataset was created with -zpad, the spatial
        location (origin) of the slices is set using the geometry dataset's
        origin BEFORE the padding slices were added.  This is correct, since
        you need to set the origin/geometry on the current dataset as if the
        padding slices were not present.  To3d will adjust the origin of the
        output dataset so that the actual data slices appear in the correct
        location (it uses the same function that 3dZeropad does).
    * The zero slices will NOT be visible in the image viewer in to3d, but
        will be visible when you use AFNI to look at the dataset.
    * Unlike the '-zpad' option to 3drotate and 3dvolreg, this adds slices
      only in the z-direction.
    * You can set the environment variable 'AFNI_TO3D_ZPAD' to provide a
      default for this option.
    * You can pad in millimeters instead of slices by appending 'mm' to the
      the -zpad parameter: '-zpad 6mm' will add as many slices as necessary
      to get at least 6 mm of padding.  For example, if the slice thickness
      were 2.5 mm, then this would be equivalent to '-zpad 3'.  You could
      also use this in 'setenv AFNI_TO3D_ZPAD 6mm'.

    You can also zeropad datasets after they are created using

      3dZeropad -z 2 -prefix ElvisZZ Elvis+orig

    This creates a new dataset (here, named ElvisZZ+orig) with the extra 4
    slices (2 on each slice-direction side) added.  When this is done, the
    origin of the new dataset is adjusted so that the original part of the
    data is still in the same spatial (xyz-coordinate) location as it was
    before -- in this way, it will still overlap with the SPGRs properly
    (assuming it overlapped properly before zero-padding).

    If you want to specify padding in mm with 3dZeropad, you don't put the
    'mm' suffix on the slice count; instead, you use the '-mm' flag, as in

      3dZeropad -mm -z 6 -prefix ElvisMM Elvis+orig

    (The reason for this annoying change from to3d's method is that
    3dZeropad can also do asymmetric padding on all faces, and I didn't
    want to deal with the annoying user who would specify some faces in mm
    and some in slices.)

    For the anatomical images I am used to dealing with (whole-head SPGRs
    and MPRAGEs), there is no real reason to zeropad the dataset -- the
    brain coverage is usually complete, so realignment between sessions
    should not lose data.  There might be situations where this advice
    is incorrect; in particular, if the anatomical reference images do
    NOT cover the entire head.

(2) Choose one session as the "master" and register all the anatomicals
    from other sessions to the master anatomical.  For example

      3dvolreg -clipit -twopass -twodup -zpad 4 -rotcom -verbose  \
               -base ANAT001+orig -prefix ANAT002reg ANAT002+orig

    where I'm assuming datasets labeled "001" are from the master session
    and those labeled "002" are from another session.  Some points to mull:

    * If necessary, use 3dZregrid to adjust all anatomical datasets to
        have the same slice thickness as the master session, prior to
        using 3dvolreg.
    * The -zpad option here just pads the 3D volumes with zeros (4 planes on
        all 6 sides) during the rotation process, and strips those planes
        off after rotation.  This helps minimize some artifacts from the
        shearing algorithm used for rotation.
    * If you are using a local gradient coil for image acquisition, the
        images may be slightly distorted at their inferior edges.  This
        is because the magnetic gradient fields are not perfectly linear
        at the edges of the coil.  When the SPGRs from different sessions
        are aligned, you may see small distortions at the base of the brain
        even though the rest of the volume appears well-registered.  This
        occurs because the subject's head is placed differently between
        sessions, and so the gradient coil distortions are in different
        anatomical locations.  Flipping between the SPGRs from the two
        sessions makes the distortions quite obvious, even if they are
        imperceptible in any single image.  Registration by itself cannot
        correct for this effect.  (Sorry, MCW and MAI.)
    * The -rotcom option prints out the rotation/translation used.  This
        is for informational purposes only -- you don't need to save this.
        In fact, it is now saved in the header of the output dataset, and
        could be retrieved with the command

          3dAttribute VOLREG_ROTCOM_000000 ANAT002reg+orig

        The list of all the 3dvolreg-generated dataset attributes is given
        later in this document.

(3) Register all the EPI time series within the session and also apply the
    transformation to take the data to the master session reference system.
    For example

      3dvolreg -clipit -zpad 4 -verbose                                   \
               -rotparent ANAT002reg+orig -gridparent FUNC001_001reg+orig \
               -base 'FUNC002_001+orig[4]'                                \
               -prefix FUNC002_007reg FUNC002_007+orig

    where FUNCsss_nnn is the nnn-th EPI time series from the sss-th session;
    and the base volume for each session is taken as the #4 sub-brick from
    the first EPI time series.  Some points to ponder:

    * If you didn't do it before (step 1), you probably should zeropad
        FUNC001_001+orig or FUNC001_001reg+orig before doing the command
        above.  If you failed to zeropad dataset FUNC002_007+orig, it will
        be zeropadded during the 3dvolreg run to match the -gridparent.
    * I recommend the use of -verbose with inter-session registration, so
        that you can see what is going on.
    * After the EPI time series are all registered to the master session,
        the activation analysis fun can now begin!
    * The slice time-offsets in FUNC002_007reg will be adjusted to allow
        for dataset shifts in the slice-direction from FUNC002_007+orig to
        FUNC001_001reg+orig.  If you use the -verbose option and 3dvolreg
        decides this is needed, it will print out the amount of shift
        (always an integer number of slices).
    * However, if there is any significant rotation between the sessions,
        the whole concept of voxel time shifts (slicewise or otherwise)
        becomes meaningless, since the data from different time-offsets
        will be mixed up by the inter-slice interpolation.  If preserving
        this time information is important in your analysis, you probably
        need to analyze the data from each session BEFORE aligning to
        the master session.  After the analysis, 3drotate can be used with
        -rotparent/-gridparent (as outlined earlier) to transform the
        functional maps to the master session brain alignment.
    * An alternative would be to use 3dTshift on the EPI time series, to
        interpolate the slices to the same time origin.  Then registration
        and intersession alignment could proceed.  You can also do this
        during the 3dvolreg run by adding the switch '-tshift ii' to the
        3dvolreg command line (before the input file).  Here, 'ii' is the
        number of time points to ignore at the start of the time series
        file -- you don't want to interpolate in time using the non-T1
        equilibrated images at the beginning of the run:

        3dTshift -ignore 4 -prefix FUNC002_base FUNC002_001+orig

        3dvolreg -clipit -zpad 4 -verbose -tshift 4                         \
                 -rotparent ANAT002reg+orig -gridparent FUNC001_001reg+orig \
                 -base 'FUNC002_base+orig[4]'                               \
                 -prefix FUNC002_007reg FUNC002_007+orig

        In this example, the first 4 time points of FUNC002_007+orig are
        ignored during the time shifting.  Notice that I prepared a temporary
        dataset (FUNC002_base) to act as the registration base, using 3dTshift.
        This is desirable, since the FUNC002_007 bricks will be time shifted
        prior to registration with the base brick.  Since the base brick is NOT
        from FUNC002_007, it should be time shifted in the same way.  (After
        FUNC002_base has been used, it can be discarded.)

    * The FUNC datasets from session 001 don't need (or want) the -rotparent,
      -gridparent options, and would be registered with some command like

        3dvolreg -clipit -zpad 4                          \
                 -base 'FUNC001_001+orig[4]'              \
                 -prefix FUNC001_007reg FUNC001_007+orig

-------------------------------------
Apologia and Philosophical Maundering
-------------------------------------
I'm sorry this seems so complicated.  It is another example of the intricacy
of FMRI data and analysis -- there is more than one reasonable way to proceed.

-----------------------------------
Robert W Cox - 14 Feb 2001
National Institute of Mental Health
rwcox@nih.gov
-----------------------------------

====================================================================
Registration Information Stored in Output Dataset Header by 3dvolreg
====================================================================
The following attributes are stored in the header of the new dataset.
Note that the ROTCOM and MATVEC values do NOT include the effects of
any -rotparent transformation that is multiplied in after the internal
realignment transformation is computed.

VOLREG_ROTCOM_NUM    = number of sub-bricks registered
 (1 int)               [may differ from number of sub-bricks in dataset]
                       [if "3dTcat -glueto" is used later to add images]

VOLREG_ROTCOM_xxxxxx = the string that would be input to 3drotate to
 (string)              describe the operation, as in
                   -rotate 1.000I 2.000R 3.000A -ashift 0.100S 0.200L 0.300P
                       [xxxxxx = printf("%06d",n); n=0 to ROTCOM_NUM-1]

VOLREG_MATVEC_xxxxxx = the 3x3 matrix and 3-vector of the transformation
 (12 floats)           generated by the above 3drotate parameters; if
                       U is the matrix and v the vector, then they are
                       stored in the order
                           u11 u12 u13 v1
                           u21 u22 u23 v2
                           u31 u32 u33 v3
                       If extracted from the header and stored in a file
                       in just this way (3 rows of 4 numbers), then that
                       file can be used as input to "3drotate -matvec_dicom"
                       to specify the rotation/translation.

VOLREG_CENTER_OLD    = Dicom order coordinates of the center of the input
 (3 floats)            dataset (about which the rotation takes place).

VOLREG_CENTER_BASE   = Dicom order coordinates of the center of the base
 (3 floats)            dataset.

VOLREG_BASE_IDCODE   = Dataset idcode for base dataset.
 (string)

VOLREG_BASE_NAME     = Dataset .HEAD filename for base dataset.
 (string)

These attributes can be extracted in a shell script using the program
3dAttribute, as in the csh example:

  set rcom = `3dAttribute VOLREG_ROTCOM_000000 Xreg+orig`
  3drotate $rcom -heptic -clipit -prefix Yreg Y+orig

which would apply the same rotation/translation to dataset Y+orig as was
used to produce sub-brick #0 of dataset Xreg+orig.
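
The VOLREG_MATVEC attribute can be used in much the same way; a csh fragment
along these lines (filenames illustrative; the awk reshape is just one way to
get the 3-rows-of-4 layout described above) should be equivalent:

  3dAttribute VOLREG_MATVEC_000000 Xreg+orig                              \
    | awk '{print $1,$2,$3,$4; print $5,$6,$7,$8; print $9,$10,$11,$12}'  \
    > matvec.1D
  3drotate -matvec_dicom matvec.1D -heptic -clipit -prefix Yreg2 Y+orig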

To see all these attributes, one could execute

  3dAttribute -all Xreg+orig | grep VOLREG
==============================================================================

==============================
EXAMPLE and NOTES by Ziad Saad
==============================
This is an example illustrating how to bring data sets from multiple sessions
on the same subject into alignment with each other.  This is meant to be a
complement to Bob Cox's notes (above) on the subject. The script @CommandGlobb
is supplied with the AFNI distributions and is used to execute an AFNI command
line program on multiple files automatically.

The master SPGR is S1+orig and the new SPGR S2+orig.  Both should have
the same resolution.

Step #1: Align S2 to S1
-----------------------
    3dvolreg -clipit -twopass -twodup -zpad 8 -rotcom -verbose \
             -base S1+orig -prefix S2_alndS1 S2+orig >>& AlignLog

    # (the file AlignLog will contain all the output of 3dvolreg)


In the next step, we will rotate the EPI data sets from the new session
(E2_*) to bring them into alignment with an EPI data set from the master
session (E1_1).  All of E2_* and E1_* have the same resolution.

Step #2: Inter-session registration of E1_*
-------------------------------------------
Because we will be combining EPI time series from different sessions, it is
best to remove slice timing offsets from the EPI time series.  Time series
offsets are defined on a slice-by-slice basis and become meaningless when
the slices are shifted around and rotated.  Time Shifting (TS) can be applied
by 3dvolreg; however, since TS occurs prior to registration, you should use a
base with time-shifted time series.

    #create Time Shifted Base
    3dTshift -ignore 0 -prefix E1_1-TSbase E1_1+orig

    #inter-session registration of E1_*
    @CommandGlobb -com '3dvolreg -Fourier -tshift 0 -base E1_1-TSbase+orig[100]' \
    -newxt vr -list E1_*+orig.HEAD

Note that we used the [100] sub-brick of the time-shifted E1_1-TSbase as the
base for registration.  In our practice, this is the sub-brick that is closest
in time to the SPGR acquisition, which we do at the end of the imaging session.
If you do your SPGR (MP-RAGE, ...) at the start of the imaging session, it
would make more sense to use the [4] sub-brick of the first EPI dataset as
the EPI registration base for that session ([4] to allow for equilibration
of the longitudinal magnetization).
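
If your SPGR comes at the start of the session, the Step #2 command might
look like this variant of the one above (a sketch; only the base sub-brick
index changes):

    #inter-session registration of E1_*, SPGR acquired at session start
    @CommandGlobb -com '3dvolreg -Fourier -tshift 0 -base E1_1-TSbase+orig[4]' \
    -newxt vr -list E1_*+orig.HEAD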

Step #3: Padding the master session EPI datasets
------------------------------------------------
Pad the master echo planar data (E1_*) to ensure that you have a large enough
spatial coverage to encompass E2_* (and E3_*, E4_*, ...).  You do not have to
do this, but all of E2_*, E3_*, etc. will be cropped (or padded) to match E1_*.
You may choose to restrict the volume analyzed to the one common to all of the
E* data sets but that can be done using masks at a later stage. Here, we'll pad
with 4 slices on either side of the volume.

    @CommandGlobb -com '3dZeropad -z 4' -newxt _zpd4 -list E1_*vr+orig.BRIK

Step #4: Register E2_* to E1_*
------------------------------
Note that E2_* inter-scan motion correction will be done simultaneously with
the intra-scan registration.

    #create a time shifted base echo planar data set (for inter-scan registration)
    3dTshift -ignore 0 -prefix E2_1-TSbase E2_1+orig

    #perform intra and inter - scan registration
    #[NOTE: the '3dvolreg ...' command must all be on one line -- it ]
    #[      is only broken up here to make printing this file simpler]
    @CommandGlobb -com \
     '3dvolreg -clipit -zpad 4 -verbose -tshift 0 -rotparent S2_alndS1+orig
      -gridparent E1_1vr_zpd4+orig -base E2_1-TSbase+orig[100]' \
      -newxt vr_alndS1 -list E2_*+orig.HEAD

-----------------------------------------
Ziad Saad, FIM/LBC/NIMH/NIH, Feb 27, 2001
ziad@nih.gov
-----------------------------------------



AFNI file: README.render_scripts
Format of AFNI Rendering Scripts
================================
This document assumes you are familiar with operation of the AFNI Volume
Rendering plugin (plug_render.c).

By examining the output of one of the "Scripts->Save" buttons, you can
probably guess most of the format of a .rset file.  Each rendering frame
starts with the string "***RENDER", and then is followed by a list of
variable assignments.  Each variable assignment should be on a separate
line, and the blanks around the "=" signs are mandatory.

Please note well that you cannot use the Automate expression feature in a
.rset file: the right hand side of each assignment must be a number, or
a symbolic name (as for the cutout types, infra).  You also cannot use the
"Expr > 0" cutout type in a .rset file, since that requires a symbolic
expression on the RHS of the assignment, and the .rset I/O routines aren't
programmed to handle this special case.

When a .rset file is written out, the first ***RENDER frame contains
definitions of all the rendering state variables.  Succeeding frames
only define variables that change from the previous frame.  Comments
may be included using the C++ notation "//" (=comment to end of line).

At the present time, the dataset name (_name) variables are not used by
the renderer.  Some other variables are only used if certain toggles on
the "Scripts" menu are activated:
  * The sub-brick index variables (*_ival) are used only if the
      "Brick Index?" toggle is activated.
  * The brightness (bright_) variables and opacity (opacity_)
      variables are used only if the "Alter Grafs?" toggle is activated.
  * The dataset ID codes (_idc variables) are only used if the
      "Alter Dsets?" toggle is activated.

The section below is a .rset file to which I have added comments in order
to indicate the function of each variable.

Bob Cox -- July 01999
        -- updated April 02000
==============================================================================

***RENDER                                           // starts a frame
  dset_name = /usr3/cox/verbal/strip+tlrc.HEAD      // not used now
  func_dset_name = /usr3/cox/verbal/func+tlrc.HEAD  // not used now
  dset_idc = MCW_OYJRIKDHKMV                        // used by "Alter Dsets?"
  func_dset_idc = MCW_PTEAZEWVTIG                   // used by "Alter Dsets?"
  dset_ival = 0                                     // sub-brick of underlay
  func_color_ival = 0                               // sub-brick of overlay
  func_thresh_ival = 1                              // sub-brick of overlay
  clipbot = 0                                       // underlay clipping
  cliptop = 128                                     //   ranges
  angle_roll =  55                                  // viewing angles in
  angle_pitch =  120                                //   degrees
  angle_yaw =  0
  xhair_flag = 0                                    // 1 = show crosshairs
  func_use_autorange = 1                            // Autorange button
  func_threshold =  0.5                             // between 0 and 1
  func_thresh_top =  1                              // 1, 10, 1000, or 10000
  func_color_opacity =  0.5                         // between 0 and 1
  func_see_overlay = 0                              // 1 = show color
  func_cut_overlay = 0                              // 1 = cut overlay
  func_kill_clusters = 0                            // 1 = kill clusters
  func_clusters_rmm =  1                            // rmm parameter in mm
  func_clusters_vmul =  200                         // vmul parameter in mm**3
  func_range =  10000                               // used if autorange = 0
 // new pbar values
  pbar_mode  = 0                                    // 1 = positive only
  pbar_npane = 9                                    // number of color panes
  pbar_pval[0] =  1                                 // inter-pane thresholds
  pbar_pval[1] =  0.75
  pbar_pval[2] =  0.5
  pbar_pval[3] =  0.25
  pbar_pval[4] =  0.05
  pbar_pval[5] = -0.05
  pbar_pval[6] = -0.25
  pbar_pval[7] = -0.5
  pbar_pval[8] = -0.75
  pbar_pval[9] = -1
  opacity_scale =  1
 // new cutout values
  cutout_num   = 3                                  // from 0 to 9
  cutout_logic = AND                                // could be OR
  cutout_type[0]   = CUT_ANTERIOR_TO
  cutout_mustdo[0] = NO
  cutout_param[0] =  0
  cutout_type[1]   = CUT_RIGHT_OF
  cutout_mustdo[1] = NO
  cutout_param[1] =  0
  cutout_type[2]   = CUT_SUPERIOR_TO
  cutout_mustdo[2] = YES
  cutout_param[2] =  30
 // new bright graf values - used by "Alter Grafs?"
  bright_nhands = 4                                 // number of graph handles
  bright_spline = 0                                 // 1 = spline interpolation
  bright_handx[0] = 0                               // (x,y) coordinates of
  bright_handy[0] = 0                               //   handle positions
  bright_handx[1] = 38
  bright_handy[1] = 0
  bright_handx[2] = 204
  bright_handy[2] = 247
  bright_handx[3] = 255
  bright_handy[3] = 255
 // new opacity graf values - used by "Alter Grafs?"
  opacity_nhands = 4
  opacity_spline = 0
  opacity_handx[0] = 0
  opacity_handy[0] = 0
  opacity_handx[1] = 42
  opacity_handy[1] = 0
  opacity_handx[2] = 192
  opacity_handy[2] = 192
  opacity_handx[3] = 255
  opacity_handy[3] = 255


***RENDER                                           // starts next frame
  angle_roll =  70                                  // changed roll angle
 // new cutout values
  cutout_num   = 0                                  // changed cutouts
  cutout_logic = OR


***RENDER                                           // starts next frame
  cliptop = 90                                      // changed underlay clip
  angle_roll =  55                                  // changed roll angle
 // new cutout values
  cutout_num   = 3                                  // changed cutouts
  cutout_logic = AND
  cutout_type[0]   = CUT_ANTERIOR_TO
  cutout_mustdo[0] = NO
  cutout_param[0]  =  0
  cutout_type[1]   = CUT_RIGHT_OF
  cutout_mustdo[1] = NO
  cutout_param[1]  =  0
  cutout_type[2]   = CUT_SUPERIOR_TO
  cutout_mustdo[2] = YES
  cutout_param[2] =  30

// end-of-file means no more frames
==========================================================================
The name codes to use for the "cutout_type" variables are

   name code in .rset    menu label in AFNI
   ------------------    ------------------
   CUT_NONE           =  No Cut        // doesn't do much
   CUT_RIGHT_OF       =  Right of      // the rectangular cuts
   CUT_LEFT_OF        =  Left of
   CUT_ANTERIOR_TO    =  Anterior to
   CUT_POSTERIOR_TO   =  Posterior to
   CUT_INFERIOR_TO    =  Inferior to
   CUT_SUPERIOR_TO    =  Superior to
   CUT_SLANT_XPY_GT   =  Behind AL-PR  // the diagonal cuts
   CUT_SLANT_XPY_LT   =  Front AL-PR
   CUT_SLANT_XMY_GT   =  Front AR-PL
   CUT_SLANT_XMY_LT   =  Behind AR-PL
   CUT_SLANT_YPZ_GT   =  Above AS-PI
   CUT_SLANT_YPZ_LT   =  Below AS-PI
   CUT_SLANT_YMZ_GT   =  Below AI-PS
   CUT_SLANT_YMZ_LT   =  Above AI-PS
   CUT_SLANT_XPZ_GT   =  Above RS-LI
   CUT_SLANT_XPZ_LT   =  Below RS-LI
   CUT_SLANT_XMZ_GT   =  Below RI-LS
   CUT_SLANT_XMZ_LT   =  Above RI-LS
   CUT_EXPRESSION     =  Expr > 0      // don't use this in a .rset file!
   CUT_TT_ELLIPSOID   =  TT Ellipsoid  // pretty useless
   CUT_NONOVERLAY     =  NonOverlay++  // mildly useless



AFNI file: README.roi
Region-of-Interests (ROIs) in AFNI 2.20
---------------------------------------
A few tools for selecting voxel subsets and extracting their data for
external analysis are included with AFNI 2.20.  These tools are quite
new and crude, and (God willing) will be improved as time goes on.
Nonetheless, it is possible to do some useful work with them now.

The ROI stuff is mostly implemented as a set of plugins.  These all have
extensive help, so I won't give all the details here.  You may need to
write some C programs to calculate useful results after extracting the
data you want.

Selecting an ROI: plugin "Draw Dataset" [author: RW Cox]
--------------------------------------------------------
This plugin lets you draw values into a dataset brick.  The idea is to
start with a dataset that is all zeros and then draw nonzero values over
the desired regions.  An all zero dataset of a size equivalent to an
existing dataset can be created using the "Dataset Copy" plugin.
Another way to create a starting point for a mask dataset would be
to use the "Edit Dataset" plugin or the "3dmerge" program (e.g., to
pick out all voxels with a correlation coefficient above a threshold).
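
From the command line, a quick way to make such an all-zero copy is 3dcalc
(an alternative to the plugins mentioned above); a minimal sketch, assuming
anat+orig is the existing dataset:

  3dcalc -a anat+orig -expr 0 -prefix allzero

The result is a zero-filled dataset on the same grid as anat+orig, ready
for drawing.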

Normally, you would create the mask dataset as type "fim".  This would
allow it to be displayed as a functional overlay on the anatomical
background.

Mask datasets tend to be mostly zeros.  You can use the ability of AFNI to
read/write compressed datasets to save disk space.  See the file
"README.compression" and the plugin "BRIK compressor" for details.

To be useful, a mask dataset must be created at the resolution of the
datasets it will be used with.  This means that if you create a mask
at the anatomical resolution, the functional datasets to which you apply
it must be at that resolution also.

Averaging Data Defined by a ROI Mask: program "3dmaskave" [author: RW Cox]
--------------------------------------------------------------------------
This program lets you compute the average over a ROI of all voxel values
from an input dataset.  The ROI is defined by a mask dataset.  The average
value is computed for each sub-brick in the input, so you can use this to
create an average time series.  The output is written to stdout -- it can
be redirected (using '>') into a file.  For more information, try
"3dmaskave -help".  An alternative to this command-line program is the
similar plugin "ROI Average", which you can use interactively from
within AFNI.
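
A minimal example (dataset names hypothetical), capturing the ROI average
time series into a file:

  3dmaskave -mask mask+orig func+orig > avg.1D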

Making a Dump File: plugin "3D Dump98" [author: Z Saad]
-------------------------------------------------------
This plugin lets you write to disk a list of all voxels in a dataset with
values in a certain range.  The ROI application is to list out the voxels
in the mask dataset.

Extracting Data Using a Dump File: plugin "3D+t Extract" [author: Z Saad]
-------------------------------------------------------------------------
This plugin lets you save all the time series from voxels listed in a mask
file.  They are in an ASCII format, which is designed to make them easier
to import into programs such as Matlab.

Converting a Mask File to a Different Resolution [author: RW Cox]
-----------------------------------------------------------------
It is most convenient to draw the ROI as a functional overlay on the same
grid as a high resolution anatomical dataset.  Applying this to a low
resolution functional dataset can be problematic.  One solution is given
below.  Another solution is to use the new (07 Feb 1999) program
"3dfractionize".  This will resample an input mask dataset created at high
resolution to the same resolution as another dataset (the "template").
See the output of "3dfractionize -help" for usage details.
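
For instance, a sketch along these lines (dataset names hypothetical, and the
-clip threshold a matter of taste) resamples a hi-res mask onto a functional
grid:

  3dfractionize -template func+orig -input mask+orig \
                -prefix mask_funcres -clip 0.5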

=========================================================================
** The following documentation is by Michael S. Beauchamp of the NIMH. **
=========================================================================

Making Average Time Series Defined by a ROI -- MSB 7/21/98
----------------------------------------------------------
  One of the most useful ways to visualize FMRI data is an average MR time
series from a group of voxels. AFNI makes this easy, with the "ROI Average"
plug-in or the "maskave" stand-alone program. The user inputs a mask BRIK
specifying which voxels to average, and a 3d+time BRIK containing the time
series data. AFNI then outputs a text file with the average value at each
time point (and standard deviation, if desired) which can be graphed in
Excel or any other plotting program.
  Some difficulties arise when the mask BRIK and the 3d+time BRIK have
different co-ordinate spaces or voxel dimensions. For instance, when the
"Draw Dataset" plug-in is used to define an anatomical region of interest
(like a specific gyrus) on a high-resolution (e.g. SPGR) anatomical
dataset. The user then wishes to find the average time-series from all
voxels in this region. However, the echo-planar functional dataset is
collected at a lower spatial resolution (e.g. 4 mm x 4 mm x 4 mm voxel size)
and a smaller volume (e.g. 24 cm x 24 cm x 12 cm) than the anatomical
dataset (e.g. 1 mm x 1 mm x 1.2 mm in a 24 x 24 x 17 cm volume). Because of
the differing voxel sizes and image volumes, the mask dataset cannot be
directly applied to the 3d+time dataset.
  To solve this problem, both the mask and 3d+time datasets are converted to
the same image volume by transformation to Talairach space.
Simplest Method:
  For the mask dataset, after the Talairach transformation is performed on
the hi-res anatomical, a transformed BRIK is written to disk (with the
"Write Anat" button), a copy made, and "Draw Dataset" performed on the
duplicate Talairach BRIK to make a mask in Talairach space.
Next, "Switch Underlay"/"Talairach View"/"Write Anat" is used to make a
Talairach version of the 3d+time BRIK. Then, "maskave" or "ROI Average" can
be used to make the average time series.
  Problem: a Talairach 3d+time BRIK at the default 1 mm resolution can be
enormous -- c. 1 GB.  It is therefore impractical if average time series from
many subjects or tasks are needed.
Therefore, the anatomical and functional BRIKs can be sampled at a lower
resolution to decrease the disk space demands.
  More Complex Method: 
Create a duplicate of the original anatomical BRIK with 3ddup; click "Warp
Anat on Demand", and set Resam (mm) to 4. Click "Write Anat" to make a 4 mm
resampled dataset. "Draw Dataset" can be used to draw on the original
dataset before "Write Anat", or on the resampled Talairach BRIK after
"Write Anat". However, after "Write Anat" is performed, drawing on the
original or Talairach BRIKs will not change the other one.
Write out a Talairach BRIK of the 3d+time dataset resampled at 4 mm (as
above). Then, "maskave" or "ROI Average" can be used to make the average
time series.



AFNI file: README.setup
Setting Up AFNI Colors and Palettes
===================================
You can set up the colors and palette tables used by AFNI in the
file .afnirc in your home directory.  This file will be read when
AFNI starts.  Each section of this file starts with a string of
the form "***NAME", where "NAME" is the name of the section.  At
present, three sections are available:
   ***COLORS      -- for defining new colors
   ***PALETTES    -- for defining the layout of colors used for
                     functional overlays (the "palette tables").
   ***ENVIRONMENT -- for defining Unix environment variables that
                     affect the way AFNI works.

Note that you can have more than one of each section in the setup
file (although there is no particular reason why this is needed).
Comments can be put in the .afnirc file using the C++ "//" style:
everything from the "//" to the end of line will be ignored.

The file AFNI.afnirc in the afni98.tgz distribution contains
an example of defining colors and using them to create palettes
for functional overlay.

Defining the Colors Available for Overlays
------------------------------------------
The "***COLORS" section is used to define colors that will be added
to the color menu that is used for functional overlays, crosshairs,
etc.  A sample is

***COLORS
   qblue = #804cff            // RGB hexadecimal color definition
   zblue = rgbi:0.5/0.3/1.0   // RGB floating point intensities

The general form of a color definition line is
   label = definition
where "label" is what you want to appear on the menu, and "definition"
is a valid X11 color definition.  (The spaces around "=" are required.)
In the first line, I have defined the label "qblue" using hexadecimal
notation for the RGB components (each one has 2 hex digits).  In the
second line, I have defined the color "zblue" using the RGB intensity
format, where each of the numbers after the string "rgbi:" is between
0.0 and 1.0 (inclusive) and indicates the intensity of the desired color
component.

Note that the file /usr/lib/X11/rgb.txt (or its equivalent) contains
the definitions of many color names that the X11 system recognizes.
See 'man XLookupColor' for more information on the many ways to define
colors to X11.
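
For example, a named X11 color can be given directly as the definition
(a hypothetical snippet):

***COLORS
   ltgreen = limegreen        // X11 color name under a new label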

If you use a label that has been defined previously (either internally
within AFNI, or earlier in the setup file), then that color will be
redefined.  That is, you could do something stupid like
   blue = red
and AFNI won't complain at all.  ("blue" is one of the pre-defined colors
in AFNI.  I suppose you could use this 'feature' to make AFNI into some
sort of twisted Stroop test.)  Color labels are case sensitive, so
"BLUE = red" is different than "blue = red".  You cannot redefine the
label "none".

On 8 bit graphics systems (the vast majority), you must be parsimonious
when defining new colors.  You may run out of color "cells", since there
are only 2**8 = 256 available at one time.  All the colors used for
the windowing system, the buttons, the grayscale images, and the overlays
must come from this limited reservoir.  On a 12 bit system (e.g., SGI),
there are 2**12 = 4096 color cells available, which is effectively
unlimited.

Defining the Palette Tables
---------------------------
A palette is a listing of colors and separating numerical values that
are used to define a functional overlay scheme.  These are controlled
by the "***PALETTES" section in the setup file.  Each palette has a
name associated, and a number of color "panes".  For example:

***PALETTES
  rainbow [3]
    1.0 -> blue     // The spaces around "->" are required
    0.2 -> green
   -0.4 -> hotpink  // There are N lines for palette entry [N]

This defines a palette table "rainbow", and in the entry for 3 panes
sets up the pbar to have 1.0 as its maximum value, then to have the
color "blue" be assigned to the pane that runs down to 0.2, then the
color "green" assigned to the next pane running down to -0.4, and then
the color "hotpink" assigned to the last pane (which will run down to
-1.0, since the minimum value is the negative of the maximum value).

Each palette table can have palettes from 2 panes up to 20, denoted
by [2] to [20].  A palette table can also have palettes that are
restricted to positive values only.  These are denoted by [2+] to
[20+].  An example is

  rainbow [3+]
    1.0 -> blue
    0.5 -> none
    0.1 -> limegreen

If the rainbow palette is the active one, then when you switch to positive-
only function mode (using the "Pos" toggle button) and then to 3 panes
(using the "#" chooser), the top pane will run from 1.0 to 0.5 in blue,
the second pane from 0.5 to 0.1 with no color, and the third pane from
0.1 to 0.0 in limegreen.

It is possible to define palettes that only change the colors,
not the separating values.  This is done by using the special
word IGNORE in place of the values:

  rainbow [4+]
    IGNORE -> blue
    IGNORE -> green
    IGNORE -> hotpink
    IGNORE -> none

All of the values must be IGNORE, or none of them.  When a
palette like the one above is loaded, only the colors in the
pbar will change -- the pane heights will be left unchanged.

The New Palette Menu
--------------------
Attached to the "Inten" label atop the color pbar is a popup menu
that is activated using mouse button 3.  This menu has the following
items:

   Equalize Spacing   = Sets the spacings in the currently visible
                          palette to be uniform in size.

   Set Top Value      = Sets the top value in the currently visible
                          palette to a number you choose.  Note that
                          you will probably need to adjust the
                          "Range" control if you change the top value
                          from 1.0, since the thresholds for making
                          functional overlays are determined by
                          multiplying the pbar values times the
                          value in the "Range" or "autoRange" control.

   Read in palette    = Reads in a palette file.  This is another
                          file like .afnirc (with ***COLORS and/or
                          ***PALETTES sections).  AFNI expects such
                          files to have names that end in ".pal".
                    N.B.: New colors defined this way will NOT be visible
                          on previously created color menus (such as the
                          Crosshairs Color chooser), but will be visible
                          on menus created later.
                    N.B.: Reading in a palette that has the same name
                          as an existing one will NOT create a new one.

   Write out palette  = Writes out the currently visible palette to
                          a ".pal" file.  In this way, you can set up
                          a palette that you like, write it out, and
                          then read it back in later.  (Or you could
                          copy the data into your .afnirc file, and
                          it would be available in all later runs.)
                          The program asks you for a palette name,
                          which is also used for the filename -- if
                          you enter "elvis" for the palette name, then
                          AFNI will write to the file "elvis.pal".  If
                          this file already exists, the palette is
                          appended to the end of the file; otherwise,
                          the file is created.

   Show Palette Table = Pops up a text window showing the definitions
                          of all the colors and palettes.  Mostly useful
                          for debugging purposes.

   Set Pal "chooser"  = A menu that lets you pick the palette table
                          that is currently active.  Note that reading
                          in a palette table does not make it active --
                          you must then choose it from this menu.  Writing
                          a palette out does not enter it into this menu.
         ======>>> N.B.:  If a palette table does not have an entry for a
                          given number of panes, then nothing will happen
                          until you use the "#" chooser to make the number
                          of panes correspond to the selected palette table.
         => 18 Sep 1998:  In versions of AFNI released after this date,
                          reading in a palette file causes the last
                          palette in that file to become the active one.
                          [Suggested by SM Rao of MCW Neuropsychology]

Unix Environment Variables [June 1999]
--------------------------------------
You can set Unix environment variables for an interactive AFNI run in
the .afnirc file.  This is done with the ***ENVIRONMENT section.  An
example is

***ENVIRONMENT
  AFNI_HINTS = YES
  AFNI_SESSTRAIL = 3

The blanks around the "=" are required, since that is how the setup
processing routine breaks lines up into pieces.  For a list of the
environment variables that affect AFNI, see README.environment.

The Future
----------
I will probably add more sections to the setup file.  Someday.  Maybe.

=======================================
| Robert W. Cox, PhD                  |
| National Institute of Mental Health |
| Bethesda, MD USA                    |
=======================================



AFNI file: README.volreg
Using 3dvolreg and 3drotate to Align Intra-Subject Inter-Session Datasets
=========================================================================
When you study the same subject on different days, to compare the datasets
gathered in different sessions, it is first necessary to align the volume
images.  This note discusses the practical difficulties posed by this
problem, and the AFNI solution.

The difficulties include:
 (A) Subject's head will be positioned differently in the scanner -- both
     in location and orientation.
 (B) Low resolution echo-planar images are harder to re-align accurately
     than high resolution SPGR images, when the subject's head is rotated.
 (C) Anatomical coverage of the slices will be different, meaning that
     exact overlap of the data from two sessions may not be possible.
 (D) The anatomical relationship between the EPI and SPGR (MP-RAGE, etc.)
     images may be different on different days.
 (E) The coordinates in the scanner used for the two scanning sessions
     may be different (e.g., slice coverage from 40I to 50S on one day,
     and from 30I to 60S on another).

(B-D) imply that simply using 3dvolreg to align the EPI data from session 2
with EPI data from session 1 won't work well.  3dvolreg's calculations are
based on matching voxel data, but if the images don't cover the same
part of the brain fully, they won't register well.

The AFNI solution is to register the SPGR images from session 2 to session 1,
and then to use this transformation to move the EPI data from session 2 in
the same way.  The use of the SPGR images as the "parents" gets around
difficulty (B), and is consistent with the extant AFNI processing philosophy.
The SPGR
and is consistent with the extant AFNI processing philosophy.  The SPGR
alignment procedure specifically ignores the data at the edges of the bricks,
so that small (5%) mismatches in anatomical coverage shouldn't be important.
(This also helps eliminate problems with various artifacts that occur at the
edges of images.)

Problem (C) is addressed by zero-padding the EPI datasets in the slice-
direction.  In this way, if the EPI data from session 2 covers a somewhat
different patch of brain than from session 1, the bricks can still be made
to overlap, as long as the zero-padding is large enough to accommodate the
required data shifts.  Zero-padding can be done in one of 3 ways:
 (1) At dataset assembly time, in to3d (using the -zpad option); or
 (2) At any later time, using the program 3dZeropad; or
 (3) By 3drotate (using -gridparent with a previously zero-padded dataset).

Suppose that you have the following 4 datasets:
  S1 = SPGR from session 1    E1 = EPI from session 1
  S2 = SPGR from session 2    E2 = EPI from session 2

Then the following commands will create datasets registered from session 2
into alignment with session 1:

  3dvolreg -twopass -twodup -heptic -clipit -base S1+orig \
           -prefix S2reg S2+orig

  3drotate -heptic -clipit -rotparent S2reg+orig -gridparent E1+orig \
           -prefix E2reg E2+orig

You may want to create the datasets E1 and E2 using the -zpad option to
to3d, so that they have some buffer space on either side to allow for
mismatches in anatomical coverage in the slice direction.  Note that
the use of the "-gridparent" option to 3drotate implies that the output
dataset E2reg will be sampled to the same grid as dataset E1.  If needed,
E2reg will be zeropadded in the slice-direction to make it have the same
size as E1.

If you want to zeropad a dataset after creation, this can be done using
a command line like:

  3dZeropad -prefix E1pad -z 2 E1+orig

which will add 2 slices of zeros to each slice-direction face of each
sub-brick of dataset E1, and write the results to dataset E1pad.

*****************************************************************************

Registration Information Stored in Output Dataset Header by 3dvolreg
=====================================================================
The following attributes are stored in the header of the new dataset:

VOLREG_ROTCOM_NUM    = number of sub-bricks registered
 (1 int)               [may differ from number of sub-bricks in dataset]
                       [if "3dTcat -glueto" is used later to add images]

VOLREG_ROTCOM_xxxxxx = the string that would be input to 3drotate to
 (string)              describe the operation, as in
                   -rotate 1.000I 2.000R 3.000A -ashift 0.100S 0.200L 0.300P
                       [xxxxxx = printf("%06d",n); n=0 to ROTCOM_NUM-1]

VOLREG_MATVEC_xxxxxx = the 3x3 matrix and 3-vector of the transformation
 (12 floats)           generated by the above 3drotate parameters; if
                       U is the matrix and v the vector, then they are
                       stored in the order
                           u11 u12 u13 v1
                           u21 u22 u23 v2
                           u31 u32 u33 v3
                       If extracted from the header and stored in a file
                       in just this way (3 rows of 4 numbers), then that
                       file can be used as input to "3drotate -matvec_dicom"
                       to specify the rotation/translation.

VOLREG_CENTER_OLD    = Dicom order coordinates of the center of the input
 (3 floats)            dataset (about which the rotation takes place).

VOLREG_CENTER_BASE   = Dicom order coordinates of the center of the base
 (3 floats)            dataset.

VOLREG_BASE_IDCODE   = Dataset idcode for base dataset.
 (string)

VOLREG_BASE_NAME     = Dataset .HEAD filename for base dataset.
 (string)

These attributes can be extracted in a shell script using the program
3dAttribute, as in the csh example:

  set rcom = `3dAttribute VOLREG_ROTCOM_000000 Xreg+orig`
  3drotate $rcom -heptic -clipit -prefix Yreg Y+orig

which would apply the same rotation/translation to dataset Y+orig as was
used to produce sub-brick #0 of dataset Xreg+orig.

To see all these attributes, one could execute

  3dAttribute -all Xreg+orig | grep VOLREG

*****************************************************************************
Robert W Cox - 07 Feb 2001
National Institute of Mental Health
rwcox@codon.nih.gov



AFNI file: README.web
Reading Datasets Across the Web
===============================
As of 26 Mar 2001, the interactive AFNI program has the ability to read
dataset files across the Web, using the HTTP or FTP protocols.  There
are two ways to use this, assuming you know a Web site from which you can
get AFNI datasets.

The first way is to specify individual datasets; for example

  afni -dset http://some.web.site/~fred/elvis/anat+orig.HEAD

This will fetch the single dataset, and start AFNI.

The second way is if the Web site has a list of datasets stored in a file
named AFNILIST.  If you specify this as the target for a Web dataset, AFNI
will read this file, and retrieve each dataset specified in it (one
dataset per line); for example

  afni -dset http://some.web.site/~fred/elvis/AFNILIST

where the AFNILIST file contains the lines

  anat+tlrc.HEAD
  func+tlrc.HEAD
  reference.1D

Note that the AFNILIST file can contain names of 1D timeseries files.
One way for the Web site creator to create an AFNILIST file would be to
put all the dataset files (.HEAD, .BRIK.gz, .1D) into the Web directory,
then do "ls *.HEAD *.1D > AFNILIST" in the Web directory.

The "Define Datamode" control panel has a new button "Read Web" that
will let you load datasets (or AFNILISTs) after you have started the
program.  These datasets will be loaded into the current session.
However, you cannot write out datasets read in this way.  Also, these
datasets are locked into memory, so if too many are present, your
computer system may get into trouble (e.g., don't download ten 60 MB
datasets at once).

ftp:// access is done via anonymous FTP; http:// access uses port 80.
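
For example, the FTP form looks just like the HTTP form (URL hypothetical):

  afni -dset ftp://some.web.site/pub/elvis/anat+orig.HEAD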



AFNI file: README.ziad
The plugins are written for AFNI98.

plug_delay_v2   ->  Hilbert Delay98 plugin
plug_3Ddump_v2  ->  Plugin for extracting voxel values into an ascii file
plug_4Ddump     ->  Plugin for extracting voxel time series into an ascii file
plug_extract    ->  A plugin similar to plug_4Ddump that outputs a mask brick
                    in addition to the ascii file containing voxel time series


For help on the plugins, you need to compile them and click the help button
of the plugin.

For example, to compile plug_delay_V2 do the following:

	1- copy  plug_delay_V2.c and plug_delay_V2.h files into the directory 
	   where the AFNI98 code and makefiles are.
	2- make plug_delay_V2.so
	3- move the .so file plug_delay_V2.so into the directory where the 
	   AFNI executables and other plugin so files are.
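
In csh, those steps might look like this (directory names hypothetical):

	cd ~/afni98_src                           # AFNI98 code and makefiles
	cp ~/plug_delay_V2.c ~/plug_delay_V2.h .  # 1- copy the sources in
	make plug_delay_V2.so                     # 2- build the plugin
	mv plug_delay_V2.so ~/abin                # 3- install with the AFNI binaries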

The script file @Proc.latest is a sample csh script that uses programs available 
in the AFNI98 package for processing FMRI data. 


If you need help, let me know:

E-mail: ziad@image.bien.mu.edu




AFNI file: AFNI.changes.dglen
15 November 2004:
 * Created 3dTeig.c. Program calculates eigenvalues, vectors, FA from DTI data and
     creates output brik file. Used 3dTstat.c as model for program.
 * Rename 3dTeig.c 3dDTeig.c. Made program more efficient. Reduced width of help
     to fit in 80 characters.
17 November 2004
 * Renamed some internal messages and history to have the updated function name.
2 December 2004
 * Created 3dDWItoDT.c. Program calculates diffusion tensor data from diffusion weighted images.

20 December 2004
 * Fixed bugs for initializing and freeing vectors in 3dDTeig timeseries
   function that would sometimes result in segmentation faults

23 December 2004
 * Automask option now working in 3dDWItoDT.

06 March 2005
 * 3dDTeig.c modified to allow input datasets of at least 6 sub-briks (not necessarily equal to 6).

28 March 2005
 * 3dDWItoDT.c modified to include non-linear gradient descent method and several new options including eigenvalue,eigenvector calculations, debug briks, cumulative wts, reweighting, verbose output

11 April 2005
 * Moved ENTRY statements in 3dDWItoDT.c to come after variable declarations in 3 functions

12 April 2005
 * Added AFNI NIML graphing of convergence to 3dDWItoDT.c with user option -drive_afni nnnnn

14 April 2005
 * Fixed bug in 3dDWItoDT.c when user requests both linear solution and eigenvalues. Removed several unused ifdef'ed debugging code sections.

20 April 2005
 * slight modifications to comments in 3dDWItoDT.c for doxygen and consistent warnings and error messages
 * Milwaukee in afni_func.c

28 April 2005
 * trivial program, 3dMax, for finding the minimum and maximum for a dataset

2 May 2005
 * updated formerly trivial program (now only semi-trivial), 3dMax, to calculate means, use a mask file, do automasking. The program now includes scale factors for sub-briks and extends the types of allowable datasets

3 May 2005
 * fixed checking of options for incompatibilities and defaults in 3dMax

4 May 2005
 * added Mean diffusivity computation to 3dDWItoDT.c and 3dDTeig.c. Also in 3dDWItoDT.c, added an additional I0 (Ideal image voxel value) sub-brik included with debug_briks option. The I0 will be used as a basis for a test model dataset. Also fixed bug in masked off area for eigenvalues when using debug_briks.

12 May 2005
 * added count, negative, positive, zero options to 3dMax and fixed bug there in the calculation of a mean with a mask
 * created 3dDTtoDWI.c to calculate ideal diffusion weighted images from diffusion tensors for testing purposes

16 May 2005
 * added tiny, tiny change to allow non_zero option with 3dMax

19 May 2005
 * added min and max limits to 3dhistog.c

27 May 2005
 * added mask option to 3dDWItoDT and fixed bug with automask for float dsets
 * added initialization to pointer in 3dMax

15 June 2005
 * removed exits in plug_3Ddump_V2.c, plug_stavg.c, plug_edit.c, plug_volreg.c, plug_L1fit.c, plug_lsqfit.c, plug_power.c to prevent plug-ins from crashing the AFNI application
 * created new program, 3dAFNItoRaw, to create a raw dataset with multiple sub-briks alternating at each voxel rather than at each volume
16 June 2005
 * fixed small typo in help for 3dMax.c
24 June 2005
 * created new program, DTIStudioFibertoSegments.c, to convert DTIStudio fiber files into SUMA segment files
20 July 2005
 * fixed bug for quick option for 3dMax
1 Aug 2005
 * fixed bug in im2niml function in thd_nimlatr.c in testing for name field of images
7 Oct 2005
 * Created anisosmooth program to do anisotropic smoothing of datasets (particularly DWI data). Current version is 2D only.
18 Oct 2005
 * Added 3D option to 3danisosmooth program. Fixed some bugs with near zero gradient and NaN eigenvector values and aiv viewer split window error.
 * Fixed small bug in 3dMaskdump not allowing selection of last voxel in any dimension
19 Oct 2005
 * added support to 3dMax for checking for NaN, Inf and -Inf with the -nan and -nonan options
20 Oct 2005
 * fixed 3danisosmooth phi calculation for exponential version to use scaled eigenvalues
18 Nov 2005
 * made major changes to 3danisosmooth and DWIstructtensor to improve performance. 
   Also included changes for standardized message printing system for AFNI programs in 3danisosmooth.c, 
   DWIstructtensor.c, 3dMax.c, 3dDTeig.c, 3dDWItoDT.c
21 Nov 2005
 * fixed bug to improve efficiency of 3danisosmooth with mask datasets
22 Nov 2005
 * support for user options for level of Gaussian smoothing (-sigma1, -sigma2) in 3danisosmooth
29 Nov 2005
 * removed default author and version info from 3dMax output; now option -ver
 gives that info. 3dMax is used in scripts, and the extra output confused them.
14 Dec 2005
 * added new options to 3danisosmooth for avoiding negative numbers and
 fractional control of amount of edginess. 2D exponential method gives faster
 results because of new constant and switched phi values.
16 Dec 2005
 * added new datum option to 3danisosmooth
20 Dec 2005
 * updated edt_blur.c to improve performance of y blurring on large images (nx>=512) 
21 Dec 2005
 * minor update to edt_blur.c for slightly more compact code.
13 Jan 2006
 * added option to 3danisosmooth (-matchorig) to match range of original
 voxels in each sub-brick. 
21 Feb 2006
 * corrected some help for 1dcat program and generic help message used by other 
 1D programs. Updated help a bit for 3dmerge.c also.
22 Feb 2006
 * additional help updates for 1deval
3 Apr 2006 
 * various fixes for Draw Dataset plug-in (datum check and label errors)
20 Apr 2006
 * update for 3dcopy to support writing NIFTI datasets 
 (Rick is responsible for this)
4 May 2006
 * fix for 3dROIstats.c for nzmedian and nzmean confusion
 * erosion without redilation in thd_automask.c called in various places and
 needs an additional parameter to continue redilating.
9 May 2006
 * 3dAutomask.c and thd_automask.c - stupid typos, debugging printfs removed, ...
10 May 2006
 * JPEG compression factor environment variable in several places
19 Jun 2006
 * byte swapping support for cross-platform conversion of DTI Studio fibers in
DTIStudioFibertoSegments. Also updated warning and error messages to AFNI
standards. Made help clearer for @DTI_studio_reposition.
21 Jun 2006
 * 3dNotes support for NIFTI file format and display of history notes
22 Jun 2006
 * 3dZcat updated to support NIFTI. edt_dsetitems had to be modified also for
 duplication of .nii or .nii.gz suffix in file names.
 * 3dDWItoDT can now make separate files for each type of output data to make it
 easier to work with other packages. Lower diagonal order used for Dtensor to
 make compliant with NIFTI standard in 3dDWItoDT and 3dDTeig.
29 Jun 2006
 * fixed bug in edt_dsetitems.c that puts doubled .nii.nii or .nii.gz.nii.gz
 extensions on filenames in some cases
 * minor help changes in Draw Dataset plug-in (courtesy of Jill)
23 Aug 2006
 * Updates to support NIFTI and gzipped NIFTI files in 3dZcat, 3daxialize, 3dCM,
 3dNotes. Other changes in edt_dsetitems.c to support NIFTI format better.
 * 3dDWItoDT supports Powell algorithm. 3dDTeig can read either old D tensor
 order or new NIFTI standard D tensor order. It can also write out separate
 files for eigenvalues, vectors, FA, MD (like 3dDWItoDT).
24 Oct 2006
 * Update to 3dNLfim to use memory mapping instead of shared memory, to support
 multiple CPU jobs better 
25 Oct 2006
 * 3dNLfim limit reports to every nth voxel via progress option
26 Oct 2006
 * model_zero, a noise model, for removing noise modeling in 3dNLfim
07 Nov 2006
 * R1I mapping and voxel indexing support added to the DEMRI model, model_demri_3
09 Nov 2006
 * output datum type support in 3dNLfim
08 Jan 2007
 * 1dSEM program for doing structural equation modeling
18 Jan 2007
 * 1dSEM updates for growing model size over all possible models
03 May 2007 
 * mri_read_dicom patches given and modified by Fred Tam for strange 
 Siemens DICOM headers
04 May 2007
 * minor output, option name and help changes to 1dSEM
08 May 2007
 * [with rickr] count can skip in a funny way
09 May 2007
 * minor changes to thd_mastery to allow simplified count commands in sub-brick
selectors already implemented in thd_intlist.c and slightly modified help strings
in 3ddata.h
16 May 2007
 * 1dSEM - changeable limits for connection coefficients
29 May 2007
 * oblique dataset handling. Bigger changes in mri_read_dicom, to3d, 3dWarp.
Also smaller changes in thd_niftiwrite and read, 3ddata.h, vecmat.h, 
thd_dsetatrc, thd_dsetdblk.c
04 Jun 2007
 * Initialization bug in obliquity code on some systems, other minor changes 
for obliquity too
06 Jun 2007
 * NIFTI read creates oblique transformation structure
 * minor fix to 1dSEM for tree growth stop conditions
07 Jun 2007
 * added ability for 3dWarp to obliquidate an already oblique dataset
11 Jun 2007
 * deeper searches for forest growth in 1dSEM with new leafpicker option.
Compute cost of input coefficient matrix data in 1dSEM to verify published data with
calccost option. Easier to read output data for 1dSEM (sqrmat.h)
13 Jun 2007
 * fixes for rewriting dataset header in 3dNotes, 3dCM and adwarp (effect of
deconflicting changes)
14 Jun 2007
 * fixes for obliquity handling effects on non-oblique data in places, 
most obvious in NIFTI files where the coordinates are changed as in 3drefit,
3dCM, 3drotate, 3dresample. Also fix for NIFTI reading of sform.
18 Jun 2007
 * duration, centroid and absolute sum calculations added to 3dTstat
20 Jun 2007
 * added -deoblique option to 3drefit to remove obliquity from dataset
26 Jul 2007
 * clarified help in 3dExtrema, and fixed a couple typos
02 Aug 2007
 * updated Talairach atlas for Eickhoff-Zilles 1.5 release
 * updated help in whereami for clarification
03 Aug 2007
 * user input fix for 3dAutobox limits, added -noclust option too to keep any
non-zero voxels
06 Aug 2007
 * 3dAutobox can also ignore automatic clip level
27 Aug 2007
 * modifications for 3dDWItoDT to improve handling of highly anisotropic voxels
with new hybrid search method and bug fixes
28 Aug 2007
 * added b-value and allowed 0 values in MD and FA calculations 3dDTeig and
3dDWItoDT
07 Sep 2007
 * updated viewer help to include newer keyboard and mouse shortcuts
23 Sep 2007
 * added some gray scales to overlay color scale choices, fixed small bug
on lower limit of color scales in pbar.c and pbardefs.h. Also changed lowest
index in cb_spiral, color-blind, color scale
28 Sep 2007
 * fixed a couple bugs in mri_read_dicom to add null termination to the string
containing Siemens extra info and allowed for cross-product normals for vectors
to line up with slice positions when vectors are slightly off 1.0
02 Oct 2007
 * added memory and dataset write error checks to mri_read_dicom and to3d
03 Oct 2007
 * added non-zero mean option to 3dTstat
09 Oct 2007
 * added additional warning and error message handling to to3d
14 Dec 2007
 * added various monochrome lookup tables to overlay color scale choices
including amber, red, green and blue (azure)
23 Dec 2007
 * put warnings in when using oblique datasets in AFNI GUI and when opening
datasets elsewhere
 * added another colorscale with amber/red/blue
02 Jan 2008
 * removed obliquity warnings when deobliquing with 3dWarp or 3drefit
08 Jan 2008
 * onset, offset (around maximum) added to 3dTstat
09 Jan 2008
 * volume added to 3dBrickStat
10 Jan 2008
 * fixed bug in 3dDTeig in eigenvalue calculation (no effect on results though)
15 Jan 2008
 * modified 1D file reading to allow for colons, alphabetic strings while
maintaining support for complex (i) numbers
05 Feb 2008
 * added way to turn off pop-up warnings in afni GUI for obliquity and added 
another level of checking for obliquity transformation matrix in attributes





AFNI file: AFNI.changes.rickr
08 March 2002:
  * added plug_crender.c

21 May 2002:
  * added rickr directory containing r_idisp.[ch], r_misc.[ch],
      r_new_resam_dset.[ch] and Makefile
  * added new program 3dresample (rickr/3dresample.c)
  * modified Makefile.INCLUDE to build rickr directory

06 June 2002:
  * added @SUMA_Make_Spec_FS

20 June 2002:
  * added @make_stim_file

21 June 2002:
  * modified afni_plugin.c, NLfit_model.c and thd_get1D.c to
      validate directories
  * added rickr/AFNI.changes.rickr

01 July 2002:
  * added rai orientation to plug_crender.c
  * added plug_crender.so target to Makefile.INCLUDE for use of librickr.a

02 July 2002:
  * modified 3dresample
      - fully align dataset to the master (not just dxyz and orient)
      - removed '-zeropad' option (no longer useful with new alignment)
  * modified r_new_resam_dset.[ch]
      - r_new_resam_dset() now takes an additional mset argument, allowing
        a master alignment dataset (overriding dxyz and orient inputs)
  * modified plug_crender.c to pass NULL for the new mset argument to
      r_new_resam_dset()
  * modified @SUMA_AlignToExperiment, removing '-zeropad' argument when
      running program 3dresample

15 July 2002:
  * added @SUMA_Make_Spec_SF and @make_stim_file to SCRIPTS in Makefile.INCLUDE

29 July 2002:
  * modified plug_crender.c to allow arbitrary orientation and grid spacing
      of functional overlay (no longer needs to match underlay)
  * modified r_new_resam_dset.c to set view type to that of the master
  * updated VERSION of 3dresample to 1.2 (to note change to r_new_resam_dset)

05 August 2002:
  * modified plug_crender.c (rv 1.5) to align crosshairs with master grid
  * added ENTRY() and RETURN() statements

11 September 2002:
  * added rickr/file_tool.[ch]
  * modified rickr/Makefile and Makefile.INCLUDE to be able to build file_tool
    (note that file_tool will not yet be built automatically)
  * modified r_idisp.c to include r_idisp_vec3f()

20 September 2002:
  * modified thd_opendset.c so that HEAD/BRIK are okay in directory names
    (see 'fname' and 'offset' in THD_open_one_dataset())

26 September 2002:
  * modified plug_crender.c
      - calculate and draw crosshairs directly
      - added debugging interface (access via 'dh' in opacity box)
  * modified cox_render.[ch] - pass rotation matrix pointer to CREN_render()
  * modified testcox.c - pass NULL to CREN_render() for rotation matrix pointer

01 October 2002:
  * modified Makefile.INCLUDE to build file_tool automatically

23 October 2002:
  * modified plug_crender.c so that Incremental rotation is the default

29 October 2002:
  * modified plug_second_dataset.c and plug_nth_dataset.c to update dataset
      pointers from idcodes on a RECEIVE_DSETCHANGE notification

22 November 2002:
  * added new program Hfile, including files rickr/Hfile.[ch]
  * modified rickr/Makefile and Makefile.INCLUDE to build Hfile

27 November 2002:
  * Hfile is now Imon
  * many modifications to Imon.[ch] (formerly Hfile.[ch])
      - see rickr/Imon.c : history for version 1.2
  * renamed Hfile.[ch] to Imon.[ch]
  * modified rickr/Makefile to reflect the name change to Imon
  * modified Makefile.INCLUDE to reflect the name change to Imon

13 December 2002:
  * Imon no longer depends on Motif
      - mcw_glob.[ch] are used locally as l_mcw_glob.[ch]
      - Imon.c now depends only on l_mcw_glob.[ch]
      - rickr/Makefile now compiles Imon.c and l_mcw_glob.c
        with -DDONT_USE_MCW_MALLOC

14 January 2003:
  * update 3dresample to clear warp info before writing to disk

15 January 2003:
  * The highly anticipated release of Imon 2.0!!
      - Imon now has optional rtfeedme functionality.
      - add files rickr/realtime.[ch]
      - modified rickr/Imon.[ch]
      - modified rickr/Makefile
          o to build .o files with -DDONT_USE_MCW_MALLOC
          o to use $(EXTRA_LIBS) for sockets on solaris machines
      - modified Makefile.INCLUDE
          o Imon now also depends on rickr/realtime.[ch]
          o pass $(EXTRA_LIBS) to the make under rickr

27 January 2003:
  * modified Makefile.solaris28_gcc : defined EXTRA_LIBS_2
      (is EXTRA_LIBS without -lgen and -ldl)
  * modified Makefile.INCLUDE for Imon to use EXTRA_LIBS_2
  * modified rickr/Makefile for Imon to use EXTRA_LIBS_2

28 January 2003:
  * modified Imon.[ch] to add '-nt VOLUMES_PER_RUN' option (revision 2.1)

02 February 2003:
  * modified Imon.[ch] to fail only after 4 I-file read failures (rv 2.2)

10 February 2003:
  * added a new SUMA program, 3dSurfMaskDump
      o added files SUMA/SUMA_3dSurfMaskDump.[ch]
      o modified SUMA_Makefile to make 3dSurfMaskDump
      o modified Makefile.INCLUDE, targets:
	    suma_exec, suma_clean, suma_link, suma_install
  * modified Makefile.solaris2[67]_gcc, defining EXTRA_LIBS_2

11 February 2003:
  * minor updates to SUMA/SUMA_3dSurfMaskDump.c (for -help)
  * 3dSurfMaskDump rv 1.2: do not free structs at the end

13 February 2003:
  * 3dSurfMaskDump rv 1.2: redo rv1.2: free structs conditionally (and init)

14 February 2003:
  * 3dSurfMaskDump rv 1.3: optionally enable more SUMA debugging
  * modified Imon.[ch] (rv 2.3): added '-start_file' option

18 February 2003:
  * modified Imon.[ch] (rv 2.4), realtime.[ch]
      o added DRIVE_AFNI command to open a graph window (-nt points)
      o added '-drive_afni' option, to add to the above command
      o pass Imon command as a dataset NOTE
  * modified rickr/Makefile - added WARN_OPT

20 February 2003:
  * modified rickr/Imon.[ch] rickr/realtime.c (Imon rv 2.5)
      o appropriately deal with missing first slice of first volume
      o separate multiple DRIVE_AFNI commands
      o minor modifications to error messages

28 February 2003:
  * modified rickr/file_tool.[ch]: added '-quiet' option

25 March 2003:
  * modified Imon to version 2.6: Imon.[ch] realtime.[ch]
      o added -GERT_Reco2 option to output script
      o RT: only send good volumes to afni
      o RT: added -rev_byte_order option
      o RT: also open relevant image window
      o RT: mention starting file in NOTE command

01 May 2003:
  * modified mcw_glob.c and rickr/l_mcw_glob.c
	- removed #ifdef around #include 
  * modified imseq.c - added #include 

06 May 2003:
  * file_tool 1.3 - added interface for GEMS 4.x image files
      o added ge4_header.[ch]     - all of the processing for 4.x images
      o added options for raw data display (disp_int2, disp_int4, disp_real4)
      o modified file_tool.[ch]   - interface to ge4
      o modified rickr/Makefile   - file_tool depends on ge4_header.o
      o modified Makefile.INCLUDE - file_tool depends on ge4_header.o

09 May 2003:
  * modified 3dmaskdump.c
      o added -index option for Mike B
      o combined changes with Bob's

28 May 2003:
  * added SUMA/SUMA_3dSurf2Vol.[ch]

29 May 2003:
  * modified Makefile.INCLUDE and SUMA/SUMA_Makefile to build 3dSurf2Vol
  * 3dSurf2Vol (version 1.0) is now part of the suma build
  * file_tool version 2.0 : added ge4 study header info
      o modified ge4_header.[ch] rickr/file_tool.[ch]

03 June 2003:
  * modified ge4_header.[ch] to be called from mri_read.c
  * modified mri_read.c - added mri_read_ge4 and call from mri_read_file()
  * modified mrilib.h - added declaration for mri_read_ge4()
  * modified Makefile.INCLUDE - added ge4_header.o to MRI_OBJS for mri_read_file
  * modified file_tool (version 2.1) for slight change to ge4_read_header()

06 June 2003:
  * modified SUMA_3dSurfMaskDump.[ch]
      o now 3dSurfMaskDump version 2.0
      o re-wrote program in terms of 3dSurf2Vol, to handle varying map types
      o added 'midpoint' map function

12 June 2003:
  * modified SUMA_3dSurf2Vol.c - minor changes to help and s2v_fill_mask2()
  * modified ge4_header.c to remove "static" warnings

17 June 2003:
  * modified SUMA_3dSurfMaskDump.[ch] -> version 2.1
      o added 'ave' map function

19 June 2003:
  * modified SUMA_3dSurfMaskDump.[ch] -> version 2.2
      o added -m2_index INDEX_TYPE for the option of indexing across nodes
      o set the default of -m2_steps to 2
      o replace S2V with SMD in macros
  * modified SUMA_ParseCommands.c
      o In SUMA_FreeMessageListData(), do not free Message or Source, as
        they are added as static or local strings (but never alloc'd).

26 June 2003:
  * modified Imon.[ch], realtime.c to add axis offset functionality
      -> Imon version 2.7

27 June 2003:
  * modified Imon.c, realtime.c to pass BYTEORDER command to realtime plugin
      -> Imon version 2.8
  * modified plug_realtime.c to handle BYTEORDER command

30 June 2003:
  * modified README.realtime to provide details of the BYTEORDER command
  * modified plug_realtime.c to accept BYTEORDER for MRI_complex images

21 July 2003:
  * modified SUMA_3dSurfMaskDump.[ch] -> version 2.3
      - fixed a problem: voxels outside gpar dataset should be skipped (or
	get a special value, like 0)
      - added min/max distance output (at debug level > 0)

22 July 2003:
  * modified plug_crender.c to handle bigmode color bar (version 1.8)
      ** need to add bigmode information to widget storage
  * modified SUMA_3dSurf2Vol.[ch] -> version 1.2
      - see 3dSurfMaskDump: skip nodes outside dataset space

27 July 2003:
  * modified 3dresample.c (v1.4), file_tool.[ch] (v2.2), Imon.c (v2.9),
    realtime.[ch] (v2.9), r_idisp.[ch] (v1.2) - added CHECK_NULL_STR() to
    questionable strings for printing (old glibc doesn't print (nil))
  * modified Imon.h - increase IFM_EPSILON to 0.01 and IFM_MAX_DEBUG to 4

05 August 2003:
  * renamed SUMA_3dSurfMaskDump.[ch] to SUMA_3dVol2Surf.[ch]
  * modified Makefile.INCLUDE and SUMA/SUMA_Makefile_NoDev for 3dVol2Surf
  * modified SUMA_3dVol2Surf (major re-write -> version 3.0)
      - all output functions now go through dump_surf_3dt
      - dump_surf_3dt() is a generalized function to get an MRI_IMARR for one
        or a pair of nodes, by converting to a segment of points
      - added v2s_adjust_endpts() to apply segment endpoint modifications
      - added segment_imarr() to get the segment of points and fill the
        MRI_IMARR list (along with other info)
      - filter functions have been taken to v2s_apply_filter()
      - added min, max and seg_vals map functions (filters)
      - added options of the form -f_pX_XX to adjust segment endpoints
      - added -dnode option for specific node debugging
      - changed -output option to -out_1D
      - added new debug info
      - added checking of surface order (process from inner to outer)
  * modified Imon (-> v2.10): added '-sp SLICE_PATTERN' option

14 August 2003:
  * modified Imon.[ch], realtime.h:
      - added '-quit' option
      - allow both 'I.*' and 'i.*' filenames
      
15 August 2003:
  * modified 3dDeconvolve.c   - only output timing with -jobs option
  * modified Makefile.INCLUDE - fix cygwin compile
      - created PROGRAM_EXE targets for Imon.exe, file_tool.exe, 3dresample.exe

20 August 2003:
  * modified Imon.c (-> v3.0)  - retest errors before reporting them
      - major version change for high numbers, plus new warning output

02 September 2003:
  * modified Imon.c (->v3.1) - added '-gert_outdir OUTPUT_DIR' option

08 September 2003:
  * modified L_CREATE_SPEC write error to name correct directory

11 September 2003:
  * modified 3dfim+.c: read_one_time_series() was still using old 'filename'

17 September 2003:
  * modified SUMA_3dVol2Surf.c: fixed help info for the '-cmask' option

21 September 2003:
  * modified SUMA_3dVol2Surf.c:
      - added max_abs mapping function
      - added '-oob_index' and '-oob_value' options
      - added CHECK_NULL_STR macro

23 September 2003:
  * modified SUMA_3dVol2Surf.c: added help for -no_header option

01 October 2003:
  * modified SUMA_3dVol2Surf.c: added -oom_value option and help example

02 October 2003:
  * major upgrades to 3dSurf2Vol (-> v2.0)
      - changes accepting surface data, surface coordinates, output data type,
        debug options, multiple sub-brick output, and segment alterations
      - added the following options:
	  '-surf_xyz_1D', '-sdata_1D', '-data_expr', '-datum', '-dnode',
	  '-dvoxel', '-f_index', '-f_p1_fr', '-f_pn_fr', '-f_p1_mm', '-f_pn_mm'

06 October 2003:
  * modified 2dImReg.c: if nsl == 0, use nzz for num_slices

07 October 2003:
  * modified plug_roiedit.[ch]: old/new -> Bold/Bnew for C++ compilation

08 October 2003:
  * modified @SUMA_AlignToExperiment to use tcsh instead of csh (for $#)

20 October 2003:
  * modified SUMA files SUMA_Load_Surface_Object.[ch] SUMA_MiscFunc.[ch] and
    SUMA_Surface_IO.[ch] to make non-error output optional via a debug flag
      - renamed the following functions to XXX_eng (engine functions):
          SUMA_Load_Surface_Object, SUMA_LoadSpec, SUMA_SurfaceMetrics,
          SUMA_Make_Edge_List, SUMA_FreeSurfer_Read
      - wrote functions with original names to call engines with debug
        flags set
  * modified SUMA_3dVol2Surf.c to call the new SUMA_LoadSpec_eng()   (-> v3.5)
  * modified SUMA_3dSurf2Vol.c to call the new SUMA_LoadSpec_eng()   (-> v2.1)
  * modified rickr/r_idisp.c to handle new ALLOW_DATASET_VLIST macro (-> v1.3)

21 October 2003:
  * modified SUMA_3dVol2Surf.c to complete the -f_keep_surf_order option
    (-> v3.6)

30 October 2003:
  * modified 3dbucket.c to search for trailing view type extension from end
    (under -glueto option processing)
  * modified plug_realtime.c to compute function on registered data

05 November 2003:
  * modified SUMA_3dVol2Surf.c to include ENTRY() stuff (3dVol2Surf -> v3.7)

07 November 2003:
  * Added SUMA_SurfMeasures.[ch] -> SurfMeasures (v0.2)
      - this is not a release version (this checkin is for backup)
      - supported functions are coord_A, coord_B, n_area_A, n_area_B,
        nodes, node_vol and thick

14 November 2003:
  * updates to SurfMeasures (v0.3 - not yet released)

19 November 2003:
  * more updates to SurfMeasures (v0.5)

01 December 2003:
  * finally!!  SurfMeasures is ready for release (v1.0)
      - checked in v1.0 of SUMA/SUMA_SurfMeasures.[ch]
  * modified Makefile.INCLUDE for SurfMeasures
  * modified SUMA/SUMA_Makefile_NoDev for SurfMeasures

03 December 2003
  * modified SUMA/SUMA_SurfMeasures.[ch]  (v1.2)
      - added '-cmask' and '-nodes_1D' options

16 December 2003
  * modified SUMA_Load_Surface_Object.[ch]
      - added functions: SUMA_spec_select_surfs(), SUMA_swap_spec_entries(),
        SUMA_unique_name_ind(), SUMA_coord_file(), swap_strings()
      - made change to restrict spec struct (and therefore surface loading)
        to surfaces named in a list
  * modified SUMA_SurfMeasures.[ch] (-> SurfMeasures v1.3)
      - added '-surf_A' and '-surf_B' to specify surfaces from the spec file
        (goes through new function SUMA_spec_select_surfs())
      - fixed loss of default node indices (from -nodes_1D change)
      - added '-hist' option
      - display angle averages only if at least 1 total is computed
  * modified SUMA_3dVol2Surf.[ch] (-> 3dVol2Surf v3.8)
      - added '-surf_A' and '-surf_B' to specify surfaces from the spec file
      - deprecated option '-kso'
      - added '-hist' option
  
18 December 2003
  * modified SUMA_3dSurf2Vol.[ch] (-> 3dSurf2Vol v2.2)
      - added '-surf_A' and '-surf_B' to specify surfaces from the spec file
      - added '-hist' option
  * modified SUMA_3dSurf2Vol.[ch] (-> 3dSurf2Vol v3.0)
      - removed requirement of 2 surfaces for most functions
        (this was not supposed to be so easy)

22 December 2003
  * modified afni_graph.[ch] to add Mean and Sigma to bottom of graph window

07 January 2004
  * modified 3dresample.c
      - added suggestion of 3dfractionize to -help output
      - added -hist option

13 January 2004
  * modified Imon.[ch] realtime.[ch]
      - added '-zorder ORDER' option for slice patterns in real-time mode
        (the default has been changed from 'seq' to 'alt')
      - add '-hist' option

22 January 2004
  * modified SUMA_3dVol2Surf.[ch]  (-> 3dVol2Surf v3.9)
      - added use of normals to compute segments, instead of second surface
        (see options '-use_norms', '-norm_len', '-keep_norm_dir')
      - reversed order of '-hist' output
  * modified SUMA_SurfMeasures.[ch] (-> SurfMeasures v1.4)
      - fixed node coord output error when '-nodes_1D' gets used
      - added '-sv' option to examples (recommended)
      - reversed order of '-hist' output

23 January 2004
  * modified SUMA_3dVol2Surf.c, SUMA_3dSurf2Vol.c and SUMA_SurfMeasures.c
             ( -> v4.0            -> v3.1              -> v 1.5 )
      - SUMA_isINHmappable() is deprecated, check with AnatCorrect field

29 January 2004
  * modified plug_realtime.c :
      - allow 100 chars in root_prefix via PREFIX (from 31)
      - x-axis of 3-D motion graphs changed from time to reps
      - plot_ts_... functions now use reg_rep for x-axis values
      - reg_graph_xr is no longer scaled by TR
      - added (float *)reg_rep, for graphing with x == rep num
      - added RT_set_grapher_pinnums(), to call more than once
      - added GRAPH_XRANGE and GRAPH_YRANGE command strings for control over
        the scales of the motion graph
      - if GRAPH_XRANGE and GRAPH_YRANGE commands are both passed, do not
        display the final (scaled) motion graph
  * modified README.realtime with details on GRAPH_XRANGE and GRAPH_YRANGE

10 February 2004:
  * modified  SUMA_3dSurf2Vol.c (-> v3.2) to add debug output for AnatCorrect
  * modified  SUMA_3dVol2Surf.c (-> v4.1) to add debug output for AnatCorrect

11 February 2004:
  * modified  SUMA_SurfMeasures.c (-> v1.6) to add debug output for AnatCorrect

13 February 2004:
  * modified README.realtime to include the GRAPH_EXPR command
  * modified plug_realtime.c:
      - added RT_MAX_PREFIX for incoming PREFIX command
      - if GRAPH_XRANGE or GRAPH_YRANGE is given, disable respective 'pushing'
      - added GRAPH_EXPR command, as explained in README.realtime
      - added parser functionality to convert 6 graphs to 1 via the expression
  * modified Imon.[ch], realtime.[ch] -> (Imon v3.3)
      - added '-rt_cmd' option for passing commands to the realtime plugin
      - the '-drive_cmd' option can now be used multiple times
      - the realtime zorder is defaulting to seq again (affects physical order)
      - passed lists of drive and RT commands to realtime plugin

18 February 2004:
  * modified SUMA_3dVol2Surf.[ch] (->v4.2) 
      - add functionality for mapping functions that require sorting
      - added mapping functions 'median' and 'mode'

19 February 2004:
  * modified SUMA_3dVol2Surf.[ch] (->v4.3) to track 1dindex sources 

20 February 2004:
  * modified plug_maxima.c
      - added ENTRY/RETURN calls
      - error: do not process last plane in find_local_maxima()

23 February 2004:
  * modified mri_dup.c to allow NN interpolation if AFNI_IMAGE_ZOOM_NN is Y
  * modified afni_pplug_env.c to add control for AFNI_IMAGE_ZOOM_NN
  * modified README.environment to add a description of AFNI_IMAGE_ZOOM_NN
  * modified SUMA_SurfMeasures.[ch] to add functions:
      'n_avearea_A', 'n_avearea_B', 'n_ntri'

01 March 2004:
  * fixed mbig.c (was using AFMALL() without #include), left malloc() for speed

04 March 2004:
  * modified 3dresample.c to check RESAM_shortstr, reversed history

08 March 2004:
  * modified 3dFWHM.c to output NO_VALUE when results cannot be computed

15 March 2004:
  * modified 3dfim+.c: init sum to 0.0 in set_fim_thr_level()

17 March 2004:
  * modified file_tool.[ch] (-> v3.0), adding binary data editing
    (this was the original goal of the program, yet it took 18 months...)
      - added ability to modify 1, 2 or 4-byte signed or unsigned ints
      - added ability to modify 4 or 8-byte reals (floats or doubles)
      - added '-ge_off' option to display file offsets for certain GE fields
      - added '-hist' option to display module history
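
  The general shape of such an in-place binary edit is to open the file
  in update mode, seek to the byte offset, and overwrite the value.  A
  minimal sketch (assuming native byte order and a 4-byte int), for
  illustration only -- not file_tool's actual code:

      #include <stdio.h>

      /* overwrite one 4-byte int at 'offset' in file 'fname';
         returns 0 on success (no byte-swap handling here)      */
      int edit_int4( const char *fname, long offset, int newval )
      {
          FILE *fp = fopen(fname, "r+b");   /* update: read and write */
          if( fp == NULL ) return 1;
          if( fseek(fp, offset, SEEK_SET) != 0 ||
              fwrite(&newval, sizeof(newval), 1, fp) != 1 ){
              fclose(fp);
              return 1;
          }
          fclose(fp);
          return 0;
      }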

24 March 2004:
  * modified file_tool.c (-> v3.2), only check max length for mods

30 March 2004:
  * made history notes of Ziad's added argument to SUMA_LoadSpec_eng()
      - 3dSurf2Vol (v3.3), 3dVol2Surf (v4.4), SurfMeasures (v1.8)

31 March 2004:
  * added rickr/serial_helper.c (-> v1.0)
      - this tcp server passes registration correction params to a serial port
  * modified plug_realtime.c
      - added the ability to pass registration correction parameters over
        a tcp socket (see 'serial_helper -help')
  * modified afni_pplug_env.c, adding AFNI_REALTIME_MP_HOST_PORT
  * modified README.environment, describing AFNI_REALTIME_MP_HOST_PORT
  * modified Makefile.INCLUDE, for building serial_helper
  * modified rickr/Makefile, for building serial_helper

01 April 2004:
  * modified rickr/serial_helper.c (-> v1.2)
      - adding a little more help
      - checking for bad options
      
02 April 2004:
  * modified rickr/serial_helper.c [request of tross] (-> v1.3)
      - change default min to -12.7, and use -128 for serial start signal
  * modified plug_realtime.c [request of tross]
      - move RT_mp_comm_close() out of check for resize plot

07 April 2004:
  * modified SUMA_3dVol2Surf.c (-> v4.5) to fix default direction of normals
  * modified serial_helper.c (-> v1.4) to #include sys/file.h for Solaris
  * modified Makefile.INCLUDE to pass EXTRA_LIBS_2 for serial_helper build
  * modified rickr/Makefile to apply EXTRA_LIBS_2 for Solaris build

13 May 2004:
  * added -NN help info to 3drotate.c

17 May 2004:
  * modified edt_dsetitems.c: THD_deplus_prefix() to remove only the basic
    three extensions: +orig, +acpc, +tlrc (blame Shruti)

18 May 2004:
  * modified SUMA_3dVol2Surf.[ch] (-> v5.0)
      - allow niml output via '-out_niml'
      - accept '-first_node' and '-last_node' options for restricted output

19 May 2004:
  * modified coxplot/plot_ts.c:init_colors() to start with color 0 (not 1)
    (allows users to modify ...COLOR_01, too, matching README.environment)

20 May 2004:
  * modified SUMA_3dVol2Surf.c (-> v5.1)
      - Ziad reminded me to add help for options '-first_node' and '-last_node'

07 June 2004:
  * modified @RenamePanga - subtract 1 from init of Nwhatapain

21 June 2004:
  * modified SUMA_3dSurf2Vol.[ch] (-> v3.4): fixed -surf_xyz_1D option

07 July 2004:
  * modified 3dROIstats - added -minmax, -nzminmax options

20 July 2004:
  * modified 3dANOVA3.c to fix stack space problem (see N_INDEX)

22 July 2004:
  * modified SUMA_3dSurf2Vol.c (-> v3.5) fixed bug in sdata_1D file test

26 July 2004:
  * modified thd_mastery.c
	- changed THD_setup_mastery() to return int
	- added THD_copy_dset_subs(), to copy a list of sub-bricks
  * modified 3ddata.h: added declaration for THD_copy_dset_subs()
  * modified r_new_resam_dset.[ch], taking new sublist parameter
  * modified 3dresample.c (-> v1.7), passing NULL for sublist
  * modified plug_crender.c (-> v1.9a), temporary update, passing sublist NULL

27 July 2004:
  * modified plug_crender.c (-> v1.9) to resample only appropriate sub-bricks

28 July 2004:
  * modified SUMA_3dSurf2Vol.c (-> v3.6), fixed bug where a previous change
    caused the default f_steps to revert from 2 to 1 (discovered by Kuba)

02 August 2004:
  * modified SUMA_SurfMeasures.c (-> v1.9), do not require anat correct
  * modified SUMA_glxdino.c, cast each 3rd gluTessCallback arg as _GLUfuncptr
    (some 64-bit machines have complained)

03 August 2004:
  * modified f2cdir/rawio.h, hiding read/write declarations for 64-bit machines
  * added Makefile.solaris9_suncc_64
  * added Makefile.linux_gcc33_64 (for Fedora Core 2, x86-64)
  * modified SUMA_glxdino.c, SUMA_pixmap2eps.c to cast gluTessCallback()'s
    3rd argument only in the case of the LINUX2 define

11 August 2004:
  * SUMA_SurfMeasures.c (-> v1.10) to warn users about ~10% inaccuracy in volume

26 August 2004:
  * modified FD2.c: added -swap_yes and -swap_no options

27 August 2004:
  * modified FD2.c: replace -swap_yes and -swap_no with a plain -swap
		    (this alters the original program!)

01 September 2004:
  * modified 3dVol2Surf (-> v6.0)
    - created vol2surf() library files vol2surf.[ch] from core functions
    - this represents a significant re-write of many existing functions,
      modifying locations of action, structure names/contents, etc.
    - add library to libmri (as this will end up in afni proper)
    - separate all vol2surf.[ch] functions from SUMA_3dVol2Surf.[ch]
    - keep allocation/free action of results struct within library
    - now using SUMA_surface struct for surface info (replace node_list)
    - added main vol2surf(), afni_vol2surf(), free_v2s_results(),
      and disp...() functions as vol2surf library interface
    - added options to control column output (-skip_col_NAME)
    - added -v2s_hist option for library history access
  * modified Makefile.INCLUDE to put the vol2surf functions in libmri
    - added vol2surf.o into CS_OBJS and vol2surf.h into LIBHEADERS
  * added vol2surf.[ch] into the src directory (for libmri)

02 September 2004:
  * modified 3dROIstats.c to fix the minmax initializer
  * modified vol2surf.[ch] SUMA_3dVol2Surf.[ch] (-> v6.1) : library shuffle

09 September 2004:
  * added plug_vol2surf.c: for setting the internal volume to surface options
  * modified afni_plugin.[ch]: added function PLUTO_set_v2s_addrs()
  * modified vol2surf.c
    - in afni_vol2surf(), show options on debug
    - allow first_node > last_node if last is 0 (default to n-1)

17 September 2004:
  * modified SUMA_3dVol2Surf.[ch] (-> v6.2):
    - added -gp_index and -reverse_norm_dir options
  * modified vol2surf.[ch]: added support for gp_index and altered norm_dir

23 September 2004:
  * modified Makefile.linux_gcc33_64 for static Motif under /usr/X11R6/lib64

28 September 2004:
  * modified thd_coords.c and 3ddata.h, adding THD_3dmm_to_3dind_no_wod()

04 October 2004:
  * added afni_vol2surf.c: for computing SUMA_irgba from v2s_results
  * modified afni_niml.c:
    - if gv2s_plug_opts.ready, call AFNI_vol2surf_func_overlay()
    - use saved_map in case of calling vol2surf twice, identically
    - only send nvtot and nvused to suma via AFNI_vnlist_func_overlay()
  * modified Makefile.INCLUDE: added afni_vol2surf.o to AFOBJS
  * modified plug_vol2surf.c:
    - now set global ready if all is well
    - clear norms if not in use
    - name all local functions PV2S_*
    - if debug > 0, display chosen surfaces in terminal
    - if debug > 1, display all possible surfaces in terminal
    - allow oob and oom values to be arbitrary
    - on debug, output complete surface listing in PV2S_check_surfaces()
  * modified vol2surf.c:
    - added thd_mask_from_brick()
    - added compact_results(), in case nalloc > nused
    - added realloc_ints() and realloc_vals_list()
    - in afni_vol2surf(), if 1 surf and no norms, set steps to 1
    - in set_surf_results(), pass gp_index to v2s_apply_filter
    - in segment_imarr()
        o  changed THD_3dmm_to_3dind() to new THD_3dmm_to_3dind_no_wod()
        o  if THD_extract_series() fails, report an error
    - in init_seg_endpoints()
        o  get rid of p1 and pn
        o  save THD_dicomm_to_3dmm() until the end

06 October 2004:
  * modified afni.h: added AFNI_vol2surf_func_overlay() prototype
  * modified afni_niml.c:AFNI_process_niml_data()
    - added case for name "SUMA_node_normals" via SUMA_add_norms_xyz()
  * modified afni_suma.h: added SUMA_add_norms_xyz() prototype
  * modified afni_suma.c: added SUMA_add_norms_xyz() function
  * modified SUMA_SurfMeasures.c (->v1.11): to mention 'SurfPatch -vol'

07 October 2004:
  * modified afni_plugin.h: fixed extern name PLUTO_set_v2s_addrs()
  * modified afni.h: changed prototype for AFNI_vol2surf_func_overlay()
  * modified afni_niml.c
    - most of the file is part of a diff, beware...
    - received local_domain_parent and ID from suma
    - added local struct types ldp_surf_list and LDP_list
    - in AFNI_process_NIML_data(), broke process_NIML_TYPE blocks out
      as separate functions
    - added process_NIML_SUMA_node_normals()
    - modified AFNI_niml_redisplay_CB() to process surfaces over a list
      of local domain parents
    - added fill_ldp_surf_list(), to create an LDP list from the surfaces
    - added disp_ldp_surf_list(), for debug
  * modified afni_vol2surf.c
    - new params surfA, surfB, use_defaults for AFNI_vol2surf_func_overlay()
    - pass use_defaults to afni_vol2surf()
  * modified plug_vol2surf.c
    - added second surface pair to globals
    - small help and hint changes
    - fixed receive order of fr and mm offsets
    - verify that surface pairs have matching LDPs
    - added PV2S_disp_afni_surfaces() to list all surfaces w/indices
  * modified vol2surf.[ch]
    - added disp_v2s_plugin_opts()
    - dealt with default v2s mapping of surface pairs
    - added fill_sopt_default()
    - moved v2s_write_outfile_*() here, with print_header()
    - in afni_vol2surf(), actually write output files
  * modified afni_suma.[ch]
    - change idcode_domaingroup to idcode_ldp
    - add char label_ldp[64]
    - init label_ldp and idcode_ldp
  * modified SUMA_3dVol2Surf.[ch] (-> v6.3)
    - in suma2afni_surf() deal with LDP changes to SUMA_surface
    - changed write_outfile functions to v2s_* and moved them to library

25 October 2004:
  * modified afni_niml.c
    - use vol2surf for all surfaces now
    - so nvused is no longer computed
    - in ldp_surf_list, added _ldp suffix to idcode and label,
      and added full_label_ldp for user clarity
    - added functions int_list_posn, slist_choose_surfs,
      slist_check_user_surfs and slist_surfs_for_ldp to
      handle an arbitrary number of surfaces per LDP
    - moved old debug off margin
    - pass data/threshold pointers to AFNI_vol2surf_func_overlay()
    - pass threshold element with rthresh
    - prepare for sending data to suma (but must still define new NIML type)
      can get data and global threshold from vol2surf
    - for users, try to track actual LDP label in full_label_ldp
    - allow absolute thresholding in thd_mask_from_brick()
      (a sketch follows this entry)
  * modified plug_vol2surf.c
    - make sure the surface pairs are actually different
    - make sure surfaces have the same number of nodes
    - process all parameters, but only complain if "ready"
    - always pass along debug/dnode
  * modified afni_vol2surf.c:AFNI_vol2surf_func_overlay():
    - pass Rdata and Rthr pointers, to optionally return data and thresh
    - require absolute thresholding for vol2surf mask
  * modified afni.h
    - updated AFNI_vol2surf_func_overlay() prototype
  * modified vol2surf.c
    - apply debug and dnode, even for defaults
    - if the user sets dnode, then skip any (debug > 0) tests for it
    - check for out of bounds, even if an endpoint is in (e.g. midpoint)
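
  For illustration, absolute thresholding keeps both strongly positive
  and strongly negative voxels in the mask.  A minimal sketch of the
  idea (float input assumed; not thd_mask_from_brick() itself):

      #include <math.h>

      /* build a 0/1 mask from float data; if use_abs, threshold on
         the magnitude so negative activations also survive         */
      void mask_from_floats( const float *data, int nvox, float thresh,
                             int use_abs, unsigned char *mask )
      {
          int   i;
          float v;
          for( i = 0; i < nvox; i++ ){
              v = use_abs ? fabsf(data[i]) : data[i];
              mask[i] = (v >= thresh) ? 1 : 0;
          }
      }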

01 November 2004:
  * modified nifti1.h, correcting 3 small errors in the descriptions:
    - integers from 0 to 2^24 can be represented with a 24 bit mantissa
    - we require that a = sqrt(1.0-b*b-c*c-d*d) be nonnegative
    - [a,b,0,0] * [0,0,0,1] = [0,0,-b,a]
      (verified in the sketch after this entry)
  * modified plug_maxima.[ch]
    - remove restrictions on threshold input
    - rearrange options, and add a Debug Level
    - increment style (should be in {1,2}, not {0,1})
    - add a little debug output, including show_point_list_s()
    - removed unused variables
    - true_max update in find_local_maxima()
    - added check for warp-on-demand failure
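
  The corrected quaternion product above can be checked against the
  standard Hamilton product.  A small verification sketch (the qmul()
  helper is illustrative, not code from nifti1.h):

      #include <stdio.h>

      typedef struct { double a, b, c, d; } quat;  /* a + b*i + c*j + d*k */

      /* Hamilton product of quaternions p and q */
      quat qmul( quat p, quat q )
      {
          quat r;
          r.a = p.a*q.a - p.b*q.b - p.c*q.c - p.d*q.d;
          r.b = p.a*q.b + p.b*q.a + p.c*q.d - p.d*q.c;
          r.c = p.a*q.c - p.b*q.d + p.c*q.a + p.d*q.b;
          r.d = p.a*q.d + p.b*q.c - p.c*q.b + p.d*q.a;
          return r;
      }

      int main( void )
      {
          quat p = { 0.8, 0.6, 0.0, 0.0 };   /* [a,b,0,0], unit length */
          quat q = { 0.0, 0.0, 0.0, 1.0 };   /* [0,0,0,1]              */
          quat r = qmul(p, q);
          printf("[%g,%g,%g,%g]\n", r.a, r.b, r.c, r.d);
          return 0;                          /* prints [0,0,-0.6,0.8]  */
      }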

16 November 2004:
  * modified nifti1_io.[ch] nifti1_test.c to include changes from M Jenkinson
    - also modified nifti_validfilename, nifti_makebasename and
      added nifti_find_file_extension
  * added znzlib directory containing config.h Makefile znzlib.[ch]
    (unmodified from Mark Jenkinson, except not to define USE_ZLIB)
  * modified Makefile.INCLUDE to link znzlib.o into nifti1_test and
    with the CS_OBJS in libmri.a

03 December 2004:
  * modified nifti1_io.[ch]:
    - note: header extensions are not yet checked for
    - added formatted history as global string (for printing)
    - added nifti_disp_lib_hist(), to display the nifti library history
    - added nifti_disp_lib_version(), to display the nifti library version
    - re-wrote nifti_findhdrname()
        o used nifti_find_file_extension()
        o changed order of file tests (default is .nii, depends on input)
        o free hdrname on failure
    - made similar changes to nifti_findimgname()
    - check for NULL return from nifti_findhdrname() calls
    - removed most of ERREX() macros
    - modified nifti_image_read()
        o added debug info and error checking (on gni_debug > 0, only)
        o fail if workingname is NULL
        o check for failure to open header file
        o free workingname on failure
        o check for failure of nifti_image_load()
        o check for failure of nifti_convert_nhdr2nim()
    - changed nifti_image_load() to int, and check nifti_read_buffer return
    - changed nifti_read_buffer() to fail on short read, and to count float
      fixes (to print on debug)
    - changed nifti_image_infodump to print to stderr
    - updated function header comments, or moved comments above header
    - removed const keyword, changed nifti_image_load() to int, and
      added LNI_FERR() macro for error reporting on input files
  * modified nifti1_test.c
    - if debug, print header and image filenames before changing them
    - added -nifti_hist and -nifti_ver options

06 December 2004:
  * added list_struct.[ch] to create TYPE_list structures (for nifti, etc.)
    (see float_list, for example)
  * modified mrilib.h to #include list_struct.h
  * modified Makefile.INCLUDE, adding list_struct.o to CS_OBJS
  * modified vol2surf.c, changing float_list to float_list_t

10 December 2004:   added header extensions to nifti library (v 0.4)
  * in nifti1_io.h:
      - added num_ext and ext_list to the definition of nifti_image
      - made many functions static (more to follow)
      - added LNI_MAX_NIA_EXT_LEN, for max nifti_type 3 extension length
  * added __DATE__ to version output in nifti_disp_lib_version()
  * added nifti_disp_matrix_orient() to print orientation information
  * added '.nia' as a valid file extension in nifti_find_file_extension()
  * added much more debug output
  * in nifti_image_read(), in the case of an ASCII header, check for
      extensions after the end of the header
  * added nifti_read_extensions() function
  * added nifti_read_next_extension() function
  * added nifti_add_exten_to_list() function
  * added nifti_valid_extension() function
  * added nifti_write_extensions() function
  * added nifti_extension_size() function
  * in nifti_set_iname_offset():
      - adjust offset by the extension size and the extender size
      - fixed the 'ceiling modulo 16' computation
        (a sketch of the idiom follows this entry)
  * in nifti_image_write_hdr_img2():
      - added extension writing
      - check for NULL return from nifti_findimgname()
  * include number of extensions in nifti_image_to_ascii() output
  * in nifti_image_from_ascii():
      - return bytes_read as a parameter, computed from the final spos
      - extract num_ext from ASCII header
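
  The 'ceiling modulo 16' above rounds the image data offset up to the
  next multiple of 16.  The usual integer idiom, shown as a sketch (not
  the library's actual code):

      /* round a byte offset up to the next multiple of 16; it is a
         no-op when the offset is already a multiple of 16           */
      int ceil16( int offset )
      {
          return ((offset + 15) / 16) * 16;
      }
      /* e.g. ceil16(348) == 352, ceil16(352) == 352 */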

11 Dec 2004:
  * added a description of the default operation to the Help in plug_vol2surf.c

14 Dec 2004: added loading a brick list to nifti1 library (v 0.5)
  * added nifti_brick_list type to nifti1_io.h, along with new prototypes
  * added main nifti_image_read_bricks() function, with description
  * added nifti_image_load_bricks() - library function (requires nim)
  * added valid_nifti_brick_list() - library function
  * added free_NBL() - library function
  * added update_nifti_image_for_brick_list() for dimension update
  * added nifti_load_NBL_bricks(), nifti_alloc_NBL_mem(),
          nifti_copynsort() and force_positive() (static functions)
  * in nifti_image_read(), check for failed load only if read_data is set
  * broke most of nifti_image_load() into nifti_image_load_prep()

15 Dec 2004: (v 0.6) added writing a brick list to the nifti library,
             and moved the nifti library files under a new nifti directory
  * modified nifti1_io.[ch]:
      - nifti_read_extensions(): print no offset warning for nifti_type 3
      - nifti_write_all_data():
          o pass nifti_brick_list * NBL, for optional writing
          o if NBL, write each sub-brick, sequentially
      - nifti_set_iname_offset(): case 1 must have sizeof() cast to int
      - pass NBL to nifti_image_write_hdr_img2(), and allow NBL or data
      - added nifti_image_write_bricks() wrapper for ...write_hdr_img2()
      - prepared for compression use
  * modified Makefile.INCLUDE to use nifti directory (and for afni_src.tgz)
  * renamed znzlib directory to nifti
  * moved nifti1.h, nifti1_io.c, nifti1_io.h and nifti1_test.c under nifti
  * modified thd_analyzeread.c and thd_niftiread.c: nifti1_io.h is under nifti

16 Dec 2004:
  * moved nifti_stats.c into the nifti directory
  * modified Makefile.INCLUDE to compile nifti_stats from the nifti dir
  * nifti1_io.[ch] (v 0.7): minor changes to extension reading

21 Dec 2004: nifti library update (v 0.8)
  * in nifti_image_read(), compute bytes for extensions (see remaining)
  * in nifti_read_extensions(), pass 'remain' as space for extensions,
      pass it to nifti_read_next_ext(), and update for each one read
  * in nifti_valid_extension(), require (size <= remain)
  * in update_nifti_image_brick_list(), update nvox
  * in nifti_image_load_bricks(), make explicit check for nbricks <= 0
  * in int_force_positive(), check for (!list)
  * in swap_nifti_header(), swap sizeof_hdr, and reorder to struct order
  * change get_filesize functions to signed ( < 0 is no file or error )
  * in nifti_valid_filename(), lose redundant (len < 0) check
  * make print_hex_vals() static
  * in disp_nifti_1_header, restrict string field widths

23 Dec 2004: nifti library update (v 0.9) - minor updates
  * broke ASCII header reading out of nifti_image_read(), into new
      functions has_ascii_header() and read_ascii_image()
  * check image_read failure and znzseek failure
  * altered some debug output
  * nifti_write_all_data() now returns an int

29 Dec 2004: nifti library update (v 0.10)
  * renamed nifti_valid_extension() to nifti_check_extension()
  * added functions nifti_makehdrname() and nifti_makeimgname()
  * added function valid_nifti_extensions()
  * in nifti_write_extensions(), check for validity before writing
  * rewrote nifti_image_write_hdr_img2():
      - set write_data and leave_open flags from write_opts
      - add debug print statements
      - use nifti_write_ascii_image() for the ascii case
      - rewrote the logic of all cases to be easier to follow
  * broke out code as nifti_write_ascii_image() function
  * added debug to top-level write functions, and free the znzFile
  * removed unused internal function nifti_image_open()
  * modified Makefiles for optional zlib compilation
    on:  Makefile.linux_gcc32, Makefile.linux_gcc33_64, Makefile.macosx_10.3_G5
    off: Makefile.linux_glibc22 Makefile.macosx_10.2, Makefile.macosx_10.3,
         Makefile.solaris29_suncc, Makefile.solaris9_suncc_64, Makefile.BSD,
         Makefile.darwin, Makefile.cygwin, Makefile.FreeBSD, Makefile.linuxPPC,
         Makefile.sgi10k_6.5, Makefile.sgi10k_6.5_gcc, Makefile.sgi5k_6.3,
         Makefile.solaris28_gcc, Makefile.solaris28_suncc, Makefile.solaris_gcc,
         Makefile.sparc5_2.5, Makefile.sparky, Makefile.sunultra

30 Dec 2004: nifti library and start of nifti_tool
  * modified nifti/nifti1_io.[ch] (library v 1.11)
      - moved static function prototypes from header to C file
      - free extensions in nifti_image_free()
  * added nifti/nifti_tool.[ch] (v 0.1) for new program, nifti_tool
  * modified Makefile.INCLUDE to compile nifti_tool
  * modified nifti/Makefile to compile nifti_stats nifti_tool and nifti1_test

03 Jan 2005:
  * modified mri_read.c:mri_imcount() to check for ':' after "3D"

04 Jan 2005:
  * modified afni_niml.c to allow changing the nodes for a surface; made
      receive message default to the terminal window
  * added a description of AFNI_SHOW_SURF_POPUPS in README.environment

07 Jan 2005: INITIAL RELEASE OF NIFTI LIBRARY (v 1.0)
  * added function nifti_set_filenames()
  * added function nifti_read_header()
  * added static function nhdr_looks_good()
  * added static function need_nhdr_swap()
  * exported nifti_add_exten_to_list symbol
  * fixed #bytes written in nifti_write_extensions()
  * only modify offset if it is too small (nifti_set_iname_offset)
  * added nifti_type 3 to nifti_makehdrname and nifti_makeimgname
  * nifti library release 1.1: swap header in nifti_read_header()
07 Jan 2005: INITIAL RELEASE OF nifti_tool (v 1.0)
  * lots of functions
  * modified Makefile.INCLUDE to compile nifti_tool, nifti_stats and
      nifti1_test automatically

11 Jan 2005:
  * modified afni_niml.c: slist_choose_surfs() check_user_surfs on nsurf == 1

14 Jan 2005: nifti_tool v1.1:
  * changed all non-error/non-debug output from stderr to stdout
      note: creates a mismatch between normal output and debug messages
  * modified act_diff_hdrs and act_diff_nims to do the processing in
      lower-level functions
  * added functions diff_hdrs, diff_hdrs_list, diff_nims, diff_nims_list
  * added function get_field, to return a struct pointer via a fieldname
  * made 'quiet' output more quiet (no description on output)
  * made the hdr and nim_fields arrays global, so they need not be
      passed around in main()
  * return (from main()) after first act_diff() difference

21 Jan 2005:
  * modified Makefile.INCLUDE per the request of Vinai Roopchansingh,
      adding $(IFLAGS) to the CC line for compiling whereami
  * submitted the updated plug_permtest.c from Matthew Belmonte

10 Feb 2005:
  * modified nifti1.h and nifti1_io.[ch] for Kate Fissell's doxygen updates
  * modified nifti1.h: added doxygen comments for extension structs
  * modified nifti1_io.h: put most #defines in #ifdef _NIFTI1_IO_C_ block
  * modified nifti1_io.c:
      - added a doxygen-style description to every exported function
      - added doxygen-style comments within some functions
      - re-exported many znzFile functions that I had made static
      - re-added nifti_image_open (sorry, Mark)
      - every exported function now has 'nifti' in the name (19 functions)
      - made sure every alloc() has a failure test
      - added nifti_copy_extensions function, for use in nifti_copy_nim_info
      - nifti_is_gzfile: added initial strlen test
      - nifti_set_filenames: added set_byte_order parameter option
        (it seems appropriate to set the BO when new files are associated)
      - disp_nifti_1_header: prints to stdout (as opposed to stderr), with fflush
  * modified thd_niftiread.c to call nifti_swap_Nbytes (nifti_ is new)

14 Feb 2005:
  * modified plug_maxima.[ch]:
      - added 'Sphere Values' and 'Dicom Coords' interface options

16 Feb 2005:
  * modified 3dROIstats, added the -mask_f2short option

23 Feb 2005:
  * merged Kate's, Mark's and my own nifti code, and made other revisions
  * removed contents of nifti directory, and re-created it with the source
      tree from sourceforge.net
  * modified Makefile.INCLUDE to deal with the new nifti directories
  * modified thd_analyzeread.c and thd_niftiread.c not to use include directories
      - they now have explicit targets in Makefile.INCLUDE

07 Mar 2005:
  * modified thd_coords.c: added THD_3dind_to_3dmm_no_wod()
  * modified 3ddata.h: added THD_3dind_to_3dmm_no_wod declaration
  * modified plug_maxima.[ch]:
      - output appropriate coords via new THD_3dind_to_3dmm_no_wod()
      - added new debug output
      - changed default separation to 4 voxels
      - added gr_fac for printing data values in debug mode

08 Mar 2005:
  * modified nifti1_io.[ch], adding global options struct, and optional
      validation in nifti_read_header()
  * modified nifti_tool.c to remove validation of nifti_1_header structs

17 Mar 2005:
  * modified 3dROIstats.c to properly check for failure to use -mask option

21 Mar 2005:
  * updated nifti tree with Kate's changes (to fsliolib, mostly)

22 Mar 2005:
  * removed all tabs from these files:
      - vol2surf.[ch] rickr/3dresample.c rickr/file_tool.[ch]
      - plug_crender.c plug_vol2surf.c
      - rickr/Imon.[ch] rickr/realtime.[ch] rickr/r_idisp.[ch]
      - rickr/r_misc.[ch] rickr/r_new_resam_dset.[ch] rickr/serial_helper.c
      - SUMA/SUMA_3dSurf2Vol.[ch] SUMA/SUMA_3dVol2Surf.[ch]

24 March 2005:
  * modified strblast.c: added -help, -new_char, -new_string, -unescape options

05 April 2005: NIFTI changes also uploaded at sourceforge.net
  * modified nifti/nifti1_io.[ch]
      - added nifti_read_collapsed_image(), an interface for reading partial
        datasets, specifying a subset of array indices
      - for read_collapsed_image, added static functions: rci_read_data(),
        rci_alloc_mem(), and make_pivot_list()
      - added nifti_nim_is_valid() to check for consistency (more to do)
      - added nifti_nim_has_valid_dims() to do many dimensions tests
  * modified nifti/Makefile: removed escaped characters, added USEZLIB defn.
  * modified nifti/niftilib/Makefile: added nifti1_io.o target, for USEZLIB
  * modified nifti/znzlib/Makefile: removed USEZLIB defn.
  * modified nifti/utils/nifti_tool.c: (v 1.5) cannot mod_hdr on gzipped file(s)

06 April 2005:
  * modified thd_niftiread.c to set ADN_datum with any ADN_ntt or ADN_nvals
  * modified edt_dsetitems.c to init new brick types to that of sub-brick 0,
      in the case where a type array is not provided

08 April 2005:
  * modified nifti_tool.[ch]  (-> v1.6)
      - added -cbl: 'copy brick list' dataset copy functionality
      - added -ccd: 'copy collapsed dimensions' dataset copy functionality
      - added -disp_ts: 'disp time series' data display functionality
      - moved raw data display to disp_raw_data()
  * modified nifti1_io.[ch] (-> v1.7)
      - added nifti_update_dims_from_array() - to update dimensions
      - modified nifti_makehdrname() and nifti_makeimgname():
          if prefix has a valid extension, use it (else make one up)
      - added nifti_get_intlist - for making an array of ints
        (a simplified parsing sketch follows this entry)
      - fixed init of NBL->bsize in nifti_alloc_NBL_mem()  {thanks, Bob}
  * modified thd_niftiread.c, thd_writedset.c and afni_pplug_env.c to use
      the environment variable AFNI_NIFTI_DEBUG
  * modified README.environment for AFNI_NIFTI_DEBUG
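
  A sub-brick int list of this kind is typically parsed from a string
  such as '0,2,5'.  A simplified sketch of the idea, with the count
  stored in element 0 (no 'a..b' range support here; not the library
  function itself):

      #include <stdlib.h>

      /* parse "0,2,5" into a malloc'd int array; list[0] holds the
         count, values follow; returns NULL on a parse error        */
      int * parse_intlist( const char *str )
      {
          int          n = 0, maxn = 1;
          int        * list;
          const char * p;
          char       * end;

          for( p = str; *p; p++ )      /* commas + 1 bounds the count */
              if( *p == ',' ) maxn++;
          list = (int *)malloc((maxn+1)*sizeof(int));
          if( !list ) return NULL;

          for( p = str; *p; ){
              int val = (int)strtol(p, &end, 10);
              if( end == p ){ free(list); return NULL; }
              list[++n] = val;
              p = (*end == ',') ? end+1 : end;
          }
          list[0] = n;
          return list;
      }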

14 April 2005:
  * modified nifti/Makefile: mention 'docs' dir, not 'doc'
  * modified nifti/utils/Makefile: added -Wall to nifti_tool build command
  * modified nifti/niftilib/nifti1.h: doxygen comments for extension fields
  * modified nifti/niftilib/nifti1_io.[ch] (-> v1.8)
      - added nifti_set_type_from_names(), for nifti_set_filenames()
        (only updates type if number of files does not match it)
      - added is_valid_nifti_type(), just to be sure
      - updated description of nifti_read_collapsed_image() for *data change
        (if *data is already set, assume memory exists for results)
      - modified rci_alloc_mem() to allocate only if *data is NULL
  * modified nt_opts in nifti/utils/nifti_tool.h: ccd->cci, dts_lines->dci_lines,
      ccd_dims->ci_dims, and added dci (for display collapsed image)
  * modified nifti/utils/nifti_tool.[ch] (-> v1.7)
     - added -dci: 'display collapsed image' functionality
     - modified -dts to use -dci
     - modified and updated the help in use_full()
     - changed copy_collapsed_dims to copy_collapsed_image, etc.
     - fixed problem in disp_raw_data() for printing NT_DT_CHAR_PTR
     - modified act_disp_ci():
         o was act_disp_ts(), now displays arbitrary collapsed image data
         o added missing debug filename output in act_disp_ci()
         o can now save free() of data pointer for end of file loop
     - modified disp_raw_data()
         o takes a flag for whether to print newline
         o trailing spaces and zeros are removed from printing floats
     - added clear_float_zeros(), to remove trailing zeros
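
  Trailing zeros can be trimmed directly from the formatted string.  A
  hypothetical version of such a helper (assumes plain %f-style output
  with no exponent; the real clear_float_zeros() may differ):

      #include <string.h>

      /* trim trailing zeros (and any bare trailing '.') from a
         printed float, e.g. "3.1400" -> "3.14", "2.000" -> "2"  */
      void trim_zeros( char *str )
      {
          char *p;
          if( strchr(str, '.') == NULL ) return;  /* nothing to trim */
          p = str + strlen(str) - 1;
          while( p > str && *p == '0' ) *p-- = '\0';
          if( p > str && *p == '.' ) *p = '\0';
      }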

19 April 2005:
  * modified nifti1_io.[ch] (-> v1.9) :
      - added extension codes NIFTI_ECODE_COMMENT and NIFTI_ECODE_XCEDE
      - added nifti_type codes NIFTI_MAX_ECODE and NIFTI_MAX_FTYPE
      - added nifti_add_extension() {exported}
      - added nifti_fill_extension() as a static function
      - added nifti_is_valid_ecode() {exported}
      - nifti_type values are now NIFTI_FTYPE_* file codes
      - in nifti_read_extensions(), decrement 'remain' by extender size, 4
      - in nifti_set_iname_offset(), case 1, update if offset differs
      - only output '-d writing nifti file' if debug > 1
  * modified nifti_tool.[ch] (-> v1.8) :
      - added int_list struct, and keep_hist, etypes & command fields to nt_opts
      - added -add_comment_ext action
      - allowed for removal of multiple extensions, including option of ALL
      - added -keep_hist option, to store the command as a COMMENT extension
        (includes fill_cmd_string() and add_int(), is done for all actions)
      - added remove_ext_list(), for removing a list of extensions by indices
      - added -strip action, to strip all extensions and descrip fields

28 April 2005:
  * checked in Kate's changes to fsl_api_driver.c and fslio.[ch]

30 April 2005:
  * modified whereami.c so that it does not crash on missing TTatlas+tlrc

05 May 2005:
  * modified nifti1_io.h: fixed NIFTI_FTYPE_ASCII (should be 3, not 2)
  * modified nifti1_io.c: to incorporate Bob's new NIFTI_SLICE_ALT_INC2
      and NIFTI_SLICE_ALT_DEC2 codes from nifti1.h

06 May 2005: Dimon (v 0.1)
  * added files for Dimon: Dimon.[ch], dimon_afni.c, l_mri_dicom_hdr.c
  * modified Imon.[ch], l_mcw_glob.[ch], rickr/Makefile, Makefile.INCLUDE
  * mostly as a check-in for now, details and updates to follow

10 May 2005: nifti fix, Dimon (v 0.2)
  * added Kate's real_easy/nifti1_read_write.c to AFNI CVS
  * modified znzlib.c, using gzseek() for the failing gzrewind()
  * modified nifti1_io.c, opening in compressed mode only on '.gz'
  * modified to3d.c: fixed help on 'seqplus' and 'seqminus'
  * modified plug_realtime.c to handle TPATTERN command for slice timing
  * modified Dimon.c, Imon.h: added pause option to opts struct
  * modified realtime.c to set TPATTERN from opts.sp (for now)
  * modified Makefile.INCLUDE, rickr/Makefile: Dimon depends on Imon.h

17 May 2005: Dimon update (v 0.3)
  * modified Dimon.c:
      - added -infile_pattern for glob option
      - set ftype based on usage
  * modified Imon.c, setting ftype
  * modified Imon.h, adding IFM_IM_FTYPE_* codes
  * modified rickr/realtime.c, base XYZFIRST on ftype, added orient_side_rai()
  * modified plug_realtime.c, adding REG_strings_ENV (' ' -> '_')

18 May 2005: Dimon (v 0.4)
  * update complete_orients_str() for IFM_IM_FTYPE_DICOM

02 June 2005: 3dVol2Surf (v 6.4)
  * added -skip_col_non_results option

08 June 2005: added 2 million to LBUF in mri_read.c (will revisit)

10 June 2005: minor updates to plug_roiedit.[ch]

22 June 2005: Dimon (v 0.5)
  * modified Dimon.c: added -infile_prefix option and allowed single volume run
  * modified mri_dicom_hdr.c, rickr/l_mri_dicom_hdr.c: fixed small memory leak

23 June 2005:
  * modified Makefile.linux_gcc32 and Makefile.linux_gcc33_64
      - removed -DNO_GAMMA (was just a warning, but is an error on FC4)

29 June 2005:
  * modified nifti1_io.[ch]: changed NIFTI_ECODE_UNKNOWN to _IGNORE

30 June 2005: Dimon (v 0.6)
  * modified Dimon.c to process run of single-slice volumes
  * modified afni_splash.c and afni_version.c to compile under cygwin

05 July 2005: Dimon (v 1.0 initial release!), Imon (v 3.4)
  * modified Dimon.c (-> v 0.7), Imon.[ch]: removed all tabs
  * modified Dimon.c: updated -help
  * modified Makefile.INCLUDE: include Dimon as part of automatic build

07 July 2005:
  * modified rickr/Makefile and Makefile.INCLUDE for the Dimon build
    on solaris machines with gcc

13 July 2005: Dimon (v 1.1)
  * modified rickr/Dimon.c to handle a run of only 1 or 2 slices, total

22 July 2005:
  * modified 3dANOVA2.c 3dANOVA.c 3dclust.c 3dIntracranial.c 3dNotes.c
      - Peggy updated the -help output
  * modified 3ddelay.c: check for NULL strings before printing
  * modified realtime.c (and Dimon.c ->v1.2) to use IOCHAN_CLOSENOW()

25 July 2005:
  * modified Dimon.c (-> v1.3): explicit connection close on ctrl-c

27 July 2005:
  * submitted Peggy's 3dcalc.c updates (for help)

01 August 2005: Dimon 2.0
  * modified Dimon.c, dimon_afni.c, Imon.c
      - added the option '-dicom_org' to organize DICOM files before any
        other processing
      - enabled '-GERT_Reco2', to create a script to build AFNI datasets

02 August 2005:
  * modified 3dANOVA2.c, updated calculation of sums of squares for all
      a contrasts (including amean and adiff) [rickr, gangc]

03 August 2005:
  * modified 3dresample.c, r_new_resam_dset.c, to allow dxyz to override
      those from a master (if both are supplied)

17 August 2005: (niftilib -> v1.12)
  * incorporated Kate's niftilib-0.2 packaging (v1.11)
  * updated comments on most functions, added nifti_type_and_names_match()

22 August 2005:
  * modified to3d.c:T3D_set_dependent_geometries, case of not IRREGULAR:
      only use fov if nx == ny

23 August 2005: (Dimon -> v2.1)
  * added option -sort_by_num_suffix (for Jerzy)
  * output TR (instead of 0) in GERT_Reco script (for Peggy)

24 August 2005:
  * modified 3dRegAna.c: check for proper ':' usage in -model parameters

25 August 2005: nifti changes for Insight
      (nifti_tool -> v1.9, niftilib -> v1.13)
  * added CMakeLists.txt in every directory (Hans)
  * added Testing/niftilib/nifti_test.c (Hans)
  * removed tabs from all *.[ch] files
  * modified many Makefiles for SGI test and RANLIB (Hans)
  * added appropriate const qualifiers for function param pointers to const data
  * modified nifti1_io.c, nifti1_test.c, nifti_tool.c, reducing constant
    strings below 509 bytes in length (-hist, -help strings)
  * modified nifti_stats.c: replaced strdup with malloc/strcpy for warning

29 August 2005: Dimon (-> v 2.2): added options -rev_org_dir and -rev_sort_dir

01 September 2005:
  * modified 3dANOVA2.c (to properly handle multiple samples)
  * modified Dimon.c/Imon.h (Dimon -> v2.3): added option -tr

13 September 2005:
  * modified edt_emptycopy.c, editvol.h
      - added functions okay_to_add_markers() and create_empty_marker_set()
  * modified 3drefit.c
      - moved marker test and creation to said functions in edt_emptycopy.c
  * modified plug_realtime.c: add empty markers to appropriate datasets

20 September 2005: modified 2dImReg.c to return 0 from main

26 September 2005:
  * modified 3dANOVA3.c: applied formulas provided by Gang for variance
    computations of type 4 and 5, A and B contrasts (including means, diffs
    and contrs)

04 October 2005:
  * checking in changes by Hans Johnson
      - added new files Clibs/DartConfig.cmake
      - updates to Testing/niftilib/nifti_test.c (this is not all ANSI C - fix)
      - znzlib.c: cast away const for call to gzwrite
      - nifti1_io.c: comment nifti_valid_filename
                     added nifti_is_complete_filename
                     added 2 free()s in nifti_findhdrname
                     cast away const in call to znzwrite
                     fixed error in QSTR() defn (intent_name[ml]=0 -> nam[ml]=0)

11 October 2005:
  * added program 3dmaxima, with files 3dmaxima.c and maxima.[ch]
  * plug_maxima.so is now built from plug_maxima.c and maxima.[ch]
  * modified Makefile.INCLUDE, adding 3dmaxima to PROGRAMS, adding a
        3dmaxima target, and a plug_maxima.so target (for maxima.o)

17 October 2005:
  * modified 3dANOVA3.c - added -aBcontr and -Abcontr as 2nd order contrasts
  * modified 3dANOVA.h, 3dANOVA.lib - added and initialized appropriate fields

27 October 2005:
  * modified 3dANOVA3.c
      - fixed -help typo, num_Abcontr assignment and df in calc_Abc()

28 October 2005:
  * niftilib update: merged updates by Hans Johnson
      - nifti1_io.c:nifti_convert_nhdr2nim : use nifti_set_filenames()
      - updated Testing/niftilib/nifti_test.c with more tests

02 November 2005:
  * modified nifti1_io.[ch]: added skip_blank_ext to nifti_global_options
      - if skip_blank_ext and no extensions, do not read/write extender

04 November 2005:
  * modified SUMA_Surface_IO.c:SUMA_2Prefix2SurfaceName() to return NOPE in
        exists if exist1 and exist2 are false

07 November 2005:
  * checked in rhammett's mri_read_dicom.c changes for the Siemens mosaic format

10 November 2005:
  * modified mri_dicom_hdr.h
      - In defining LONG_WORD, it was assumed that long was 4 bytes, but this
        is not true on 64-bit Solaris.  Since the correct 4-byte type was
        already determined in defining U32, just use that type for l.
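
  The portable fix for this class of bug is an explicitly sized type
  instead of 'long'.  A minimal illustration (this U32 typedef stands
  in for whatever 4-byte type the header actually selects):

      #include <stdio.h>

      /* 'long' is 8 bytes under LP64 (e.g. 64-bit Solaris), so a field
         that must be exactly 4 bytes needs an explicitly sized type   */
      typedef unsigned int U32;    /* 4 bytes on ILP32 and LP64 */

      int main( void )
      {
          printf("sizeof(long) = %d, sizeof(U32) = %d\n",
                 (int)sizeof(long), (int)sizeof(U32));
          return 0;
      }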

18 November 2005:
  * modified nifti1_io.c (nifti_hist -> v1.16):
      - removed any test or access of dim[i], i>dim[0]
        (except setting them to 0 upon reading, so garbage does not propagate)
      - do not set pixdim for collapsed dims to 1.0, leave them
      - added magic and dim[i] tests in nifti_hdr_looks_good()
  * modified nifti_tool.[ch] (-> v1.10)
      - added check_hdr and check_nim action options
  * checked in some of Hans' changes (with some alterations)
      - a few casts in fslio.[ch] and nifti1_io.[ch] and () in nifti_stats.c

22 November 2005:
  * modified 3dANOVA3.c, 3dANOVA.h, 3dANOVA.lib
      - added -old_method option for using the a/bmeans, a/bdiff, a/bcontr
        computations that assume sphericity (not yet documented)

23 November 2005:
  * modified 3dANOVA2.c, added -old_method for type 3 ameans, adiff, acontr

25 November 2005: modified 3dANOVA.c, added subject ethel to -help example

29 November 2005: modified 3dROIstats.c, added more help with examples

02 December 2005:
  * modified 3dANOVA3.c, 3dANOVA2.c, 3dANOVA.h, 3dANOVA.lib
      - Note updates at the web site defined by ANOVA_MODS_LINK
      - The -old_method option requires -OK.
      - Added the -assume_sph option and a check for validity of the contrasts.
      - Contrasts are verified via contrasts_are_valid().
  * fixed Makefile.INCLUDE (had extra '\' at end of PROGRAMS)

08 December 2005: modified 3dRegAna.c, setting default workmem to 750 (MB)

09 December 2005:
  * modified 3dANOVA.c
      - modified contrast t-stat computations (per Gang)
      - added -old_method, -OK, -assume_sph and -debug options
  * modified 3dANOVA.h, added debug field to anova_options
  * modified 3dANOVA.lib
      - no models to check for level 1 in old_method_applies()
      - option -OK is insufficient by itself

14 Dec 2005:
  * modified edt_coerce.c, added EDIT_convert_dtype() and is_integral_data()
      (a sketch of such a test follows this entry)
  * modified 3dttest.c
      - process entire volume at once, not in multiple pieces
      - added -voxel option (similar to the 3dANOVA progs)
      - replaced scaling work with EDIT_convert_dtype() call
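
  A plausible shape for the integrality test named above: scan the float
  data and report whether every value is a whole number, so it could be
  stored in an integer type without loss.  A sketch only; the real
  is_integral_data() may differ:

      #include <math.h>

      /* return 1 if every float in 'data' is a whole number, else 0 */
      int all_integral( const float *data, int nvox )
      {
          int i;
          for( i = 0; i < nvox; i++ )
              if( data[i] != floorf(data[i]) ) return 0;
          return 1;
      }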

15 Dec 2005: modified 3dhistog.c: fixed use of sub-brick factors

16 Dec 2005: modified 3dUniformize.c: fixed upper_limit==0 case in resample()

28 Dec 2005:
  * modified 3dANOVA3.c
      - Added -aBdiff, -Abdiff and -abmean options and routines.
      - Replaced calc_mean_sum2_acontr() with calc_type4_acontr(), to
          avoid intermediate storage of data as floats (by absorbing the
          calculate_t_from_sums() operation).
      - Similarly, replaced calc_mean_sum2_bcontr() with calc_type4_bcontr().
      - Removed calculate_t_from_sums().
      - Do checks against EPSILON before sqrt(), in case < 0.
  * modified 3dANOVA.h, adding aBdiff, Abdiff and abmean fields to
      anova_options struct, along with the ANOVA_BOUND() macro.
  * modified 3dANOVA.lib, to init aBdiff, Abdiff and abmean struct members.

29 Dec 2005:
  * modified Dimon.c
      - make any IMAGE_LOCATION/SLICE_LOCATION difference only a warning
  * modified Makefile.INCLUDE (for cygwin)
      - removed plug_maxima.fixed from PLUGIN_FIXED
      - added Dimon.exe target
  * modified fixed_plugins.h: removed plugin_maxima from file

04 Jan 2006:
  * modified 3dANOVA2.c, replaced calc_sum_sum2_acontr and calc_t_from_sums
        with calc_type3_acontr, to avoid intermediate storage of data as floats

06 Jan 2006: modified waver.c: only output version info with new '-ver' option

25 Jan 2006:
  * added model_michaelis_menton.c model function for Jasmin Salloum
  * modified NLfit_model.h, added NL_get_aux_filename and NL_get_aux_val protos
  * modified 3dNLfim.c, added options -aux_name, -aux_fval and -voxel_count.
  * modified Makefile.INCLUDE, added model_michaelis_menton to models target
  * modified mri_read.c: mri_read_ascii() to allow a 1x1 image file

30 Jan 2006:
  * modified model_michaelis_menton.c to get aux info via environment vars
    (AFNI_MM_MODEL_RATE_FILE and AFNI_MM_MODEL_DT)
  * modified NLfit_model.h and 3dNLfim.c, removing -aux_ options and code

31 Jan 2006:
  * modified 3dANOVA3.c, actually assign df_prod, and fix label for aBdiff
  * modified nifti_tool.c, check for new vox_offset in act_mod_hdrs
  * modified afni_plugout.c, applied (modified) changes by Judd Storrs to
      override BASE_TCP_CONTROL with environment variable AFNI_PLUGOUT_TCP_BASE
  * modified README.environment for description of AFNI_PLUGOUT_TCP_BASE

02 Feb 2006: submitted version 7 of plug_permtest.c for Matthew Belmonte

07 Feb 2006: added -datum option to 3dWavelets.c

09 Feb 2006: added example to 3dANOVA3 -help

02 Mar 2006:
  * modified nifti_tool.c (v 1.12) to deal with nt = 0 in act_cbl(), due
    to the change that (potentially) leaves nt..nw as 0 in nifti1_io.c
  * modified nifti1_io.c (v 1.18) to deal with nt = 0 in nifti_alloc_NBL_mem()
  * modified thd_niftiread.c to be sure that ntt and nbuc are at least 1

09 Mar 2006: modified waver.c not to show -help after command typo

13 Mar 2006:
  * added examples to 3dmaskave.c
  * modified to3d.c, mri_read_dicom.c, mrilib.h: added option and global
    variable for assume_dicom_mosaic, to apply the liberal DICOM mosaic test
    only when set (via -assume_dicom_mosaic in to3d)

23 Mar 2006: modified 3dcalc.c: do not scale shorts if values are {0,1}

27 Mar 2006:
  * modified model_michaelis_menton.c to handle '-time' option to 3dNLfim
  * modified 3dmaskdump.c: to keep -quiet quiet

28 Mar 2006: modified vol2surf.c, plug_vol2surf.c: fixed mode computation

28 Mar 2006: modified model_michaelis_menton.c: added mag(nitude) parameter

04 Apr 2006:
  * modified cox_render.c
      - CREN_set_rgbmap(): if ncol>128, still apply 128 colors
      - in BECLEVER sections, enclose bitwise-and operations in
        parentheses, as '!=' has higher priority than '&'
        (see the sketch after this entry)
  * modified plug_crender.c
      - RCREND_reload_func_dset(): use 128 instead of NPANE_BIG to set
          bdelta, and to apply RANGE to bindex (127)
      - rd_disp_color_info(), don't assume 128 colors, use NPANE_BIG
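
  A minimal sketch of the precedence pitfall noted above (illustrative,
  not the cox_render.c code); in C, '!=' binds tighter than '&', so the
  unparenthesized form masks with the result of the comparison:

      unsigned v = 0x2;

      if( (v & 0x2) != 0 ) { /* ... */ }  /* intended: test bit 0x2    */
      if(  v & 0x2 != 0  ) { /* ... */ }  /* parses as v & (0x2 != 0), */
                                          /* i.e. v & 1: wrong bit     */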

06 Apr 2006:
  * modified thd_niftiwrite.c
      - in THD_write_nifti, catch populate_nifti_image() failure (crashed)
      - in populate_nifti_image(), return NULL (not 0) on failure, and give
          a hint for what to do in case of 'brick factors not consistent'

11 Apr 2006: put 'd' in 3dANOVA* 'Changes have been made' warnings

14 Apr 2006: applied mastery to NIFTI-1 datasets
  * modified thd_mastery.c:
      - allow NIfTI suffixes in THD_open_dataset()
      - if NIfTI, init master_bot and master_top
  * modified thd_load_datablock.c: broke sub-ranging out to
      THD_apply_master_subrange() for use in THD_load_nifti()
  * modified 3ddata.h: added prototype for THD_apply_master_subrange()
  * modified thd_niftiread.c: THD_load_nifti():
      - if mastered, pass nvals and master_ival to nifti_image_read_bricks()
      - at end, if mastered and bot <= top, THD_apply_master_subrange()

18 Apr 2006:
  * modified cs_addto_args.c: addto_args(), terminate empty sin before strcat
  * modified mri_matrix.c: moved var defn to start of block in DECODE_VALUE
  * modified SUMA_3dSkull_Strip.c: 'int code[3]' must be defined at block start

19 Apr 2006: nifticlib-0.3 updates (from Kate Fissell)
  * added Updates.txt
  * modified Makefile, utils/Makefile: removed $(ARCH) and commented SGI lines
  * modified real_easy/nifti1_read_write.c : corrected typos
  * modified 3dcalc.c: fixed typo in 'step(9-(x-20)*...' example

20 Apr 2006:
  * modified thd_opendset.c: added functions storage_mode_from_filename()
      and has_known_non_afni_extension() {for 3dcopy}
  * modified 3ddata.h: added prototypes for those two functions

21 Apr 2006: modified model_michaelis_menton.c: apply AFNI_MM_MODEL_RATE_IN_SECS

24 Apr 2006: nifti_tool.c:act_disp_ci(): removed time series length check

25 Apr 2006: changed ^M to newline in 3dcopy.c (stupid Macs)

28 Apr 2006: 3dhistog.c: fixed min/max range setting kk outside array

08 May 2006: 3drefit.c: added options -shift_tags, -dxtag, -dytag, -dztag

17 May 2006:
  * modified mri_dicom_hdr.c, rickr/l_mri_dicom_hdr.c
      - make reading of preamble automatic
      - do not print any 'illegal odd length' warnings

18 May 2006: allowed for older DICOM files that do not have a preamble
  * removed rickr/l_mri_dicom_hdr.c
  * modified mri_dicom_hdr.c
      - added FOR_DICOM test to compile for Dicom
      - made g_readpreamble global, set in DCM_OpenFile, used in readFile1
      - DCM_OpenFile reads 132 bytes to check for "DICM" at the end
        (see the sketch after this entry)
  * modified rickr/Makefile to use ../mri_dicom_hdr.c instead of
    l_mri_dicom_hdr.c
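
  A minimal sketch of that test, assuming the standard DICOM layout of a
  128-byte preamble followed by the 4-byte magic "DICM" (illustrative,
  not the DCM_OpenFile code itself):

      #include <stdio.h>
      #include <string.h>

      static int has_dicom_preamble( const char * fname )
      {
          char   buf[132];
          FILE * fp = fopen(fname, "rb");
          int    ok = 0;
          if( fp ){
              ok = fread(buf, 1, 132, fp) == 132 &&
                   memcmp(buf+128, "DICM", 4) == 0;
              fclose(fp);
          }
          return ok;    /* 0 suggests an older, preamble-less file */
      }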

23 May 2006: some nifti updates by Hans Johnson (lib ver 1.19)
  * added Testing/Data, /Testing/niftilib/nifti_test2.c and
          Testing/Data/ATestReferenceImageForReadingAndWriting.nii.gz
  * modified CMakeLists.txt, niftilib/CMakeLists.txt, utils/CMakeLists.txt,
             Testing/niftilib/nifti_test.c
  * modified utils/nifti_stats.c: added switch for not building unused funcs
  * modified nifti1_io.c:
      - nifti_write_ascii_image(): free(hstr)
      - nifti_copy_extensions(): clear num_ext, ext_list
  * rickr: use NULL when clearing ext_list

26 May 2006: .niml/.niml.dset preparations
  * modified 3ddata.h
      - #define STORAGE_BY_NIML, NI_SURF_DSET, and update LAST_STORAGE_MODE
      - add nnodes, node_list to THD_datablock struct
      - define DBLK_IS_NIML, DBLK_IS_NI_SURF_DSET, and similar DSET_*
  * modified thd_opendset.c:
      - THD_open_one_dataset(): added (unfinished) .niml/.niml.dset cases
      - storage_mode_from_filename(): added .niml/.niml.dset cases
  * modified thd_info.c: THD_dataset_info(): added the 2 new STORAGE_MODEs
  * modified thd_loaddblk.c: added (unfinished) cases for the new storage modes

30 May 2006:
  * modified 3dclust.c: get a new ID for an output 3dclust dataset
  * modified Imon.h, Dimon.c: added -save_file_list option

15 Jun 2006:
  * modified thd_niftiwrite.c:
      - in populate_nifti_image(), check for including any adjacent zero offset
        in the slice timing pattern
      - added get_slice_timing_pattern() to test for NIFTI_SLICE_* patterns

20 Jun 2006: modified 3drefit.c to handle NIfTI datasets

21 Jun 2006:
  * modified niml.h, niml_util.c: added NI_strdup_len()
  * modified @make_stim_file, adding -zero_based option

27 Jun 2006:
  * modified nifti1_io.c: fixed assign of efirst to match stated logic in
      nifti_findhdrname() (problem found by Atle Bjørnerud)

28 Jun 2006: many changes to handle NIML and NI_SURF_DSET datasets (incomplete)
  * added thd_niml.c
      - top-level THD_open_niml(), THD_load_niml(), THD_write_niml() functions
      - general read_niml_file(), write_niml_file() functions
      - processing NI_SURF_DSET datasets, including THD_ni_surf_dset_to_afni()
  * modified thd_opendset.c
      - added file_extension_list array and find_filename_extension()
      - apply THD_open_niml() to NIML and NI_SURF_DSET cases
  * modified 3ddata.h - added prototypes
  * modified edt_dset_items.c: added DSET_IS_NIML, DSET_IS_NI_SURF_DSET cases
      for new_prefix editing
  * modified thd_3Ddset.c
      - broke THD_open_3D() into read_niml_file() and THD_niml_3D_to_dataset()
  * modified thd_fetchdset.c
      - added NIML and NI_SURF_DSET cases
  * modified thd_loaddblk.c
      - opened STORAGE_BY_NIML and BY_NI_SURF_DSET cases
  * modified thd_nimlatr.c: just for unused variables
  * modified thd_writedset.c:
      - added NIML and NI_SURF_DSET write cases using THD_write_niml()
  * modified Makefile.INCLUDE: to add thd_niml.o to THD_OBJS

30 Jun 2006:
  * modified SUMA_3dVol2Surf.[ch], vol2surf.[ch], plug_vol2surf.c
      - added -save_seg_coords option

07 Jul 2006:
  * modified @auto_tlrc: changed 3dMax to 3dBrickStat

11 Jul 2006:
  * modified niml_element.c: fixed use of NI_realloc() in NI_search_group_*()
  * modified 3ddata.h: changed THD_write_niml to Boolean, added prototypes
        for THD_dset_to_ni_surf_dset() and THD_add_sparse_data()
  * modified thd_writedset.c:
      - don't let NI_SURF_DSET get written as 1D
  * modified thd_niml.c:
      - added load for NI_SURF_DSET
      - added basic write for NI_SURF_DSET

12 Jul 2006:
  * modified edt_emptycopy.c: init nnodes and node_list
  * modified thd_auxdata.c: for NI_SURF_DSET, copy any nnodes and node_list
  * modified thd_delete.c: free dblk->node_list
  * modified thd_niml.c: (maybe good enough for SUMA now)
      - write out more attributes
      - use matching XtMalloc for dblk allocation
  * modified 3ddata.h: added THD_[un]zblock_ch protos
  * modified thd_zblock.c: added THD_zblock_ch() and THD_unzblock_ch()

14 Jul 2006:
  * modified 3ddata.h: added IS_VALID_NON_AFNI_DSET() macro
  * modified 3dNotes.c:
      - replaced if(DSET_IS_NIFTI()) with if(IS_VALID_NON_AFNI_DSET())
  * modified 3drefit.c: same change as 3dNotes.c
  * modified thd_niml.c: pass dset to nsd_add_sparse_data for nx
  * modified niml_element.c, niml.h
      - added NI_set_ni_type_atr() and calls to it from NI_add_column() and
        stride(), so if ni_type is already set, it is adjusted

17 Jul 2006:
  * modified niml_element.c, niml.h:
      - fixed NI_set_ni_type_atr() to allow for type name of arbitrary length,
        not just known types
      - added NI_free_element_data()
  * modified thd_niml.c: added nsd_add_colms_type, to add column types for suma

18 Jul 2006:
  * modified thd_niml.c:
      - added COLMS_RANGE attribute element
      - use AFNI_NI_DEBUG as an integer level (as usual)

28 Jul 2006:
  * modified 3drefit.c: fixed saveatr use, and blocked atr mods with other mods

03 Aug 2006: updates for writing niml files as NI_SURF_DSET
  * modified thd_niml.c:
      - added ni_globals struct to deal with debug and write_mode
      - added and used set_ni_globs_from_env() for assigning those globals
      - added set_sparse_data_attribs()
      - added set/get access functions for globals debug and write_mode
  * modified vol2surf.c
      - added v2s_write_outfile_NSD(), and use it instead of _niml()
      - remove unused code in dump_surf_3dt()
      - added static set_output_labels()
      - allocate labels array in alloc_output_mem()
      - free labels and labels array in free_v2s_results()
  * modified vol2surf.h: added labels and nlab to v2s_results struct
  * modified SUMA_3dVol2Surf.c: use v2s_write_outfile_NSD, instead of _niml()
  * modified 3ddata.h
      - added protos for ni_globs functions and set_sparse_data_attribs()

04 Aug 2006: auto-convert NI_SURF_DSET to floats
  * modified thd_niml.c:
      - added to_float to ni_globals struct, for blocking conversion to floats
      - changed  LOC_GET_MIN_MAX_POSN to NOTYPE_GET_MIN_MAX_POSN
      - added get_blk_min_max_posn(), to deal with varying input types
      - fixed missing output column type when no node_list
      - nsd_add_colms_range(): do not require input type to be float
      - nsd_add_sparse_data(): convert output to floats, if necessary
      - added gni.to_float accessor funcs, and handle it in
        set_ni_globs_from_env()
  * modified vol2surf.c: changed set_ni_debug() to set_gni_debug()
  * modified 3ddata.h: changed prototype names, and added to_float protos

08 Aug 2006: C++ compilation changes from Greg Balls
  * modified niml/niml.h rickr/r_idisp.h rickr/r_new_resam_dset.h 3ddata.h
        afni_environ.h afni_graph.h afni.h afni_pcor.h afni_setup.h afni_suma.h
        afni_warp.h bbox.h cdflib.h coxplot.h cox_render.h cs.h debugtrace.h
        display.h editvol.h imseq.h machdep.c machdep.h maxima.h mcw_glob.h
        mcw_graf.h mcw_malloc.h mrilib.h mri_render.h multivector.h parser.h
        pbar.h plug_permtest.c plug_retroicor.c retroicor.h thd_compress.h
        thd_iochan.h thd_maker.h vol2surf.h xim.h xutil.h
    (mostly adding #ifdef __cplusplus extern "C" { #endif, and closing set)
  * modified rickr/Makefile: removed ../ge4_header.o from file_tool dep list

09 Aug 2006: vol2surf creation of command history
  * modified vol2surf.c:
      - create argc, argv from options in v2s_make_command()
      - added loc_add_2_list() and v2s_free_cmd() for v2s_make_command()
      - added labels, thres index/value and surf vol dset to gv2s_plug_opts
  * modified vol2surf.h:
      - added v2s_cmt_t struct, and included it in v2s_opts_t
      - added gpt_index/thresh, label and sv_dset to v2s_plugin_opts
  * modified afni_niml.c: receive spec file name via surface_specfile_name atr
  * modified afni_suma.h: added spec_file to SUMA_surface
  * modified afni_vol2surf.c:
      - store the surface volume dataset in the v2s_plugin_opts struct
      - also store the index and threshold value of the threshold sub-brick
  * modified plug_vol2surf.c:
      - init surface labels, and set them given the user options
  * modified SUMA_3dVol2Surf.c:
      - store command-line arguments for history note
      - added -skip_col_NSD_format option
  * modified SUMA_3dVol2Surf.h:
      - added argc, argv to set_smap_opts parameters
  * modified edt_empty_copy.c: if they exist, let okay_to_add_markers() return 1

14 Aug 2006:
  * modified mri_dicom_hdr.c, setting the type for g_readpreamble
  * modified rickr/Makefile: removed dependencies on any ../*.o, so they are
        not removed by the build process (they should not be made from rickr)

15 Aug 2006: added Makefile.linux_xorg7

17 Aug 2006:
  * modified thd_niml.c: fixed str in loc_append_vals()
  * modified Makefile.linux_xorg7: set SUMA_GLIB_VER = -2.0
  * modified Makefile.INCLUDE, passed any SUMA_GLIB_VER to SUMA builds
  * modified SUMA_Makefile_NoDev, link glib via -lglib${SUMA_GLIB_VER}
  * modified Vecwarp.c: fixed matrix-vector screen output (req. by Tom Holroyd)

18 Aug 2006: modified 3dmaxima.c, maxima.[ch]: added -coords_only option

23 Aug 2006:
  * modified thd_niml.c:
      - added sorted_node_def attr to SPARSE_DATA
      - in set_sparse_data_attribs(), if nodes_from_dset, then set
          sorted_node_def based on the node_list in dset, via
          has_sorted_node_list() (see the sketch after this entry)
  * modified vol2surf.c:
      - use -outcols_afni_NSD in v2s_make_command
      - in v2s_write_outfile_NSD(), only output node list if it exists
      - pass 0 as nodes_from_dset to has_sorted_node_list()
  * modified SUMA_3dVol2Surf.c (-> v6.7)
      - changed -skip_col_* options to -outcols_* options
      - added -outcols_afni_NSD option
  * modified 3ddata.h: added nodes_from_dset to set_sparse_data_attribs()
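
  A minimal sketch of what such a sortedness test might look like,
  assuming sorted_node_def means strictly increasing node indices
  (hypothetical code, not the actual has_sorted_node_list()):

      /* return 1 if node indices strictly increase, else 0 */
      static int nodes_are_sorted( const int * nodes, int len )
      {
          int ii;
          for( ii = 1; ii < len; ii++ )
              if( nodes[ii] <= nodes[ii-1] ) return 0;
          return 1;
      }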

23 Aug 2006: do not assume node index column is #0 in NI_SURF_DSET
  * modified thd_niml.c:
      - added sum_ngr_get_node_column()
      - apply in process_ni_sd_sparse_data(), process_ni_sd_attrs() and
          THD_add_sparse_data
  * modified vol2surf.c: possibly free nodes during write_NSD
  * modified 3ddata.h: added suma_ngr_get_node_column() prototype

25 Aug 2006:
  * modified thd_niml.c
      - added node_col to ni_globals, to hold the niml node column index
      - modified nsd_string_atr_to_slist() to skip a given index
      - apply node_col to process_ni_sparse_data() and process_ni_sd_attrs()
      - modified THD_add_sparse_data() to omit gni.node_col, not col 0

30 Aug 2006: INDEX_LIST is now a separate attribute element in the group
  * modified thd_niml.c:
      - updated nsd_string_atr_to_slist(), THD_ni_surf_dset_to_afni(),
          process_NSD_sparse_data(), process_NSD_attrs(), THD_add_sparse_data(),
          THD_dset_to_ni_surf_dset(), nsd_add_colms_type(),
          nsd_add_str_atr_to_group(), nsd_add_colms_range(),
          nsd_add_sparse_data(), set_sparse_data_attribs()
      - added process_NSD_index_list(), to read INDEX_LIST attribute element
      - added nsd_fill_index_list(), to create the INDEX_LIST element
      - added NI_get_byte_order(), to process "ni_form" attribute
      - removed suma_ngr_get_node_column()
      - removed node_col from ni_globals
      - modified new nsd_fill_index_list():
          add default list when AFNI_NSD_ADD_NODES is set
  * modified 3ddata.h: lose suma_ngr_get_node_column(), add NI_get_byte_order()
  * added Makefile.linux_xorg7_64 (compiled on radagast)

05 Sep 2006:
  * modified nifti1_io.c, added nifti_set_skip_blank_ext
  * merged many fslio NIfTI changes by Kate Fissell for niftilib-0.4
  * modified thd_niml.c: removed warning about missing Node_Index column type

06 Sep 2006:
  * modified vol2surf.c: use NI_free after NI_search_group_shallow
  * modified thd_niml.c: in nsd_add_str_atr_to_group() swap out nul chars
  * merged small nifti/README change

12 Sep 2006: modified thd_niml.c:THD_open_niml(): set brick_name to fname
15 Sep 2006: modified 3ddata.h: added AFNI_vedit_clear proto (for Greg Balls)
28 Sep 2006:
  * modified thd_niml.c: close niml stream in write_niml_file()
  * modified afni_niml.c: no AFNI_finalize_dataset_CB from process_NIML_SUMA_ixyz

12 Oct 2006: added serial_writer.c program for rasmus
16 Oct 2006: modified serial_writer.c: added -ms_sleep, -nblocks and -swap
22 Oct 2006: added model_demri_3.c
23 Oct 2006: modified model_demri_3.c with DGlen, 2 speed-ups, 1 negation fix
24 Oct 2006: modified model_demri_3.c: mpc was not stored across iterations
25 Oct 2006:
  * modified 3dNLfim.c:
      - updated check of proc_shmptr
      - changed one of the TR uses to TF (computing ct from cp)
      - expanded help, and added a sample script
26 Oct 2006:
  * modified Makefile.INCLUDE: added $(LFLAGS) to 3dNLfim target
  * modified 3dNLfim.c: limit g_voxel_count output to every 10th voxel
                      : RIB and RIT replace single R

30 Oct 2006: modified afni_base.py: added comopt.required, show(), print mods
31 Oct 2006: modified model_demri_3.c: allow ve param, instead of k_ep
02 Nov 2006: modified model_demri_3.c: change init, so Ve is reported as output

13 Nov 2006: changes to send ROI means to serial_helper (from Tom Ross)
  * modified plug_realtime.c:
      - added Mask dataset input to plugin interface
      - if Mask, send averages over each ROI to AFNI_REALTIME_MP_HOST_PORT
  * modified serial_helper.c:
      - added -num_extras option to process extra floats per TR (ROI aves)
  * modified thd_makemask.c:
      - added thd_mask_from_brick() from vol2surf.c
      - added new thd_multi_mask_from_brick()
  * modified vol2surf.c: moved thd_mask_from_brick() to thd_makemask.c
  * modified 3ddata.h: added protos for mask functions

15 Nov 2006: modified serial_helper.c: encode nex into handshake byte

17 Nov 2006:
  * modified model_demri_3.c:
      - do not exit on fatal errors, complain and return zero'd data
      - if model parameters are bad (esp. computed), zero data and return
      - if thd_floatscan() on results shows bad floats, zero data and return
      - require AFNI_MODEL_D3_R1I_DSET to be float
      - removed R1I_data_im
  * added afni_util.py, afni_proc.py, make.stim.times.py,
        and option_list.py to python_scripts
  * modified afni_base.py

18 Nov 2006: small mods to afni_base.py, afni_proc.py, option_list.py

20 Nov 2006:
  * modified Dimon.c, dimon_afni.c, Imon.h:
      - added -epsilon option for difference tests, including in dimon_afni.c

01 Dec 2006: python updates
  * added db_mod.py: contains datablock modification functions (may disappear)
  * modified afni_base.py:
      - added afni_name:rpv() - to return the relative path, if possible
      - added read_attribute(), which calls 3dAttribute on a dataset
  * modified afni_proc.py: now does most of the pre-processing
  * modified option_list.py:
      - added setpar parameter to OptionList:add_opt()
      - updated comments

02 Dec 2006: modified suma_datasets.c:
      - SUMA_iswordin -> strstr, MAXPATHLEN -> SUMA_MAX_DIR_LENGTH

07 Dec 2006: minor mods to afni_util.py, db_mod.py, make.stim.times.py

09 Dec 2006: 3dDeconvolve command in afni_proc.py
  * modified make.stim.times.py, afni_util.py, afni_proc.py, db_mod.py

10 Dec 2006: modified afni_proc.py, db_mod.py: help and other updates

11 Dec 2006: more uber-script updates
  * modified afni_proc.py: added version, history and complete help
  * modified db_mod.py: volreg_base_ind now takes run number, not dset index
  * modified make_stim_times.py:
      - renamed from make.stim.times.py
      - more help
      - per output file, append '*' if first stim row has only 1 stim 
  * modified vol2surf.[ch]: if plug_v2s:debug > 2, print 3dV2S command

12 Dec 2006:
  * modified afni_proc.py, db_mod.py, option_list.py:
      - added fitts and iresp options, fixed scale limit

13 Dec 2006:
  * modified afni_proc.py, db_mod.py
      - added -regress_stim_times_offset and -no_proc_command
        (afni_proc commands are stored by default)
  * modified make_stim_times.py: added -offset option

14 Dec 2006:
  * modified afni_proc.py, db_mod.py
      - added -copy_anat, -regress_make_1D_ideal and -regress_opts_3dD
  * modified make_stim_times.py: added required -nt option

15 Dec 2006: modified SUMA_3dVol2Surf: help for niml.dset and EPI -> surface
17 Dec 2006: modified afni_proc.py, db_mod.py:
      - added options -tshift_opts_ts, -volreg_opts_vr, -blur_opts_merge
18 Dec 2006: small mods to afni_proc.py, db_mod.py, make_stim_times.py
19 Dec 2006:
  * modified afni_proc.py, db_mod.py: help update, use quotize_list
  * modified afni_util.py: added quotize_list
  * modified make_stim_times.py: use str(%f) for printing

20 Dec 2006: afni_proc.py (version 1.0 - initial release)
  * modified afni_proc.py
      - changed -regress_make_1D_ideal to -regress_make_ideal_sum
      - added output of stim ideals (default) and option -regress_no_ideals
      - verify that AFNI datasets are unique
      - added -regress_no_stim_times
  * modified afni_base.py: added afni_name.pve()
  * modified afni_util.py: added uniq_list_as_dsets, basis_has_known_response
  * modified db_mod.py: for change in 'ideal' options & -regress_no_stim_times
  * added ask_me.py: basically empty, to prompt users for options

21 Dec 2006: afni_proc.py (v1.2)
      - help, start -ask_me, updated when to use -iresp/ideal
22 Dec 2006: modified afni_proc.py, make_stim_times.py for AFNI_data2 times

25 Dec 2006:
  * modified afni_proc.py (v1.4): updates for -ask_me
  * modified ask_me.py: first pass, result matches ED_process
  * modified afni_util.py: added list_to_datasets() and float test
  * small mods to db_mod.py, option_list

27 Dec 2006: afni_proc.py (1.5): ask_me help

28 Dec 2006: afni_proc.py (1.6)
  * modified afni_proc.py: added -gltsym examples
  * modified afni_util.py: added an opt_prefix parameter to quotize_list()
  * modified db_mod.py   : used min(200,a/b*100) in scale block

03 Jan 2007: afni_proc.py (1.7)
  * modified afni_proc.py, afni_util.py, db_mod.py:
      - help updates, no blank '\' line from -gltsym, -copy_anat in examples
  * modified 3dTshift.c: added -no_detrend

04 Jan 2007: modified 3dTshift.c: added warning for -no_detrend and MRI_FOURIER

08 Jan 2007: afni_proc.py (1.8)
  * modified afni_proc.py, db_mod.py:
      - changed default script name to proc.SUBJ_ID, and removed -script from
          most examples
      - added options '-bash', '-copy_files', '-volreg_zpad', '-tlrc_anat',
          '-tlrc_base', '-tlrc_no_ss', '-tlrc_rmode', '-tlrc_suffix'

10 Jan 2007: afni_proc.py (1.9) added aligned line wrapping
  * modified afni_proc.py, afni_util.py
      - new functions add_line_wrappers, align_wrappers, insert_wrappers,
                      get_next_indentation, needs_wrapper, find_command_end,
                      num_leading_line_spaces, find_next_space, find_last_space

11 Jan 2007: modified afni_proc.py:
      - subj = $argv[1], added index to -glt_label in -help
      - rename glt contrast files to gltN.txt (so change AFNI_data2 files)

12 Jan 2007: modified afni_proc.py (1.11), db_mod.py:
      - added options -move_preproc_files, -regress_no_motion
      - use $output_dir var in script, and echo version at run-time
      - append .$subj to more output files

16 Jan 2007:
  * modified plug_crender.c: fixed use of Pos with bigmode
  * modified db_mod.py to allow -tlrc_anat without a +view in -copy_anat

17 Jan 2007: modified db_mod.py: -tlrc_anat ==> default of '-tlrc_suffix NONE'

26 Jan 2007: modified afni_base.py, afni_proc.py, afni_util.py, ask_me.py
             db_mod.py, make_stim_times.py, option_list.py
      - changed all True/False uses to 1/0 (for older python versions)
      - afni_proc.py: if only 1 run, warn user, do not use 3dMean

02 Feb 2007:
      - afni_proc.py: put execution command at top of script
      - modified db_mod.py: print blur_size as float
      - modified make_stim_times.py: added -ver, -hist, extra '*' run 1 only

06 Feb 2007: added TTatlas example to 3dcalc help

20 Feb 2007:
  * modified -help of make_stim_times.py (fixing old make.stim.times)
  * modified thd_opendset.c: made fsize unsigned (handles 4.2 GB files, now)

21 Feb 2007: modified afni_proc.py (1.16), db_mod.py
      - added optional 'despike' block
      - added options -do_block and -despike_opts_3dDes
  * updated nifti tree to match that of sourceforge
      - minor changes to CMakeLists.txt DartConfig.cmake
        examples/CMakeLists.txt niftilib/CMakeLists.txt niftilib/nifti1_io.c
        real_easy/nifti1_read_write.c Testing/niftilib/nifti_test.c
        utils/CMakeLists.txt znzlib/CMakeLists.txt znzlib/znzlib.h
  * updated nifti/fsliolib/fslio.c: NULL check from David Akers

23 Feb 2007:
  * modified imseq.c to do tick div in mm for Binder
  * modified README.environment: added AFNI_IMAGE_TICK_DIV_IN_MM variable

27 Feb 2007: afni_proc.py (v 1.17)
  * modified afni_proc.py, db_mod.py, option_list.py:
      - -volreg_align_to now defaults to 'third' (was 'first')
      - added +orig to despike input
      - added 'empty' block type, for a placeholder

28 Feb 2007: fixed fsize problem in thd_opendset.c (from change to unsigned)

01 Mar 2007:
  * modified README.environment
      - added variables AFNI_NIML_DEBUG, AFNI_NSD_ADD_NODES,
        AFNI_NSD_TO_FLOAT and AFNI_NIML_TEXT_DATA
  * modified thd_niml.c: allowed sub-brick selection via thd_mastery
  * modified thd_mastery.c: init master_bot/top for .niml.dset files

02 Mar 2007:
  * modified count.c
      - added '-sep', same as '-suffix'
      - extended number of strncmp() characters for many options
  * modified option_list.py: if n_exp = -N, then at least N opts are required

05 Mar 2007: per Jason Bacon and Michael Hanke:
  * JB: modified @escape-: added '!' in #!/bin/tcsh
  * JB: modified ask_me.py, db_mod.py, added 'env python', for crazy users
  * MH: added nifti/nifticdf: CMakeLists.txt, Makefile, nifticdf.c, nifticdf.h
        (separating nifti_stats.c into nifticdf.[ch])
  * MH: modified CMakeLists.txt and utils/CMakeLists.txt for cmake
  * MH: modified nifti_stats.c (removed all functions but main)
  * rr: modified Makefile, README, utils/Makefile (to build without cmake)
  * rr: modified Makefile.INCLUDE
        - replace nifti_stats.o with nifticdf.o in CS_OBJS
        - add nifticdf.o target and link to nifti_stats
        - modified nifticdf.[ch]: reverted to be closer to nifti_stats.c
          (moving 7 protos to nifticdf.h, and all but main to nifticdf.c)
          (keep all static and __COMPILE_UNUSED_FUNCTIONS__ use)

15 Mar 2007: mod afni_proc.py, db_mod.py: use x1D suffix, removed -full_first
19 Mar 2007: modified afni_proc.py: allow dataset TR stored in deprecated ms
25 Mar 2007: afni_proc.py: added -help for long-existing -regress_no_stim_times
19 Apr 2007: afni_proc.py (v1.21): apply +orig in 1-run mean using 3dcopy
01 May 2007: included Hans' updates: CMakeLists.txt, nifticdf/CMakeLists.txt
03 May 2007:
  * added 3dPAR2AFNI.pl, from Colm G. Connolly
  * modified Makefile.INCLUDE: added 3dPAR2AFNI.pl to SCRIPTS
  * modified afni_proc.py: added BLOCK(5) to the examples
08 May 2007:
  * w/dglen, mod 3dcalc.c, thd_mastery.c to handle long sub-brick lists
  * modified afni_proc.py, db_mod.py, option_list.py (v 1.22)
        - change read_options() to be compatible with python version 2.2
        - '-basis_normall 1' is no longer used by default
        - rename -regress_no_stim_times to -regress_use_stim_files

10 May 2007:
  * modified nifticdf.[ch], mrilib.h, Makefile.INCLUDE, SUMA_Makefile_NoDev
        - use cdflib functions from nifticdf, not from cdflib directory
  * removed cdflib directory and cdflib.h

16 May 2007:
  * modified nifti/Updates.txt in preparation of nifticlib-0.5 release
  * modified nifti1_read_write.c for Kate, to fix comment typo

17 May 2007: nifti update for release 0.5
  * modified Clibs/CMakeLists.txt, set MINOR to 5
  * modified Makefile, examples/Makefile, utils/Makefile to apply ARCH
      variable for easier building

30 May 2007: nifti CMakeList updates from Michael Hanke
01 Jun 2007:
  * modified afni_proc.py, db_mod.py:
        - changed Xmat.1D to X.xmat.1D, apply -xjpeg in 3dDeconvolve
  * modified nifti/Makefile, README for nifticlib-0.5 release

04 Jun 2007:
  * modified nifti1_io.c: noted release 0.5 in history
  * modified nifti_tool.c: added free_opts_mem() to appease valgrind
  * modified afni_proc.py, db_mod.py: added -scale_no_max

05 Jun 2007:
  * modified nifti1_io.c: nifti_add_exten_to_list:
        - revert on failure, free old list
  * modified nifti_tool.c: act_check_hdrs: free(nim)->nifti_image_free()

06 Jun 2007:
  * modified thd_makemask.c: THD_makemask() and THD_makedsetmask()
        - for short and byte datasets, check for empty mask
07 Jun 2007:
  * modified nifti1_io.c: nifti_copy_extensions: use esize-8 for data size
  * modified nifti1.h: note that edata is of length esize-8 (see below)
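
  For reference, the extension layout that makes esize-8 the payload
  size (the 8 bytes cover the esize and ecode fields); a sketch of the
  struct as declared in nifti1_io.h:

      typedef struct {
          int    esize;  /* total extension size, a multiple of 16 */
          int    ecode;  /* extension code, e.g. NIFTI_ECODE_*     */
          char * edata;  /* payload: esize-8 bytes of data         */
      } nifti1_extension;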

08 Jun 2007:
  * modified file_tool.[ch]: added -show_bad_backslash and -show_file_type

11 Jun 2007: updates for new image creation
  * modified nifti1_io.[ch]:
        - added nifti_make_new_header() - to create from dims/dtype
        - added nifti_make_new_nim() - to create from dims/dtype/fill
        - added nifti_is_valid_datatype(), and more debug info
  * modified nifti_tool.[ch]:
        - added nt_image_read, nt_read_header and nt_read_bricks
          to wrap nifti read functions, allowing creation of new datasets
        - added -make_im, -new_dim, -new_datatype and -copy_im
  * modified nifti1_test.c: added trailing nifti_image_free(nim)
  * modified thd_niftiread.c: to allow nx = ny = 1

13 Jun 2007: nifti_tool.c help update, file_tool.c help update
22 Jun 2007: modified linux_xorg7 and _64 Makefiles to link motif statically

27 Jun 2007:
  * modified afni_base.py, afni_proc.py, afni_util.py, db_mod.py:
        - on error, display failed command
  * modified Makefile.linux_xorg7 and xorg7_64 for static linking in SUMA

28 Jun 2007: minor changes from HJ
  * modified CMakeLists.txt, niftilib/CMakeLists.txt, nifti1_io.c

29 Jun 2007: file_tool can work with ANALYZE headers
  * added fields.[ch], incorporating field lists from nifti_tool
  * modified file_tool.[ch]:
        - added options -def_ana_hdr, -diff_ana_hdrs, -disp_ana_hdr, -hex
  * modified rickr/Makefile: file_tool depends on fields.[ch]
  * modified 3dANOVA.h: set MAX_OBS to 300, for T. Holroyd

30 Jun 2007: modified Makefile.INCLUDE: added svm to afni_src.tgz target

01 Jul 2007:
  * modified fields.[ch]: added add_string (from nifti_tool.c)
  * modified file_tool.[ch]: added ability to modify fields of an ANALYZE file

02 Jul 2007:
  * modified thd_niftiread.c: changed missing xform error to warning
  * modified model_demri_3.c: return on first_time errors
  * modified 3dcopy.c: complain and exit on unknown option

03 Jul 2007: modified model_demri_3.c: allow MP file as row
10 Jul 2007: modified thd_coords.c: moved verbose under fabs(ang_merit)
17 Jul 2007: modified 3dmaxima.c: fixed -n_style_sort option use

18 Jul 2007: first GIFTI files added (v 0.0)
  * added gifti directory and simple Makefile
  * added gifti.[ch]     : main programmer library functions
  * added gifti_xml.[ch] : XML functions, to be called from gifti.c
  * added gtest.[ch]     : a sample test of the library files
  * added get.times      : a script to time reading of gifti images
  * added test.io        : a script to test GIFTI->GIFTI I/O
  * modified Makefile.INCLUDE : to copy gifti directory for afni_src.tgz

19 Jul 2007: modified model_demri_3.c: minor tweak to ct(t) equation
20 Jul 2007: modified Makefile.INCLUDE, gifti/Makefile, for building gtest

24 Jul 2007:
  * modified rickr/r_new_resam_dset.[ch] rickr/3dresample.c
             SUMA/SUMA_3dSkullStrip.c 3dSpatNorm.c plug_crender.c whereami.c
             rickr/Makefile Makefile.INCLUDE SUMA/SUMA_Makefile_NoDev
        - removed librickr.a (objects go into libmri.a)
        - added get_data param to r_new_resam_dset()

25 Jul 2007:
  * modified svm/3dsvm_common.c to use rint (instead of rintf, re: solaris)
  * modified model_demri_3.c: help update

26 Jul 2007:
  * modified Makefile.INCLUDE to not use $< variable
  * modified 3dDeconvolve.c: -stim_times with exactly 0 good times is okay
  * modified svm/plug_3dsvm.c: moved variable definitions to block tops
  * modified Dimon.c, Imon.c: help typos

27 Jul 2007:
  * modified nifti1_io.[ch]: handle 1 vol > 2^31 bytes
  * modified nifti_tool.c: return 0 on -help, -hist, -ver
  * modified thd_niftiwrite.c: replace some all-caps prints

28 Jul 2007:
  * modified nifti1_io.[ch]: handle multiple volumes > 2^32 bytes
  * modified: 1dSEM.c 2dImReg.c 3dDTtoDWI.c  3dDWItoDT.c 3dNLfim.c
              3dStatClust.c 3dTSgen.c 3dUniformize.c RSFgen.c RegAna.c
              plug_nlfit.c matrix.[ch] Makefile.INCLUDE SUMA/SUMA_MiscFunc.c
        - moved matrix.c to libmri.a

30 Jul 2007: nifti updates for regression testing
  * modified Makefile, README, Updates.txt
  * added Testing/Makefile, and under new Testing/nifti.regress_test directory:
        README, @test, @show.diffs, and under new commands directory:
            c01.versions, c02.nt.help, c03.hist, c04.disp.anat0.info,
            c05.mod.hdr, c06.add.ext, c07.cbl.4bricks, c08.dts.19.36.11,
            c09.dts4.compare, c10a.dci.run.210, c10.dci.ts4, c11.add.comment,
            c12.check.comments, c13.check.hdrs, c14.make.dsets, c15.new.files 
  * modified gifti/gtest.c, Makefile: init gfile, add CFLAGS

31 Jul 2007:
  * modified 3dAllineate.c, 3dresample.c, 3dSegment.c, 3dSpatNorm.c,
             afni_plugin.c, Makefile.INCLUDE, mrilib.h, plug_crender.c,
             rickr/3dresample.c, rickr/r_new_resam_dset.c,
             SUMA/SUMA_3dSkullStrip.c, whereami.c
        - included r_new_resam_dset, r_hex_str_to_long, r_idisp_fd_brick
          in forced_loads[]
  * modified r_idisp.[ch]: nuked unused r_idisp_cren_stuff
  * modified nifti/Makefile, Testing/Makefile, Testing/README_regress, and
        renamed to nifti_regress_test, all to remove '.' from dir names
  * modified afni_func.c: watch for overflow in jj if ar_fim is garbage

01 Aug 2007:
  * modified gifti.[ch], gifti_xml.c, gtest.[ch]
        - changed dim0..dim5 to dims[], and nvals to size_t
        - added gifti_init_darray_from_attrs and some validation functions

02 Aug 2007: modified file_tool.[ch]: added -disp_hex, -disp_hex{1,2,4}
06 Aug 2007:
  * modified Makefile.INCLUDE: added targets libmri.so, libmrix.so
  * modified afni_vol2surf.c, afni_func.c: better overflow guards
07 Aug 2007: help update to 3dresample
08 Aug 2007:
  * modified RegAna.c: changed the 4 EPSILON values to 10e-12 (from 10e-5),
        to allow division by smaller sums of errors, to prevent setting
        valid output to zero
  * modified nifti1_io.c: for list, valid_nifti_brick_list requires 3 dims

24 Aug 2007:
  * removed znzlib/config.h
  * incorporated Hans Johnson's changes into nifti tree
27 Aug 2007:
  * modified afni_vol2surf.c, for non-big, include ovc[npanes], Mike B reported
31 Aug 2007:
  * added model_conv_diffgamma.c, for Rasmus
  * modified Makefile.INCLUDE: add model_conv_diffgamma.so to the model list
  * modified mri_read_dicom.c: no more AFNI_DICOM_WINDOW warnings
07 Sep 2007:
  * modified Makefile.linux_xorg7/_64 to work on Fedora7
  * modified model_conv_diffgamma.c: fix diff, never set ts[0] to 1
17 Sep 2007:
  * modified 3dDeconvolve.c: show voxel loop when numjobs > 1
  * modified model_conv_diffgamma.c: allow no scaling, add more debug

20 Sep 2007: modified thd_opendset.c: THD_deconflict_nifti needs to use path

24 Sep 2007:
  * modified 3dbucket.c 3dCM.c 3dnewid.c 3dNotes.c 3drefit.c 3dTcat.c adwarp.c
             afni.c plug_notes.c plug_tag.c readme_env.h thd_writedset.c:
        - changed AFNI_DONT_DECONFLICT to AFNI_DECONFLICT
        - modified default behavior to failure (from deconflict)
  * modified 3dTshift.c: help fix for seqplus/seqminus
  * modified AFNI.afnirc: set hints to YES as default

27 Sep 2007: modified Makefile.INCLUDE: added @DriveAfni/Suma to SCRIPTS
02 Oct 2007: modified AlphaSim.c: added -seed option
03 Oct 2007:
  * modified 3dDeconvolve: use default polort of just 1+floor(rtime/150)
    (worked example below)
  * modified afni_proc.py, db_mod.py: apply same default polort
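
  Worked example of that default, assuming rtime is the run length in
  seconds:

      polort = 1 + floor(rtime/150)
      rtime = 300s  =>  polort = 1 + floor(2.0) = 3
      rtime = 120s  =>  polort = 1 + floor(0.8) = 1
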
04 Oct 2007: modified ccalc.c: quit on ctrl-D, no error on empty line
10 Oct 2007:
  * modified db_mod.py: need math for floor()
  * modified 3dDeconvolve.c: set AFNI_DECONFLICT=OVERWRITE in do_xrestore_stuff
12 Oct 2007: modified thd_niftiread/write.c: get/set nim->toffset
19 Oct 2007: modified mbig.c to handle mallocs above 2 GB
22 Oct 2007:
  * modified Makefile.INCLUDE: added R_scripts dir
  * checked in Gang's current 3dLME.R and io.R scripts
23 Oct 2007:
  * added afni_run_R script, to set AFNI_R_DIR and invoke R
  * modified Makefile.INCLUDE, added afni_run_R to SCRIPTS
  * modified 3dLME.R, to use the AFNI_R_DIR environment variable

25 Oct 2007: modified 3dfractionize.c: added another help example
26 Oct 2007: gifti 0.2
  * renamed gifti.? to gifti_io.?, gtest to gifti_test
  * modified get.times, test.io: applied gifti_test name
  * modified Makefile: applied name changes, added clean: target
  * modified gifti_io.[ch]: prepended 'gifti_' to main data structures
        - MetaData    -> gifti_MetaData,    LabelTable -> gifti_LabelTable,
          CoordSystem -> gifti_CoordSystem, DataArray  -> gifti_DataArray
  * modified gifti_xml.c:
        - added indent level to control structure and fixed logic
        - allowed more CDATA parents (any PCDATA)
        - added ewrite_text_ele and applied ewrite_cdata_ele
  * modified gifti_test.c: changed option to -gifti_ver
29 Oct 2007:
  * modified gifti*.[ch]: changed gifti datastruct prefixes from gifti_ to gii
  * modified test.io: added -nots option, to skip time series
  * added README.gifti
30 Oct 2007: sync nifti from sourceforge (from Michael Hanke)
  * added LICENSE, Makefile.cross_mingw32, packaging/DevPackage.template
  * modified CMakeLists.txt, Updates.txt
08 Nov 2007:
  * modified fslio.c: applied Hanke fix for FslFileType
  * modified nifti1_io.c: applied Yaroslav fix for ARM alignment problem
  * modified model_demri_3.c: allow for nfirst == 0
09 Nov 2007: modified model_demri_3.c: added AFNI_MODEL_D3_PER_MIN
13 Nov 2007:
  * modified adwarp.c: applied AFNI_DECONFLICT for both overwrite and decon
  * modified SUMA_Load_Surface_Object.c:
        - apply SUMA_SurfaceTypeCode in SUMA_coord_file
14 Nov 2007: modified Makefile.cygwin, Makefile.INCLUDE for cygwin build
  * modified adwarp.c: aside from -force, let user's AFNI_DECONFLICT decide
21 Nov 2007: gifti base64 I/O: lib version 0.3
  * modified gifti_io.[ch], gifti_xml.[ch], gifti_test.c, test.io
        - added I/O routines for base64 via b64_encode_table/b64_decode_table
        - append_to_data_b64(), decode_b64(), copy_b64_data
        - added b64_check/b64_errors to the global struct (see the
          sketch after this entry)
        - pop_darray: check for b64_errors and byte-swapping
        - dind is size_t
        - notable functions: gifti_list_index2string, gifti_disp_hex_data
            gifti_check_swap, gifti_swap_Nbytes
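
  A minimal sketch of table-driven base64 validation in the spirit of
  the b64_check/b64_errors fields above (illustrative, not the actual
  gifticlib code):

      #include <string.h>

      static const char b64_chars[] =
          "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
          "0123456789+/";

      /* count invalid bytes, accumulating errors instead of
         failing on the first bad character                   */
      static long long b64_count_errors( const char * buf, long long len )
      {
          long long ii, nbad = 0;
          for( ii = 0; ii < len; ii++ )
              if( buf[ii] != '=' &&
                  ( buf[ii] == '\0' || !strchr(b64_chars, buf[ii]) ) )
                  nbad++;
          return nbad;
      }
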
26 Nov 2007: modified afni_proc.py, db_mod.py: volreg defaults to cubic interp.
28 Nov 2007: nifti updates for GIFTI and datatype_to/from_string
  * modified nifti1.h: added 5 new INTENTs for GIFTI, plus RGBA32 types
  * modified nifti1.[ch]:
        - added NIFTI_ECODE_FREESURFER
        - added nifti_type_list, an array of nifti_type_ele structs
  * modified nifti_tool.[ch]: added -help_datatypes, to list or test types
  * modified nifti_regress_test/commands/c02.nt.help: added -help_datatypes
  * modified Updates.txt: for these changes
29 Nov 2007:
  * modified nifti1.h, nifti1_io.c: use 2304 for RGBA32 types
  * modified gifti_io.c, gifti_xml.c: fixed nvpair value alloc
02 Dec 2007:
  * modified Makefile.linux_xorg7: added -D_FILE_OFFSET_BITS=64 for LFS
    (will probably do that in other Makefiles)
03 Dec 2007: applied changes for GIFTI Format 1.0 (11/21)
  * replaced Category with Intent
  * replaced Location attribute with ExternalFileName/Offset
  * added NumberOfDataArrays attribute to GIFTI element
  * applied new index_order strings
05 Dec 2007: applied changes for NIfTI release 1.0
  * modified Updates.txt, Makefile, README, CMakeLists.txt
             fsliolib/CMakeLists.txt nifticdf/CMakeLists.txt
             niftilib/CMakeLists.txt znzlib/CMakeLists.txt
06 Dec 2007: applied more NIfTI updates: release 1.0.0 (extra 0 :)
  * modified Makefile README packaging/nifticlib.spec
08 Dec 2007: allowed ANALYZE headers in nifti_hdr_looks_good
10 Dec 2007: gifticlib 0.6
  * modified gifti_io.[ch], gifti_test.[ch], gifti_xml.[ch],
             Makefile, README.gifti
        - can read/write Base64Binary datasets (can set compress level)
        - removed datatype lists (have gifti_type_list)
        - added gifti_read_da_list(), with only partial ability
        - added GIFTI numDA attribute
        - change size_t to long long
  * modified 3dresample.c: allowed for AFNI_DECONFLICT
11 Dec 2007: gifticlib 0.7
  * modified gifti_io.[ch], gifti_xml.c, gifti_test.c, README.gifti:
        - added GIFTI_B64_CHECK defines and disp_gxml_data()
        - set b64_check default to SKIPNCOUNT
12 Dec 2007: gifticlib 0.8
  * modified gifti_io.[ch], gifti_xml.c:
        - added sub-surface selection, via dalist in gifti_read_da_list()
        - added gifti_copy_DataArray, and other structures
13 Dec 2007: modified thd_brainnormalize.h: replace unsetenv with putenv(NO)
14 Dec 2007: modified Makefile.linux_gcc33_64 for building on FC3 (was 2)
27 Dec 2007: added Makefile.macosx_10.5_G5, Makefile.macosx_10.5_Intel
        - contain -dylib_file option, for resolving 10.5 problem with libGL

28 Dec 2007: gifti_io v0.9 (now, with gifti_tool)
  * added gifti_tool.[ch]: replacing gifti_test, with added functionality
  * modified gifti_test.[ch]: simplifying the program as a sample
  * modified gifti.get.times, gifti.test.io, Makefile: use gifti_tool
  * modified gifti_io.[ch], gifti_xml.[ch]:
        - made zlib optional, via -DHAVE_ZLIB in compile
          (without zlib, the user will get warnings)
        - now users only #include gifti_io.h, not gifti_xml, expat or zlib
        - added more comments and made tables more readable
        - added all user-variable access functions and reset_user_vars()
        - added gifti_free_image_contents(), gifti_disp_raw_data(),
                gifti_clear_float_zeros() and gifti_set_all_DA_attribs()
        - changed gifti_gim_DA_size to long long
        - added GIFTI_B64_CHECK_UNDEF as 0
        - fixed 0-width indenting and accumulating base64 errors

03 Jan 2008:
  * modified gifti_io.[ch], gifti_xml.[ch] (v0.10)
        - added top-level gifti_create_image() interface
        - must now link libniftiio
        - gifti_add_empty_darray() now takes num_to_add
        - if data was expected but not read, free it
          (can add via gifti_alloc_all_data())
        - many minor changes
  * modified gifti_tool.[ch] (v0.1)
        - can do one of display, write or test (more to come)
        - added dset creation ability and options, via -new_dset or MAKE_IM
          (options -new_*, for numDA, intent, dtype, ndim, dims, data)
        - added AFNI-style DA selection, for input datasets
  * modified README.gifti, gifti/Makefile

11 Jan 2008:
  * modified mri_to_byte.c afni_vol2surf.c mrilib.h:
        - added mri_to_bytemask() and a call to it in afni_vol2surf, for
          using the clustered result in vol2surf
  * modified gifti_io.[ch], gifti_xml.c, Makefile, README.gifti
        - attribute/data setting functions are more flexible
        - added gifti_disp_dtd_url, gifti_set_DA_meta, gifti_valid_int_list,
          DA_data_exists, gifti_add_to_meta 
  * modified gifti_tool.[ch]
        - added option -gifti_dtd_url
        - added options -mod_DAs and -read_DAs (replaced -dalist)
        - added options -mod_add_data, -mod_DA_attr, -mod_DA_meta,
                        -mod_gim_attr, -mod_gim_meta
          (modification takes place at dataset read time)
        - reformatted help output
16 Jan 2008: giftilib 0.12, gifti_tool 0.3
  * modified gifti.test.io: added new -no_updates option
  * modified gifti_io.[ch], gifti_xml.[ch], README.gifti:
       - added gifti_copy_gifti_image() and gifti_convert_to_float()
       - added gifti_valid_LabelTable(), gifticlib_version(),
               gifti_copy_LabelTable(), gifti_update_nbyper() and
               gifti_valid_gifti_image()
       - added control over library updates to metadata
       - expanded checks in gifti_valid_dims
  * modified gifti_tool.[ch]:
       - added options -gifti_zlib, -gifti_test, -mod_to_float, -no_updates
  * modified gifti/Makefile: in clean_all, rm gifti*.lo*
  * modified Makefile.INCLUDE: added gifti_tool target

18 Jan 2008: modified 3dclust.c: fixed "MI RL" description in -help
22 Jan 2008: afni_proc.py updates for estimating smoothness
  * modified afni_base.py: added 'short' to comopt.show()
  * modified option_list.py: added 'short' to OptionList.show()
  * modified afni_proc.py:
      - added -show_valid_opts to simply print options
      - added -regress_est_blur_epits, -regress_est_blur_errts,
              -regress_no_mask and -regress_errts_prefix options
      - creation of all_runs always happens now

23 Jan 2008: added useless statements to fix suma crashes on FC7 (compiler?)
  * modified SUMA_Load_Surface_Object.c: added optimization appease message
25 Jan 2008: fixed Makefile.linux_gcc32 to link motif statically

05 Feb 2008: nifti updates for Hans Johnson, removing nia.gz functionality
  * modified cmake_testscripts/newfiles_test.sh, commands/c15.new.files,
             utils/nifti1_test.c, niftilib/nifti1_io.c
06 Feb 2008: modified 3dbucket.c to copy fdr curves

13 Feb 2008: beginning GIFTI support in AFNI
  * added gifti_choice.c, thd_gifti.c
  * modified: 3ddata.h, thd_auxdata.c, thd_info.c, thd_opendset.c,
    edt_dsetitems.c, thd_delete.c, thd_loaddblk.c, thd_writedset.c,
    thd_fetchdset.c, thd_mastery.c, thd_niml.c, Makefile.INCLUDE

20 Feb 2008: GIFTI to AFNI
  * modified 3ddata.h: added dtype_nifti_to_niml prototype
  * modified thd_niml.c: added dtype_nifti_to_niml(), plus 2 stupid changes
  * modified thd_gifti.c: added functionality to convert GIFTI to NIML/AFNI
  * modified gifti_io.[ch]: added gifti_get_meta_value and gifti_image_has_data
  * modified Makefile.linux_xorg7_64: to have the option of GIFTI support

21 Feb 2008: GIFTI I/O mostly working
  * modified 3ddata.h: added NI_write_gifti(), NI_find_element_by_aname(),
                             dtype_niml_to_nifti(), nsd_string_atr_to_slist()
  * modified gifti_choice.c: changed NI_write_gifti() prototype
  * modified thd_niml.c:
      - exported nsd_string_atr_to_slist()
      - added dtype_niml_to_nifti(), NI_find_element_by_aname()
  * modified thd_gifti.c: added functions to convert AFNI->GIFTI
  * modified Makefiles: added USE_GIFTI and LGIFTI to uncomment for application
        linux_gcc32 linux_gcc33_64 linux_xorg7 macosx_10.4 macosx_10.4_G5
        macosx_10.4_Intel macosx_10.5_G5 macosx_10.5_Intel solaris28_gcc
        solaris29_suncc solaris29_suncc_64 solaris28_suncc
24 Feb 2008: minor fixes to thd_gifti.c
25 Feb 2008:
  * modified gifti_io.c: metadata element without data is valid
  * modified afni_vol2surf.c: VEDIT_IVAL against fim_index (not thr_index)




AFNI file: AFNI.afnirc
// This is a sample .afnirc file.
// Copy it into your home directory, with the name '.afnirc'.
// Then edit it to your heart's delight.
// See README.setup and README.environment for documentation.

***COLORS
// Define new overlay colors.  These will appear on the color menus.

 salmon = #ff8866
 navy   = navyblue

***ENVIRONMENT

// Most (not all) of the Unix environment variables that affect AFNI

   IDCODE_PREFIX            = AFN  // 3 letter prefix for dataset ID codes

// AFNI_graph_boxes_thick   =  0   // 0=thin lines, 1=thick lines, for graph boxes
// AFNI_graph_grid_thick    =  0   // ditto for the graph vertical grid lines
   AFNI_graph_data_thick    =  1   // ditto for the data graphs
   AFNI_graph_ideal_thick   =  1   // ditto for the ideal graphs
   AFNI_graph_ort_thick     =  1   // ditto for the ort graphs
   AFNI_graph_dplot_thick   =  1   // ditto for the dplot graphs
   AFNI_graph_ggap          =  3   // initial spacing between graph boxes
   AFNI_graph_width         = 512  // initial width of graph window
   AFNI_graph_height        = 384  // initial height of graph window
// AFNI_graph_matrix        =  3   // initial number of sub-graphs
   AFNI_GRAPH_TEXTLIMIT     = 20         // max number of rows shown in Graph popup
// AFNI_GRAPH_BASELINE      = Individual // type of baseline to set in Graph windows
// AFNI_GRAPH_GLOBALBASE    = 0          // value for Global baselines in Graph windows

// AFNI_montage_periodic    = True // allows periodic montage wraparound
// AFNI_purge               = True // allows automatic dataset memory purge

// AFNI_resam_vox           = 1.0  // dimension of voxel (mm) for resampled datasets
// AFNI_resam_anat          = Li   // One of NN, Li, Cu, Bk for Anat resampling mode
// AFNI_resam_func          = NN   // ditto for Func resampling mode
// AFNI_resam_thr           = NN   // for Threshold resampling mode

// AFNI_pbar_posfunc        = True   // will start color pbar as all positive
// AFNI_pbar_sgn_pane_count = 8      // # of panes to start signed color pbar with
// AFNI_pbar_pos_pane_count = 8      // # of panes to start positive color pbar with
// AFNI_pbar_hide           = True   // hide color pbar when it is being altered
// AFNI_PBAR_IMXY           = 200x20 // size of saved pbar color image
// AFNI_PBAR_LOCK           = YES    // lock color pbars together
// AFNI_OVERLAY_ZERO        = NO     // YES==colorize zero values in Overlay dataset

// AFNI_THRESH_LOCK         = YES    // lock threshold sliders together
// AFNI_THRESH_AUTO         = YES    // YES==AFNI guesses a threshold
// AFNI_SLAVE_FUNCTIME      = NO     // YES==time index changes overlay AND underlay
// AFNI_SLAVE_THRTIME       = NO     // YES==time index changes threshold, too

// AFNI_COLORSCALE_DEFAULT  = Spectrum:red_to_blue // initial colorscale for pbar

// AFNI_chooser_listmax     = 20  // max items in a chooser before scrollbars appear
// AFNI_MAX_OPTMENU         = 999 // max # items in an 'option menu'
// AFNI_DONT_MOVE_MENUS     = YES // don't try to move popup menu windows
// AFNI_MENU_COLSIZE        = 30  // max number of entries in a popup menu column
// AFNI_DISABLE_TEAROFF     = NO  // YES==disable the menu 'tearoff' capability

// AFNI_DONT_SORT_ENVIRONMENT = NO   // YES==disable sorting Edit Environment
// AFNI_ORIENT                = RAI  // coordinate order
// AFNI_NOPLUGINS             = NO   // YES==disable plugins
// AFNI_YESPLUGOUTS           = NO   // YES==enable plugouts (POs)
// AFNI_PLUGOUT_TCP_BASE      = 6666 // overrides default TCP/IP socket for plugouts

// AFNI_PLUGINPATH          = /home/rwcox/abin  // directory for plugins
// AFNI_TSPATH              = /home/rwcox/stuff // directory for .1D files
// AFNI_MODELPATH           = /home/rwcox/abin  // directory for NLfim models
// TMPDIR                   = /tmp              // directory for temporary files
// AFNI_GLOBAL_SESSION      = /data/junk        // directory w/datasets you always see

// AFNI_BYTEORDER             = LSB_FIRST // to force .BRIK byte order on output
// AFNI_BYTEORDER_INPUT       = LSB_FIRST // when .HEAD file fails to specify
// AFNI_NO_BYTEORDER_WARNING  = YES       // do NOT print out byte-ordering warning

   AFNI_SESSTRAIL           = 1    // # of directory levels to show in filenames
   AFNI_HINTS               = YES  // YES==turns on popup hints
// AFNI_COMPRESSOR          = gzip // force all .BRIK output to be compressed
   AFNI_AUTOGZIP            = YES  // gzip .BRIK files if it's a good idea
// AFNI_NOMMAP              = YES  // to disable use of mmap() file I/O
   AFNI_LEFT_IS_LEFT        = YES  // YES==show human left on screen left
// AFNI_ENFORCE_ASPECT      = YES  // AFNI to enforce image aspect ratio
   AFNI_ALWAYS_LOCK         = YES  // to start with all AFNI controllers locked
// AFNI_NOREALPATH          = NO   // don't convert filenames to 'real' names
// AFNI_NO_MCW_MALLOC       = NO   // YES==turn off debugging malloc use
   AFNI_FLOATSCAN           = YES  // YES==scan float datasets for errors

// AFNI_NOSPLASH            = NO      // YES==turn off the AFNI splash window
   AFNI_SPLASH_XY           = 444:222 // x:y coordinates for splash window
// AFNI_SPLASHTIME          = 3       // how many seconds splash window stays up

// AFNI_NOTES_DLINES        = 11   // # of text entry lines in the Notes plugin
// AFNI_MARKERS_NOQUAL      = NO   // YES==AFNI won't do 'quality' for markers
   AFNI_NO_ADOPTION_WARNING = YES  // YES==AFNI won't show dataset 'adoption' warnings
   AFNI_VIEW_ANAT_BRICK     = YES  // try to view data without warp-on-demand
   AFNI_VIEW_FUNC_BRICK     = YES  // try to view data without warp-on-demand
   AFNI_tsplotgeom          = 512x384 // size of time series plot windows
   AFNI_PLUGINS_ALPHABETIZE = YES  // whether to alphabetize Plugins menu
// AFNI_VOLREG_EDGING       = 5    // size of edge region to mask out in 3dvolreg
// AFNI_ROTA_ZPAD           = 5    // size of zero padding to use in 3dvolreg

   AFNI_ncolors               =  80   // number of gray levels to use in underlay
   AFNI_gamma                 =  1.7  // gamma correction for underlay intensities
   AFNI_GRAYSCALE_BOT         = 25    // minimum image intensity graylevel (0-255)
   AFNI_IMAGE_MINFRAC         = 0.04  // minimum size of AFNI image window
   AFNI_IMAGE_MAXFRAC         = 0.88  // maximum size of AFNI image window
// AFNI_IMAGE_MINTOMAX        = NO    // YES=start Image window in Min-to-Max mode
// AFNI_IMAGE_CLIPPED         = NO    // YES=start Image window in Clipped mode
// AFNI_IMAGE_CLIPBOT         = 0.25  // bottom level scaling for Clipped mode
// AFNI_IMAGE_CLIPTOP         = 1.0   // top level scaling for Clipped mode
// AFNI_IMAGE_GLOBALRANGE     = NO    // YES=scale Image graylevels in 3D
   AFNI_KEEP_PANNING          = YES   // keep Pan mode turned on in Image windows
// AFNI_IMAGE_LABEL_MODE      = 1     // draw labels in upper left of Image windows
// AFNI_IMAGE_LABEL_SIZE      = 2     // size of labels in Image windows
// AFNI_IMAGE_LABEL_COLOR     = white // color of labels in Image windows
// AFNI_IMAGE_LABEL_SETBACK   = 0.01  // distance from edges for labels
// AFNI_CROSSHAIR_LINES       = YES   // draw crosshairs with lines, not voxels
// AFNI_CROP_ZOOMSAVE         = NO    // how to save zoomed Image windows
// AFNI_IMAGE_ZEROCOLOR       = white // color to show for 0 voxels in Image window
// AFNI_IMAGE_ENTROPY         = 0.2   // image entropy at which to disable 2%-to-98%
// AFNI_IMAGE_ZOOM_NN         = NO    // YES==don't linearly interpolate zoomed images
// AFNI_IMAGE_SAVESQUARE      = NO    // YES==always save images with square pixels
// AFNI_IMAGE_TICK_DIV_IN_MM  = NO    // YES==image tick mark spacings are in mm
// AFNI_IMAGRA_CLOSER         = NO    // YES==Image/Graph button second clicks closes
   AFNI_DEFAULT_OPACITY       = 8     // default opacity level for Image windows
   AFNI_DEFAULT_IMSAVE        = jpg   // default Image window Save format
// AFNI_OLD_PPMTOBMP          = NO    // YES==color quantize BMP output images
   AFNI_VIDEO_DELAY           = 66    // ms between 'v' key image cycling
// AFNI_STROKE_THRESHOLD      = 8     // min mouse movement for grayscale edit
// AFNI_STROKE_AUTOPLOT       = YES   // YES=show grayscale histogram in edit
// AFNI_NO_SIDES_LABELS       = NO    // YES==AFNI won't show 'left=Left' labels

// AFNI_MINC_DATASETS       = YES  // try to read .mnc files as datasets
// AFNI_MINC_FLOATIZE       = YES  // convert .mnc files to floats on input
// AFNI_MINC_SLICESCALE     = YES  // scale each .mnc slice separately
// AFNI_ANALYZE_DATASETS    = YES  // read ANALYZE-7.5 files as datasets
// AFNI_ANALYZE_FLOATIZE    = YES  // convert ANALYZE data to floats on input
// AFNI_ANALYZE_SCALE       = YES  // use the 'funused1' value for scaling
// AFNI_ANALYZE_ORIGINATOR  = YES  // use the SPM ORIGINATOR field
// AFNI_ANALYZE_ORIENT      = LPI  // orientation for ANALYZE datasets
// AFNI_ANALYZE_AUTOCENTER  = NO   // make center of file have (x,y,z)=(0,0,0)?
// AFNI_MPEG_DATASETS       = NO   // YES==try to read .mpg files as datasets
// AFNI_MPEG_GRAYIZE        = NO   // YES==convert .mpg datasets to grayscale

// AFNI_START_SMALL         = NO   // set initial AFNI dataset to smallest one
   AFNI_DISP_SCROLLBARS     = YES  // YES==show scrollbars on Disp panel
// AFNI_VALUE_LABEL         = YES  // show data value label in Define Overlay

// AFNI_SUMA_LINECOLOR      = blue   // color for surface lines from SUMA
// AFNI_SUMA_LINESIZE       = 2      // thickness of lines from SUMA
// AFNI_SUMA_BOXSIZE        = 3      // size of node boxes from SUMA
// AFNI_SUMA_BOXCOLOR       = yellow // color for node boxes from SUMA
// AFNI_SHOW_SURF_POPUPS    = NO     // YES==see info windows from SUMA data transfers
// AFNI_KILL_SURF_POPUPS    = NO     // YES==don't see any info from SUMA data xfers

// AFNI_LOAD_PRINTSIZE      = 100M   // print warning when loading a file bigger than this
// AFNI_VERSION_CHECK       = YES    // NO==disable weekly version check over Web
// AFNI_MOTD_CHECK          = YES    // NO==disable display of Message-of-the-Day
// AFNI_AGIF_DELAY          = 10     // centi-seconds between animated GIF frames
// AFNI_MPEG_FRAMERATE      = 24     // MPEG-1 frame rate for saved movies

// AFNI_SLICE_SPACING_IS_GAP = NO   // YES==fix GE DICOM error
// AFNI_DICOM_RESCALE        = NO   // YES==use DICOM rescale tags
// AFNI_DICOM_WINDOW         = NO   // YES==use DICOM window tags

// AFNI_RESCAN_METHOD       = Add    // add new datasets, don't replace old ones
// AFNI_STARTUP_WARNINGS    = YES    // NO==turn off some warning messages at startup
// AFNI_1D_TIME             = NO     // YES==.1D file columns are the time axis
// AFNI_1D_TIME_TR          = 1.0    // value for TR of a .1D time file
// AFNI_3D_BINARY           = YES    // YES==save .3D files in binary format
// AFNI_DRAW_UNDOSIZE       = 4      // # Mbytes for Draw Dataset undo buffer

// AFNI_DISABLE_CURSORS     = NO     // YES==don't try to change X11 cursors
// AFNI_CLICK_MESSAGE       = NO     // YES==see stupid 'click here to pop down' message
// AFNI_X11_REDECORATE      = YES    // NO==don't try to change X11 window controls
// AFNI_MAX_1DSIZE          = 66666  // max size of .1D files to automatically read
// AFNI_TITLE_LABEL2        = NO     // YES==use dataset 'label2' field in titlebar
// AFNI_EDGIZE_OVERLAY      = NO     // YES==show only edges of color overlay blobs
// AFNI_DONT_LOGFILE        = NO     // YES==don't log AFNI programs to ~/.afni.log
// AFNI_WRITE_NIML          = NO     // YES==write .HEAD files in NIML format
// AFNI_TTATLAS_CAUTION     = YES    // NO==disable warning message in 'whereami'
// AFNI_RESCAN_AT_SWITCH    = YES    // YES==rescan for new datasets when switching
// AFNI_DATASET_BROWSE      = YES    // YES==dataset item selection acts immediately
// AFNI_OVERLAY_ONTOP       = YES    // YES==put 'Overlay' button above 'Underlay'

// AFNI_NIML_START          = YES         // start NIML listening when AFNI starts
// NIML_TRUSTHOST_01        = 192.168.0.1 // IP address of host to trust for NIML

   AFNI_plug_drawdset_butcolor = #992066  // For the Plugins menu.
// AFNI_plug_histog_butcolor   = #663199  // Colors are drawn from
   AFNI_plug_crender_butcolor  = #cc1033  // the RGBCYC map in afni.h

// AFNI_hotcolor               = navyblue // color for 'Done', 'Set', etc.

// AFNI_NO_NEGATIVES_WARNING  = NO   // YES==to3d won't warn about negative values
// AFNI_TO3D_ZPAD             = 0    // # of zero padding slices to add in to3d
// AFNI_TRY_DICOM_LAST        = NO   // YES==DICOM is last image format tried in to3d
// AFNI_ALLOW_MILLISECONDS    = NO   // YES==allow 'ms' time units in to3d

// AFNI_STARTUP_SCRIPT        = /home/rwcox/.afni_script // script to run at AFNI start
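
// Example (illustrative values): these variables can also be set in the
// shell before starting AFNI, e.g., in csh/tcsh syntax:
//   setenv AFNI_SPLASHTIME 1
//   setenv AFNI_DEFAULT_IMSAVE png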



AFNI file: AFNI.Xdefaults
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!  How to set up AFNI defaults using X11:
!!   a) put lines like this in your .Xdefaults file in your home directory;
!!   b) edit them to fit your needs;
!!   c) log out and log back in (or use the command "xrdb -merge .Xdefaults").
!!
!!  The values in this file are the values "hard-wired" into AFNI, and
!!  so you only need to put into the .Xdefaults file those values you
!!  wish to change.
!!
!!  The resources up to and including AFNI*gamma also apply
!!  to the program TO3D -- all those after are specific to AFNI.
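!!
!! Example (illustrative value): to change only the "hot" button color,
!! your .Xdefaults needs just the single line
!!   AFNI*hotcolor:	blue3
!! followed by "xrdb -merge .Xdefaults" in a terminal.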

!! font to use in most widgets
AFNI*fontList:		9x15bold=charset1

!! background color in most widgets
AFNI*background:	gray40
AFNI*borderColor:	gray40

!! background color in most popup and pulldown menu panes
!! (this choice gives some contrast with the gray40 overall background)
AFNI*menu*background:	black

!! foreground color in most widgets
AFNI*foreground:	yellow

!! color in the "trough" of the slider controls (images and threshold)
AFNI*troughColor:	green

!! color for quit and other "hot" buttons
AFNI*hotcolor:		red3

!! gray/color levels used for image display
!! (overridden by the -ncol option)
AFNI*ncolors:		100

!! gamma correction for the screen
!! (overridden by the -gamma option)
AFNI*gamma:		1.0

!! This option is actually only for TO3D;
!! it specifies the initial value to put in the
!! field-of-view widget (in mm).
AFNI*init_fov:		240.0

!!****
!!**** Resources below here apply only to AFNI, not to TO3D
!!****
!! auto-purge datasets from memory? (True or False)
!! (overridden by the -purge option)
AFNI*purge:		False

!! Whether to use the "big" Talairach box, which
!! extends 10 mm more inferior than the AFNI 1.0x box
!! to accommodate the cerebellum.
AFNI*tlrc_big:		True

!! Whether or not to use periodic montage layouts.
AFNI*montage_periodic:	True

!! Use these to set the colors used in the BHelp popup
!! AFNI*help*background:	#ffffaa
!! AFNI*help*foreground:	black

!! Set this to False to turn off the window manager
!! borders on the BHelp popup
AFNI*help*helpborder:	True

!! number of slices to scroll in image viewers when
!! Shift key is pressed along with arrowpad button
AFNI*bigscroll:		5

!! default resampling modes, from the set NN (nearest
!! neighbor), Li (linear), Cu (cubic), Bk (blocky),
!! and voxel dimension (always cubical, in mm)
AFNI*resam_anat:	Li
AFNI*resam_func:	NN
AFNI*resam_vox:		1.0

!! Whether to pop a list chooser down on double click or not
!! "Set"   means double click is the same as the Set button
!!           (and will pop the chooser down)
!! "Apply" means double click is the same as the Apply button
!!           (and will keep the chooser up)
!!
AFNI*chooser_doubleclick:	Set

!! For scrolling list choosers (the "Switch" buttons),
!! defines the max number of entries to display in
!! a window before attaching scrollbars.
!! (N.B.: if the number of entries to choose between
!!        is only a few more than this, then the
!!        window will be expanded and no scrollbars used.)
AFNI*chooser_listmax:		10

!! Initial dimensions of graphing region, in pixels
AFNI*graph_width:	512
AFNI*graph_height:	512

!! Initial number of points to ignore in graphs and FIMs
!! (overridden by the -ignore option)
AFNI*fim_ignore:	0

!! number of overlay colors to allocate: from 2 to 99
AFNI*ncolovr:		20

!! Definitions of colors (RGB or color database strings).
!! Note that color number 0 means "none" and can't be redefined.
!! These color indices (1 .. ncolovr) can be used in various places below.

!! Note that if you just want to add new colors, you can
!!  a) set AFNI*ncolovr to a larger value, and
!!  b) supply "ovdef" and "ovlab" values for each new color index
!!       from 21 .. ncolovr (see the example below)
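!!
!! Example (illustrative values): to add a single new color as index 21,
!! you could use
!!   AFNI*ncolovr:	21
!!   AFNI*ovdef21:	purple4
!!   AFNI*ovlab21:	dk-purple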

AFNI*ovdef01:	#ffff00
AFNI*ovdef02:	#ffcc00
AFNI*ovdef03:	#ff9900
AFNI*ovdef04:	#ff6900
AFNI*ovdef05:	#ff4400
AFNI*ovdef06:	#ff0000
AFNI*ovdef07:	#0000ff
AFNI*ovdef08:	#0044ff
AFNI*ovdef09:	#0069ff
AFNI*ovdef10:	#0099ff
AFNI*ovdef11:	#00ccff
AFNI*ovdef12:	#00ffff
AFNI*ovdef13:	green
AFNI*ovdef14:	limegreen
AFNI*ovdef15:	violet
AFNI*ovdef16:	hotpink
AFNI*ovdef17:	white
AFNI*ovdef18:	#dddddd
AFNI*ovdef19:	#bbbbbb
AFNI*ovdef20:	black

!! Labels used for colors in "choosers"
!! (only 1st 9 characters are used).

AFNI*ovlab01:	yellow
AFNI*ovlab02:	yell-oran
AFNI*ovlab03:	oran-yell
AFNI*ovlab04:	orange
AFNI*ovlab05:	oran-red
AFNI*ovlab06:	red
AFNI*ovlab07:	dk-blue
AFNI*ovlab08:	blue
AFNI*ovlab09:	lt-blue1
AFNI*ovlab10:	lt-blue2
AFNI*ovlab11:	blue-cyan
AFNI*ovlab12:	cyan
AFNI*ovlab13:	green
AFNI*ovlab14:	limegreen
AFNI*ovlab15:	violet
AFNI*ovlab16:	hotpink
AFNI*ovlab17:	white
AFNI*ovlab18:	gry-dd
AFNI*ovlab19:	gry-bb
AFNI*ovlab20:	black

!! index of color used for crosshairs at startup
AFNI*ovcrosshair:	13

!! color used for primary marker at startup
AFNI*ovmarksprimary:	17

!! color used for secondary markers at startup
AFNI*ovmarkssecondary:	14

!! pixel width for markers at startup
AFNI*markssize:		8

!! pixel gap for markers at startup
AFNI*marksgap:		3

!! pixel gap for crosshairs at startup
AFNI*crosshairgap:	5

!! Used to set default colors for graph windows.
!! The values are positive color indices, or
!! can be  -1 == brightest color in the overlay list
!!         -2 == darkest color
!!         -3 == reddest color
!!         -4 == greenest color
!!         -5 == bluest color

!! boxes  == Outlines drawn around each graph
!! backg  == Background
!! grid   == Uniformly spaced vertical lines in each graph
!! text   == Text (except for value under current time index)
!! data   == Data timeseries graph
!! ideal  == Ideal timeseries graph
!!             (also used to indicate the current time index)
!! ort    == Ort timeseries graph
!! ignore == Used to indicate which points are ignored for FIM
!! dplot  == Double plot overlay color
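
!! For example, with the values below, graph_ideal_color = -3 draws the
!! ideal timeseries (and the current time index) in the reddest overlay
!! color, while graph_backg_color = -1 uses the brightest overlay color.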

AFNI*graph_boxes_color:  -2
AFNI*graph_backg_color:  -1
AFNI*graph_grid_color:    1
AFNI*graph_text_color:   -2
AFNI*graph_data_color:   -2
AFNI*graph_ideal_color:  -3
AFNI*graph_ort_color:    -4
AFNI*graph_ignore_color: -5
AFNI*graph_dplot_color:  -3

!! Used to set whether certain types of
!! lines in the graph windows are thick or
!! not.  Use "0" to indicate "thin" and
!! "1" to indicate "thick".

AFNI*graph_boxes_thick:   0
AFNI*graph_grid_thick:    0
AFNI*graph_data_thick:    0
AFNI*graph_ideal_thick:   0
AFNI*graph_ort_thick:     0
AFNI*graph_dplot_thick:   0

!! Used to set the gap between sub-graphs

AFNI*graph_ggap:          0

!! Used to set the font for drawing text into
!! graph windows.  The default font is chosen
!! from a list "tfont_hopefuls" in the source file
!! display.h.  You can find out what fonts are
!! available on your system by using the command
!! "xlsfonts | more"

AFNI*gfont:               7x14

!! Used to set the default fim polort order

AFNI*fim_polort:          1

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! font to use in pbar widgets
AFNI*pbar*fontList:  7x13bold=charset1

!! A smaller font for pbar
!!AFNI*pbar*fontList:  6x10=charset1

!! start pbar in positive mode (True or False)
AFNI*pbar_posfunc:	False

!! hide the redrawing process when changing pbar panes (True or False)
AFNI*pbar_hide:		False

!! initial number of panes in the pbar (pos and sgn modes)
AFNI*pbar_pos_pane_count:	8
AFNI*pbar_sgn_pane_count:	9

!! Set the color "pbar" initial thresholds and colors
!!
!!  _pos    --> positive only pbar (range from 1.0 to  0.0)
!!  _sgn    --> signed pbar        (range from 1.0 to -1.0)
!!
!!  _panexx --> data for case with xx panes (from 02 to 10)
!!
!!  _thryy  --> yy'th threshold:  00 is top (always 1.0),
!!                                01 is next to top, up to yy = xx
!!                                (always 0.0 for pos_, -1.0 for sgn_)
!!
!!  _ovyy   --> yy'th color index: 00 is top pane, up to yy = xx-1
!!
!! The thr values must decrease monotonically with yy.
!! The ov values must be color indices from the ovdef table
!! (including color 0 --> no color).
!!
!! N.B.: If you supply values for a particular xx, you must
!!       supply ALL the values (_thr and _ov), or AFNI will
!!       ignore these values and use its built in defaults
!!       for that number of panes.
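
!! For example, the pos_pane02 values below split the bar at 0.5 into
!! two panes: the upper pane gets overlay color 1 (yellow) and the
!! lower pane gets color 0 (no color shown).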

AFNI*pbar_pos_pane02_thr00:	1.0
AFNI*pbar_pos_pane02_thr01:	0.5
AFNI*pbar_pos_pane02_thr02:	0.0

AFNI*pbar_pos_pane02_ov00:	1
AFNI*pbar_pos_pane02_ov01:	0

AFNI*pbar_pos_pane03_thr00:	1.0
AFNI*pbar_pos_pane03_thr01:	0.67
AFNI*pbar_pos_pane03_thr02:	0.33
AFNI*pbar_pos_pane03_thr03:	0.0

AFNI*pbar_pos_pane03_ov00:	1
AFNI*pbar_pos_pane03_ov01:	6
AFNI*pbar_pos_pane03_ov02:	0

AFNI*pbar_pos_pane04_thr00:	1.0
AFNI*pbar_pos_pane04_thr01:	0.75
AFNI*pbar_pos_pane04_thr02:	0.50
AFNI*pbar_pos_pane04_thr03:	0.25
AFNI*pbar_pos_pane04_thr04:	0.00

AFNI*pbar_pos_pane04_ov00:	1
AFNI*pbar_pos_pane04_ov01:	4
AFNI*pbar_pos_pane04_ov02:	6
AFNI*pbar_pos_pane04_ov03:	0

AFNI*pbar_pos_pane05_thr00:	1.0
AFNI*pbar_pos_pane05_thr01:	0.80
AFNI*pbar_pos_pane05_thr02:	0.60
AFNI*pbar_pos_pane05_thr03:	0.40
AFNI*pbar_pos_pane05_thr04:	0.20
AFNI*pbar_pos_pane05_thr05:	0.00

AFNI*pbar_pos_pane05_ov00:	1
AFNI*pbar_pos_pane05_ov01:	3
AFNI*pbar_pos_pane05_ov02:	5
AFNI*pbar_pos_pane05_ov03:	6
AFNI*pbar_pos_pane05_ov04:	0

AFNI*pbar_pos_pane06_thr00:	1.0
AFNI*pbar_pos_pane06_thr01:	0.84
AFNI*pbar_pos_pane06_thr02:	0.67
AFNI*pbar_pos_pane06_thr03:	0.50
AFNI*pbar_pos_pane06_thr04:	0.33
AFNI*pbar_pos_pane06_thr05:	0.16
AFNI*pbar_pos_pane06_thr06:	0.00

AFNI*pbar_pos_pane06_ov00:	1
AFNI*pbar_pos_pane06_ov01:	2
AFNI*pbar_pos_pane06_ov02:	3
AFNI*pbar_pos_pane06_ov03:	5
AFNI*pbar_pos_pane06_ov04:	6
AFNI*pbar_pos_pane06_ov05:	0

AFNI*pbar_pos_pane07_thr00:	1.0
AFNI*pbar_pos_pane07_thr01:	0.90
AFNI*pbar_pos_pane07_thr02:	0.75
AFNI*pbar_pos_pane07_thr03:	0.60
AFNI*pbar_pos_pane07_thr04:	0.45
AFNI*pbar_pos_pane07_thr05:	0.30
AFNI*pbar_pos_pane07_thr06:	0.15
AFNI*pbar_pos_pane07_thr07:	0.00

AFNI*pbar_pos_pane07_ov00:	1
AFNI*pbar_pos_pane07_ov01:	2
AFNI*pbar_pos_pane07_ov02:	3
AFNI*pbar_pos_pane07_ov03:	4
AFNI*pbar_pos_pane07_ov04:	5
AFNI*pbar_pos_pane07_ov05:	6
AFNI*pbar_pos_pane07_ov06:	0

AFNI*pbar_pos_pane08_thr00:	1.0
AFNI*pbar_pos_pane08_thr01:	0.80
AFNI*pbar_pos_pane08_thr02:	0.70
AFNI*pbar_pos_pane08_thr03:	0.60
AFNI*pbar_pos_pane08_thr04:	0.50
AFNI*pbar_pos_pane08_thr05:	0.40
AFNI*pbar_pos_pane08_thr06:	0.30
AFNI*pbar_pos_pane08_thr07:	0.15
AFNI*pbar_pos_pane08_thr08:	0.00

AFNI*pbar_pos_pane08_ov00:	1
AFNI*pbar_pos_pane08_ov01:	2
AFNI*pbar_pos_pane08_ov02:	3
AFNI*pbar_pos_pane08_ov03:	4
AFNI*pbar_pos_pane08_ov04:	5
AFNI*pbar_pos_pane08_ov05:	6
AFNI*pbar_pos_pane08_ov06:	16
AFNI*pbar_pos_pane08_ov07:	0

AFNI*pbar_pos_pane09_thr00:	1.0
AFNI*pbar_pos_pane09_thr01:	0.90
AFNI*pbar_pos_pane09_thr02:	0.80
AFNI*pbar_pos_pane09_thr03:	0.70
AFNI*pbar_pos_pane09_thr04:	0.60
AFNI*pbar_pos_pane09_thr05:	0.50
AFNI*pbar_pos_pane09_thr06:	0.25
AFNI*pbar_pos_pane09_thr07:	0.15
AFNI*pbar_pos_pane09_thr08:	0.05
AFNI*pbar_pos_pane09_thr09:	0.00

AFNI*pbar_pos_pane09_ov00:	1
AFNI*pbar_pos_pane09_ov01:	2
AFNI*pbar_pos_pane09_ov02:	3
AFNI*pbar_pos_pane09_ov03:	4
AFNI*pbar_pos_pane09_ov04:	5
AFNI*pbar_pos_pane09_ov05:	6
AFNI*pbar_pos_pane09_ov06:	16
AFNI*pbar_pos_pane09_ov07:	15
AFNI*pbar_pos_pane09_ov08:	0

AFNI*pbar_pos_pane10_thr00:	1.0
AFNI*pbar_pos_pane10_thr01:	0.90
AFNI*pbar_pos_pane10_thr02:	0.80
AFNI*pbar_pos_pane10_thr03:	0.70
AFNI*pbar_pos_pane10_thr04:	0.60
AFNI*pbar_pos_pane10_thr05:	0.50
AFNI*pbar_pos_pane10_thr06:	0.40
AFNI*pbar_pos_pane10_thr07:	0.30
AFNI*pbar_pos_pane10_thr08:	0.20
AFNI*pbar_pos_pane10_thr09:	0.10
AFNI*pbar_pos_pane10_thr10:	0.00

AFNI*pbar_pos_pane10_ov00:	1
AFNI*pbar_pos_pane10_ov01:	2
AFNI*pbar_pos_pane10_ov02:	3
AFNI*pbar_pos_pane10_ov03:	4
AFNI*pbar_pos_pane10_ov04:	5
AFNI*pbar_pos_pane10_ov05:	6
AFNI*pbar_pos_pane10_ov06:	16
AFNI*pbar_pos_pane10_ov07:	15
AFNI*pbar_pos_pane10_ov08:	7
AFNI*pbar_pos_pane10_ov09:	0

AFNI*pbar_sgn_pane02_thr00:	1.0
AFNI*pbar_sgn_pane02_thr01:	0.0
AFNI*pbar_sgn_pane02_thr02:	-1.0

AFNI*pbar_sgn_pane02_ov00:	1
AFNI*pbar_sgn_pane02_ov01:	11

AFNI*pbar_sgn_pane03_thr00:	1.0
AFNI*pbar_sgn_pane03_thr01:	0.05
AFNI*pbar_sgn_pane03_thr02:	-0.05
AFNI*pbar_sgn_pane03_thr03:	-1.0

AFNI*pbar_sgn_pane03_ov00:	1
AFNI*pbar_sgn_pane03_ov01:	0
AFNI*pbar_sgn_pane03_ov02:	11

AFNI*pbar_sgn_pane04_thr00:	1.0
AFNI*pbar_sgn_pane04_thr01:	0.50
AFNI*pbar_sgn_pane04_thr02:	0.0
AFNI*pbar_sgn_pane04_thr03:	-0.50
AFNI*pbar_sgn_pane04_thr04:	-1.0

AFNI*pbar_sgn_pane04_ov00:	1
AFNI*pbar_sgn_pane04_ov01:	4
AFNI*pbar_sgn_pane04_ov02:	8
AFNI*pbar_sgn_pane04_ov03:	11

AFNI*pbar_sgn_pane05_thr00:	1.0
AFNI*pbar_sgn_pane05_thr01:	0.50
AFNI*pbar_sgn_pane05_thr02:	0.05
AFNI*pbar_sgn_pane05_thr03:	-0.05
AFNI*pbar_sgn_pane05_thr04:	-0.50
AFNI*pbar_sgn_pane05_thr05:	-1.0

AFNI*pbar_sgn_pane05_ov00:	1
AFNI*pbar_sgn_pane05_ov01:	4
AFNI*pbar_sgn_pane05_ov02:	0
AFNI*pbar_sgn_pane05_ov03:	8
AFNI*pbar_sgn_pane05_ov04:	11

AFNI*pbar_sgn_pane06_thr00:	1.0
AFNI*pbar_sgn_pane06_thr01:	0.66
AFNI*pbar_sgn_pane06_thr02:	0.33
AFNI*pbar_sgn_pane06_thr03:	0.00
AFNI*pbar_sgn_pane06_thr04:	-0.33
AFNI*pbar_sgn_pane06_thr05:	-0.66
AFNI*pbar_sgn_pane06_thr06:	-1.0

AFNI*pbar_sgn_pane06_ov00:	1
AFNI*pbar_sgn_pane06_ov01:	3
AFNI*pbar_sgn_pane06_ov02:	5
AFNI*pbar_sgn_pane06_ov03:	7
AFNI*pbar_sgn_pane06_ov04:	9
AFNI*pbar_sgn_pane06_ov05:	11

AFNI*pbar_sgn_pane07_thr00:	1.0
AFNI*pbar_sgn_pane07_thr01:	0.66
AFNI*pbar_sgn_pane07_thr02:	0.33
AFNI*pbar_sgn_pane07_thr03:	0.05
AFNI*pbar_sgn_pane07_thr04:	-0.05
AFNI*pbar_sgn_pane07_thr05:	-0.33
AFNI*pbar_sgn_pane07_thr06:	-0.66
AFNI*pbar_sgn_pane07_thr07:	-1.0

AFNI*pbar_sgn_pane07_ov00:	1
AFNI*pbar_sgn_pane07_ov01:	3
AFNI*pbar_sgn_pane07_ov02:	5
AFNI*pbar_sgn_pane07_ov03:	0
AFNI*pbar_sgn_pane07_ov04:	7
AFNI*pbar_sgn_pane07_ov05:	9
AFNI*pbar_sgn_pane07_ov06:	11

AFNI*pbar_sgn_pane08_thr00:	1.0
AFNI*pbar_sgn_pane08_thr01:	0.75
AFNI*pbar_sgn_pane08_thr02:	0.50
AFNI*pbar_sgn_pane08_thr03:	0.25
AFNI*pbar_sgn_pane08_thr04:	0.00
AFNI*pbar_sgn_pane08_thr05:	-0.25
AFNI*pbar_sgn_pane08_thr06:	-0.50
AFNI*pbar_sgn_pane08_thr07:	-0.75
AFNI*pbar_sgn_pane08_thr08:	-1.00

AFNI*pbar_sgn_pane08_ov00:	1
AFNI*pbar_sgn_pane08_ov01:	2
AFNI*pbar_sgn_pane08_ov02:	4
AFNI*pbar_sgn_pane08_ov03:	5
AFNI*pbar_sgn_pane08_ov04:	8
AFNI*pbar_sgn_pane08_ov05:	9
AFNI*pbar_sgn_pane08_ov06:	10
AFNI*pbar_sgn_pane08_ov07:	11

AFNI*pbar_sgn_pane09_thr00:	1.0
AFNI*pbar_sgn_pane09_thr01:	0.75
AFNI*pbar_sgn_pane09_thr02:	0.50
AFNI*pbar_sgn_pane09_thr03:	0.25
AFNI*pbar_sgn_pane09_thr04:	0.05
AFNI*pbar_sgn_pane09_thr05:	-0.05
AFNI*pbar_sgn_pane09_thr06:	-0.25
AFNI*pbar_sgn_pane09_thr07:	-0.50
AFNI*pbar_sgn_pane09_thr08:	-0.75
AFNI*pbar_sgn_pane09_thr09:	-1.00

AFNI*pbar_sgn_pane09_ov00:	1
AFNI*pbar_sgn_pane09_ov01:	2
AFNI*pbar_sgn_pane09_ov02:	4
AFNI*pbar_sgn_pane09_ov03:	5
AFNI*pbar_sgn_pane09_ov04:	0
AFNI*pbar_sgn_pane09_ov05:	8
AFNI*pbar_sgn_pane09_ov06:	9
AFNI*pbar_sgn_pane09_ov07:	10
AFNI*pbar_sgn_pane09_ov08:	11

AFNI*pbar_sgn_pane10_thr00:	1.0
AFNI*pbar_sgn_pane10_thr01:	0.80
AFNI*pbar_sgn_pane10_thr02:	0.60
AFNI*pbar_sgn_pane10_thr03:	0.40
AFNI*pbar_sgn_pane10_thr04:	0.20
AFNI*pbar_sgn_pane10_thr05:	0.00
AFNI*pbar_sgn_pane10_thr06:	-0.20
AFNI*pbar_sgn_pane10_thr07:	-0.40
AFNI*pbar_sgn_pane10_thr08:	-0.60
AFNI*pbar_sgn_pane10_thr09:	-0.80
AFNI*pbar_sgn_pane10_thr10:	-1.00

AFNI*pbar_sgn_pane10_ov00:	1
AFNI*pbar_sgn_pane10_ov01:	2
AFNI*pbar_sgn_pane10_ov02:	3
AFNI*pbar_sgn_pane10_ov03:	4
AFNI*pbar_sgn_pane10_ov04:	5
AFNI*pbar_sgn_pane10_ov05:	7
AFNI*pbar_sgn_pane10_ov06:	8
AFNI*pbar_sgn_pane10_ov07:	9
AFNI*pbar_sgn_pane10_ov08:	10
AFNI*pbar_sgn_pane10_ov09:	11

!! End of MCW AFNI X11 Resources
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!