The CMS User Analysis Farm (UAF).




Introduction






[Diagram of the CMS interactive farm]


The User Analysis Farm (UAF) is part of the Tier 1 facility at Fermilab. The UAF is a small farm running the Linux operating system that provides interactive and batch computing for users. Load balancing, resource and process management are realized with FBSNG, the batch system developed at Fermilab. The system consists of scripts on top of FBSNG which control the interactive login, obtain kerberos credentials and securely forward X-window socket connections. We chose this solution for several reasons, among them that the same FBSNG infrastructure then handles load balancing and resource management for both interactive sessions and batch jobs.
Currently the system consists of 8 popcrns running the redhat 6.x operating system and 10 hotdogs running redhat 7.x; a summary of the properties of these nodes is available on the node summary page. Low-level monitoring of the hardware (CPU usage, memory usage, etc.) is provided by GANGLIA, which has a dedicated page for the UAF. The status of the batch system can be checked on the FBSNG web page; below is a snapshot of what you will find there in the Queues section.

FBSNG on the web
Farm:   CMS_User_Cluster
Time:   Thu Jun 12 11:43:58 2003
Report: List of queues

Name         Status  Default Process Type  Share  Prio   Waiting  Ready  Running  Total
BENCHMARK    OK      gworker               10.00     0         0      0        0      0
IOQ          OK      IO                    10.00     0         0      0        0      0
interactive  OK      iaworker              10.00     0         0      0        1      1
long         OK      batch                  1.00     0         0      0        0      0
medium       OK      batch                  1.00  1000         0      0        0      0
short        OK      short_batch            1.00  5000         0      0        0      0

(FBSNG httpd version 0.1)

Below is the table of queues relevant to the user. In general, the longer the time limit on a queue, the lower its priority, so that short jobs have a chance to run. The BENCHMARK queue is not accessible to general users.

Queue        Description
IOQ          Used to run jobs on the IO node (bigmac), e.g. to stage in files from mass storage.
interactive  The queue in which the interactive job runs. There is a time limit of 24 hours, after which an interactive job is terminated.
long         For e.g. long Monte Carlo productions. There is no time limit on this queue.
medium       For jobs with a duration of up to 8 hours.
short        For short jobs which last up to an hour.



How to get access to the cluster

We assume that you already have a Fermilab ID, an FNALU account and a kerberos principal; if not, please consult the Fermilab computing web pages on how to obtain them.
If you are set, please send mail to Hans Wenzel to request a special kerberos principal. He will let you know when it has been created and installed on the farm. To get into the cluster just ssh into it. This will trigger the creation of an interactive job (a batch job in the interactive queue) on one of the worker nodes of the cluster:

ssh -t bigmac.fnal.gov +r RH7              to run on a machine with the redhat 7.x OS

ssh -t bigmac.fnal.gov +r RH6              to run on a machine with the redhat 6.x OS

if you don't care about the version of the operating system:

 ssh -t bigmac.fnal.gov
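
Once the interactive job has started you land on one of the worker nodes. To check which node and which OS release you got, a quick check (assuming the standard redhat release file is present) is:

hostname                   # which worker node (popcrn/hotdog) you are on
cat /etc/redhat-release    # shows whether this is a redhat 6.x or 7.x system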



How to use the FBSNG batch system

First set up FBSNG and kerberos:
setup fbsng 
setup kerberos
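
These setup commands assume that the Fermilab UPS environment has already been initialized in your login shell. If it has not, a minimal sketch (using the AFS setups file that also appears in the physics example further down this page) is:

source /afs/fnal.gov/ups/etc/setups.csh    # initialize UPS (path as used later on this page)
setup fbsng                                # FBSNG client commands (fbs exec, fbs submit, ...)
setup kerberos
setup dfarm                                # optional here; needed for the dfarm examples below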


Below are some examples of how to start an FBSNG job from the command line.

fbs exec -q  medium -n 10 -r RH7 -o /dev/null -e /dev/null /bin/sleep 100
fbs exec -q  medium -n 10 -r RH6 -o /dev/null -e /dev/null /bin/sleep 100
fbs exec -q  short -n 20  -o /dev/null -e /dev/null /bin/sleep 100

In the first example we submit /bin/sleep 100 to the medium queue, requesting 10 processes on nodes that provide the RH7 resource (redhat 7.x) and sending stdout and stderr to /dev/null. The second example is identical, the only difference being that we request the job to run on nodes providing the RH6 resource. In the third example we don't care about the OS version and request 20 processes to run simultaneously in the short queue.

Instead of using the command line we can also use a job description file (".jdf") to define the job. This is a much more powerful way to control batch jobs, as we will see in the physics examples. The following will start a batch job in the medium queue. It will start 10 processes simultaneously, which in this case just sleep for 100 seconds. (Note: if you request more processes than there are CPUs/resources available, your job will never start.) The .jdf file 'test1.jdf' below has only one section, called main, but a jdf file can have several sections which can depend on each other. This jdf file requires that the resource RH7 be provided, so it will only run on a redhat 7.x based system. After the job has completed (or failed) the batch system will send a mail message with summary information to user@fnal.gov. In this example stdout and stderr are directed to files in the /storage/data/fbs-logs/ directory.
Also, since the farm is shared between interactive and batch use, we set the nice level of the job. Currently there is no way to set the nice level of the queue itself, so we count on the users to be nice.

#
# File: test1.jdf
#
SECTION main
        EXEC=/bin/sleep 100
        QUEUE=medium
        NICE=19
        NUMPROC=10
        PROC_RESOURCES=RH7
        MAILTO=wenzel@fnal.gov
        STDOUT=/storage/data/fbs-logs/
        STDERR=/storage/data/fbs-logs/

You can submit the job with the following command:

fbs submit test1.jdf

and the batch system responds with:
Farm Job NNN has been submitted...

where NNN is the job number. You can check the status of the job by typing

fbs status NNN


or just check the status of the job using the web page. (Just click on jobs and then on your job).
To kill a job:

fbs kill  NNN
 

After the job has finished you should get an email, and you should find the stdout and stderr of your job in the directory specified in the .jdf file. Note that there is one file for stdout (.out) and one for stderr (.err) for each of the processes; in this case we ran a job with 10 processes.

ls /storage/data/fbs-logs/*NNN*
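
To take a quick look at what each process wrote, something like the following should work (a sketch; NNN is your job number and the exact file names may differ slightly):

tail /storage/data/fbs-logs/*NNN*.out      # last lines of every process's stdout
cat  /storage/data/fbs-logs/*NNN*.err      # stderr of every process (hopefully empty)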

The next jdf file 'test2.jdf' is a refinement of the previous one. Here we provide the number of processes that we want to run as an argument when submitting the .jdf file. Also, we don't care which operating system we run on, so we omit PROC_RESOURCES=RH7. Instead of running a typical unix command (/bin/sleep) you can run your own executable, provided it is executable and resides in a directory visible on all worker nodes (e.g. under /storage/data):
#
# File: test2.jdf
#
SECTION main
        EXEC=/storage/data/wenzel/cmsim/test.csh
        QUEUE=medium
        NICE=19
        NUMPROC=$(1)
        MAILTO=wenzel@fnal.gov
        STDOUT=/storage/data/fbs-logs/
        STDERR=/storage/data/fbs-logs/


where 'test.csh',
a little shell script that counts up to 100, looks like this:

#!/usr/local/bin/tcsh -f
#
# File: test.csh
#
set i=1
while ($i<101)
  echo $i
  @ i++
end


To submit this job with 15  processes just type:

fbs submit test2.jdf 15
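
Since all NUMPROC processes run the same EXEC line, you often want each process to do slightly different work. FBSNG sets the environment variable FBS_PROC_NO to the index of the process within the job (it is used this way in test_mc.csh further down). A sketch of a variant of test.csh that uses it (the file name test_procno.csh is just illustrative):

#!/usr/local/bin/tcsh -f
#
# File: test_procno.csh (illustrative name)
# Each of the NUMPROC processes prints its own index while counting to 100.
#
echo "this is process number $FBS_PROC_NO"
set i=1
while ($i<101)
  echo "$FBS_PROC_NO : $i"
  @ i++
end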




Disk space and Mass Storage

Disk Space


A raid array is directly attached to bigmac; it is NFS-exported to all worker nodes and mounted as /storage/data.


Temporary scratch space

The batch system creates a local temporary scratch area on the node on which your interactive session is running. The shell variable FBS_SCRATCH is set to point to this disk area.

[wenzel@hotdog58 cmsim]$ echo $FBS_SCRATCH
/data1/fbsng_scratch/730.Exec.1
[wenzel@hotdog58 cmsim]$ ls $FBS_SCRATCH
[wenzel@hotdog58 ~]$ cd $FBS_SCRATCH
[wenzel@hotdog58 730.Exec.1]$ touch junk
[wenzel@hotdog58 730.Exec.1]$ ls
junk


Note this scratch area will be automatically deleted when your interactive job ends!
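
Because the scratch area is removed, copy anything you want to keep to permanent storage before the job (or the interactive session) ends. A minimal sketch, assuming an output file result.dat and a personal directory under /storage/data (both just illustrative):

cd $FBS_SCRATCH
cp result.dat /storage/data/$USER/          # keep it on the NFS-mounted raid array, or
dfarm put result.dat /$USER/result.dat      # store it in dfarm (after 'setup dfarm', see below)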


How to use dfarm

Currently we use dfarm, a product developed at Fermilab which allows us to utilize the distributed farm disk storage. This way the data disks scattered over the farm become a manageable, usable resource; currently ~0.7 TB of additional disk space is available on the CMS interactive farm. dfarm is a truly distributed system which shifts the load from the I/O node to the worker nodes; in most cases an upload is local and therefore fast and cheap in terms of computing resources.
Before using dfarm do:

setup dfarm


Below are the basic UNIX file system commands, operating in the virtual file name space:
       
dfarm ls <path>|<wildcard>
dfarm mkdir <vpath>
dfarm rmdir <vpath>
dfarm put [-v] [-t <timeout>] [-n <ncopies>] <local path> <vpath>
dfarm get [-v] [-t <timeout>] <vpath> <local path>
dfarm rm <vpath>|<wildcard>
dfarm ln <vpath> <local path>
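
For example, a short session that creates a directory in the virtual name space, uploads a file with two replicas and retrieves it again could look like this (paths and file names are illustrative):

dfarm mkdir /wenzel/test
dfarm put -n 2 myfile.dat /wenzel/test/myfile.dat     # store the file with 2 copies
dfarm ls /wenzel/test
dfarm get /wenzel/test/myfile.dat ./myfile_copy.dat
dfarm rm /wenzel/test/myfile.dat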
             

In addition there are commands specific to dfarm:
                    
    dfarm info <vpath>                         Prints where the file is stored

e.g

    dfarm info /wenzel/zprime/zprime_5010.lis

gives:
Path: /wenzel/zprime/zprime_5010.lis
Type: f
Owner: wenzel
Protection: rwr-
Attributes:
Created: Fri Jun  6 18:16:54 2003
Size: 723274
Stored on: hotdog55

Below is a table with some of the commands specific to dfarm:

dfarm info <vpath>                        Prints where the file is stored (see above)
dfarm ping                                Prints list of available disk farm nodes and their load (response time, transactions)
dfarm stat <node>                         Prints status of individual farm node (disk space availability)
dfarm help                                Prints out a list of all available commands
dfarm repfile [-n <ncopies>] <vfs file>   Makes <ncopies> additional replicas of the specified file
dfarm capacity [-mMGfcu]                  Prints out the current capacity of the system; if a node is down it doesn't count
dfarm usage <user>                        Prints out how much disk space the user <user> is using

If you want to access dfarm remotely you can connect to it via kerberized ftp:

ftp bigmac.fnal.gov 2812
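
Inside the ftp session the usual ftp commands apply; a sketch of a typical transfer (directory and file names are illustrative):

ftp bigmac.fnal.gov 2812
# at the ftp> prompt:
#   cd /wenzel/zprime
#   get zprime_5010.lis
#   quit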



A physics example


Disclaimer: this is only an example to get going. There might be better ways, and by the time you read this there will be newer versions of the software available, but we hope it will still be useful. All examples were created and tested on a redhat 7 based system (ssh -t bigmac +r RH7; don't forget the PROC_RESOURCES=RH7 statement in the jdf file) and will not work on the redhat 6 systems (we will add that information soon). We will use a simple physics analysis to demonstrate how to generate Monte Carlo data with the batch system, store the output in dfarm, and analyze it with ROOT.
The physics analysis we will use is a Z' with a mass of 700 GeV that we force to decay into 2 muons (this actually might be one of the first discoveries at the LHC). We keep the example and the analysis simple, since the scope here is to learn about the computing environment rather than how to do an analysis. All example scripts (jdf files, shell scripts, root macros and header files) are provided with this page. First we set up the cms environment:

source "/afs/fnal.gov/ups/etc/setups.csh"
source /afs/fnal.gov/files/code/cms/setup/cshrc
setup  dfarm
setup kerberos
setup fbsng
cmsim cms125  


Now we are ready to create the CMKIN/PYTHIA executable; we want to do that in a directory visible on all the worker nodes.

cd /storage/data/wenzel/test     (Please change)
setenv SCRATCH .
$CMS_VROOT/examples/cmkin/kine_make_ntpl_pyt.com


This will create the CMKIN/PYTHIA executable kine_make_ntpl_pyt.exe in your current directory.

Now we are ready to produce some Monte Carlo data using the provided scripts. Before submitting any job we want to make some changes to them; there are 3 files. The first is the job description file test_mc.jdf, in which you have to change the user-specific entries (EXEC, MAILTO, STDOUT and STDERR) to your own paths and mail address.

SECTION main
        EXEC=/data/wenzel/test/test_mc.csh  5000 1000
        QUEUE=medium
        NUMPROC=$(1)
        PROC_RESOURCES=RH7 disk:5
        MAILTO=wenzel@fnal.gov
        STDOUT=/data/fbs-logs/
        STDERR=/data/fbs-logs/


This .jdf file calls test_mc.csh with 2 arguments: the first is the run number of the first file produced, the second is the number of events per file. Again, the user-specific lines have to be changed to point to the right places and to set the run number and number of events you want.
Below is the listing of test_mc.csh.

#!/usr/local/bin/tcsh -f
#
############################################################################
#
#
# This is an example which generates  Z primes  and forces them to decay into
# a mu+ mu- pair (if you want to look at other channels this is easy to change).
# author: Hans wenzel
# wenzel@fnal.gov
#
#
############################################################################
#
#------------------------------------------------------------------------------------
# Please modify the following 2 lines according to your own needs:
# first point to the command file that controls cmkin
set EXE_SCRIPT=/data/wenzel/test/zprime.csh
# second point to the directory in the dfarm vfs where you want to store the output. 
set DFARM_DIR=/wenzel/zprime_new2/
#------------------------------------------------------------------------------------
setenv PATH /bin:/usr/bin:/usr/local/bin:/usr/krb5/bin:/usr/afsws/bin
/usr/krb5/bin/aklog
source /usr/local/etc/setups.csh
source /afs/fnal.gov/files/code/cms/setup/cshrc  
echo  $FBS_SCRATCH
cd $FBS_SCRATCH
set current=0
@ current = ($FBS_PROC_NO + $1)
setup kerberos
setup dfarm
cmsim cms125
$EXE_SCRIPT $current $2
/bin/ls
#
# since I forgot how to deal with PAW let's convert the ntpl files into .root files
# so we first set up the environment for root and then we use h2root to convert
# the ntuple files
#
setenv ROOTSYS /afs/fnal.gov/files/code/cms/ROOT/3.05.00/i386_linux24/gcc-2.95.2
setenv LD_LIBRARY_PATH ${ROOTSYS}/lib:.
set path = ( $path $ROOTSYS/bin)
foreach i (*.ntpl)
   h2root  $i  $i:r.root
end
#
# finally store the .root files in dfarm.
#
foreach i (*)
   dfarm put $i  ${DFARM_DIR}/$i
end


Finally this calls zprime.csh which controls the execution of CMKIN/PYTHIA:

#! /usr/local/bin/tcsh -f
#
############################################################################
#
# CMKIN run script (to create a HEPEVT ntuple)
# This is an example which generates  Z primes  and forces them to decay into
# a mu+ mu- pair (if you want to look at other channels this is easy to change).
# author: Hans wenzel
# wenzel@fnal.gov
#
#
############################################################################
#   
echo $0  ' now executing'
echo $#
set nparm = $#      
if (! ($nparm == 2) ) then
  echo 'you did not enter the right number of arguments'
  echo ' this script requires 2 arguments:'
  echo ' 1. run number'
  echo ' 2. number of events'
  exit 1
endif
#
set runnum  = $1
set events  = $2
#
# change the scratch variable to point to the directory where your
# cmkin executable resides
#
set SCRATCH=/data/wenzel/test
set JOBNAME=kine_make_ntpl_pyt
set EXEFILE=${SCRATCH}/${JOBNAME}
#
if ( ! -f ${EXEFILE}.exe ) then
  echo ' '
  echo The file ${EXEFILE}.exe does not exist
  echo Run the script ${JOBNAME}.com first
  exit 1
endif
#
if (-f zprime_${runnum}.lis) mv zprime_${runnum}.lis zprime_${runnum}.lis_old
echo
echo 'Starting execution of  ' $EXEFILE.exe
echo on `date`
${EXEFILE}.exe > zprime_${runnum}.lis <<EOF
LIST
C
C-------- Start of channel independent control cards ----------------------
C
C CMS energy (GeV)     
C
  ECMS 14000.            
C -----------
C No. of events to generate 
C
  TRIG    99999                   
C -------------
C No. of events selected (written out)
C
  NSEL    100                            
C -----------
C particle masses               (not always needed)
C --------------
C
C PMAS   6,1=175.             ! top quark
C PMAS  23,1=91.187           ! Z
C PMAS  24,1=80.22            ! W
C
  MSTJ  22 = 2                ! Decay those unstable particles
  PARJ  71 = 10.              ! for which c*tau < 10 mm
C
C Pythia/JETSET parameters
C ------------------------
C
C First set random seed
C
  MRPY 1= 123456
  KSEL 0
C
  CFIL 'EVTO',  'zprime_${runnum}.ntpl '
C
C --------------
C Set RUN number
C --------------
C
  KRUN $runnum
C
C do not  use  PDF library (would be MSTP 52=2)
C
  MSTP 51=7                 ! CTEQ 5L in pythia 6.1
C
C General parameters
C ------------------
C
  MSTU 21=1  ! Check on possible errors during program execution
  MSTJ 11=3  ! Choice of the fragmentation function
C
C general QCD parameters
C
  MSTP 81=1       ! multiple parton interactions (1 is Pythia default)
  MSTP 82=4       ! multiple parton interactions (see p209 CERN-TH 7112/93)
  MSTP 2=2        ! second order running alpha(s)
  MSTP 33=3       ! K-factor in alfas scale: alfas -> alfas(parp(33)*Q**2)
  PARP 82=3.20    ! pt cutoff for multi-parton interactions
  PARP 89=14000.  ! sqrt(s) for which PARP(82) is set
C
  TRIG 100000
  NSEL $events
C
C
C-------- End of channel independent control cards -----------------------
C
C-------- Start of channel dependent control cards -----------------------
  PMAS  32,1=700.             ! set Z' mass to 700 GeV
C
C Select Z' production sub-processes
C ----------------------------------
C
  MSEL 0                       ! = 0 for user specification of sub-processes
C
  MSUB 141=1                   ! ff -> gamma z^0 /Z^0'
  MSTP  44=3                   ! only select the Z' process
                               ! (see page 155 section 9.3 of pythia manual)
C
C-- Force Z0' to decay to mu+ mu- by switching all other channels off.
C-- Z0 decays would start at (174)
C
  MDME 289,1=0           !d dbar                off
  MDME 290,1=0           !u ubar                off
  MDME 291,1=0           !s sbar                off
  MDME 292,1=0           !c cbar                off
  MDME 293,1=0           !b bar                 off
  MDME 294,1=0           !t tbar                off
  MDME 295,1=0           !4th gen Q Qbar        off
  MDME 296,1=0           !4th gen Q Qbar        off
  MDME 297,1=0           !e+ e-                 off
  MDME 298,1=0           !neutrino e+ e-        off
  MDME 299,1=1           !mu+ mu-               on
  MDME 300,1=0           !neutrino mu+ mu-        off
  MDME 301,1=0           !tau+ tau-               off
  MDME 302,1=0           !neutrino tau+ tau-      off
  MDME 303,1=0           !4th generation lepton   off
  MDME 304,1=0           !4th generation neutrino off
  MDME 305,1=0           ! W+ W-                  off
  MDME 306,1=0           ! H+  charged higgs _ ???? off
  MDME 307,1=0           ! Z            +  ???? off
  MDME 308,1=0           ! Z            +  ????  off
  MDME 309,1=0           ! sm higgs + ???? off
  MDME 310,1=0           ! weird neutral higgs HA off
EOF
#
echo
echo  ------------------------------------------------------
echo  $0 finished on `date`
echo  ------------------------------------------------------
echo
############################################################################
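
Before submitting many processes it can be worth a quick interactive test of the worker script. A sketch, hand-setting the two environment variables that FBSNG would normally provide (the run number 9000, the event count 10 and the scratch path are just illustrative values):

setenv FBS_SCRATCH /tmp/$USER/fbs_test
setenv FBS_PROC_NO 1
mkdir -p $FBS_SCRATCH
/storage/data/wenzel/test/test_mc.csh 9000 10     # adjust the path to your own copy of the script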



Now we are ready to submit our job into the queue. The following command will  request 12 simultaneous processes.

fbs submit test_mc.jdf 12

So with the settings as in the listings above, this should create 12 sets of .root, .ntpl and .lis files, each with 1000 events.

[wenzel@hotdog58 ~]$ dfarm ls /wenzel/zprime_new2
frwr-   1 wenzel                 228063 06/13 18:32:25 zprime_5001.lis
frwr-   1 wenzel               44908544 06/13 18:32:26 zprime_5001.ntpl
frwr-   1 wenzel               21462204 06/13 18:32:27 zprime_5001.root
frwr-   1 wenzel                 228063 06/13 18:29:34 zprime_5002.lis
frwr-   1 wenzel               44908544 06/13 18:29:54 zprime_5002.ntpl
frwr-   1 wenzel               21462181 06/13 18:29:55 zprime_5002.root
frwr-   1 wenzel                 228063 06/13 18:29:34 zprime_5003.lis
frwr-   1 wenzel               44908544 06/13 18:29:34 zprime_5003.ntpl
frwr-   1 wenzel               21462067 06/13 18:29:35 zprime_5003.root
frwr-   1 wenzel                 228063 06/13 18:29:50 zprime_5004.lis
frwr-   1 wenzel               44908544 06/13 18:29:50 zprime_5004.ntpl
frwr-   1 wenzel               21462179 06/13 18:29:52 zprime_5004.root
frwr-   1 wenzel                 228063 06/13 18:29:38 zprime_5005.lis
frwr-   1 wenzel               44908544 06/13 18:29:39 zprime_5005.ntpl
frwr-   1 wenzel               21462196 06/13 18:29:39 zprime_5005.root
frwr-   1 wenzel                 228063 06/13 18:29:39 zprime_5006.lis
frwr-   1 wenzel               44908544 06/13 18:29:39 zprime_5006.ntpl
frwr-   1 wenzel               21462241 06/13 18:30:01 zprime_5006.root
frwr-   1 wenzel                 228063 06/13 18:29:32 zprime_5007.lis
frwr-   1 wenzel               44908544 06/13 18:29:32 zprime_5007.ntpl
frwr-   1 wenzel               21462184 06/13 18:29:33 zprime_5007.root
frwr-   1 wenzel                 228063 06/13 18:32:30 zprime_5008.lis
frwr-   1 wenzel               44908544 06/13 18:32:30 zprime_5008.ntpl
frwr-   1 wenzel               21462224 06/13 18:32:32 zprime_5008.root
frwr-   1 wenzel                 228063 06/13 18:29:34 zprime_5009.lis
frwr-   1 wenzel               44908544 06/13 18:29:35 zprime_5009.ntpl
frwr-   1 wenzel               21462186 06/13 18:29:35 zprime_5009.root
frwr-   1 wenzel                 228063 06/13 18:29:36 zprime_5010.lis
frwr-   1 wenzel               44908544 06/13 18:29:38 zprime_5010.ntpl
frwr-   1 wenzel               21462057 06/13 18:30:00 zprime_5010.root
frwr-   1 wenzel                 228063 06/13 18:31:04 zprime_5011.lis
frwr-   1 wenzel               44908544 06/13 18:31:04 zprime_5011.ntpl
frwr-   1 wenzel               21462175 06/13 18:31:06 zprime_5011.root
frwr-   1 wenzel                 228063 06/13 18:29:39 zprime_5012.lis
frwr-   1 wenzel               44908544 06/13 18:30:19 zprime_5012.ntpl
frwr-   1 wenzel               21462241 06/13 18:30:20 zprime_5012.root

We want to use ROOT to analyze the data that we produced. In this simple example we calculate the invariant mass of the dimuon pair and create pt and rapidity distributions. First copy the .root files out of dfarm and set up the ROOT environment (use the same ROOT version as used by ORCA):

dfarm get /wenzel/zprime_new2/zprime_5001.root .
dfarm get /wenzel/zprime_new2/zprime_5002.root .
dfarm get /wenzel/zprime_new2/zprime_5003.root .
dfarm get /wenzel/zprime_new2/zprime_5004.root .
dfarm get /wenzel/zprime_new2/zprime_5005.root .
dfarm get /wenzel/zprime_new2/zprime_5006.root .
dfarm get /wenzel/zprime_new2/zprime_5007.root .
dfarm get /wenzel/zprime_new2/zprime_5008.root .
dfarm get /wenzel/zprime_new2/zprime_5009.root .
dfarm get /wenzel/zprime_new2/zprime_5010.root .
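
With many files a small csh loop saves typing (a sketch, assuming the standard seq command is available on the node):

foreach n (`seq 5001 5010`)
   dfarm get /wenzel/zprime_new2/zprime_${n}.root .
end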


setenv ROOTSYS /afs/fnal.gov/files/code/cms/ROOT/3.05.00/i386_linux24/gcc-2.95.2
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:.:${ROOTSYS}/lib
set path = ( $path $ROOTSYS/bin )



root [0] .L h101.C
root [1] .x plot.C
root [2]

will produce the plots below. The first row shows the invariant mass and pt distribution of the Z'; rows 2 and 3 show the pt, rapidity and pseudorapidity distributions of the mu+ and mu- respectively. Note the plots were obtained from a previous run with more statistics than in the example files.


[Plots: Z' invariant mass and pt; mu+ and mu- pt, rapidity and pseudorapidity distributions]


To create h101.C and h101.h I used the ROOT MakeClass() function and then added the histograms to be filled in the event loop. If you want to create your own class, the procedure below shows how it is done; it produces the two files h101.h and h101.C. The generated h101.h declares the ntuple variables and sets up the branch addresses, while h101.C contains the Loop() method in which you put your analysis code.
For more information visit the ROOT site: http://root.cern.ch/

Below is an example of how to use MakeClass for one input file:

root [0]  TFile *f = new TFile("zprime_5009.root");
root [1] f->ls();
TFile**         zprime_5009.root        HBOOK file: zprime_5009.ntpl converted to ROOT
 TFile*         zprime_5009.root        HBOOK file: zprime_5009.ntpl converted to ROOT
  KEY: TTree    h101;5  HEPEVT
  KEY: TTree    h101;4  HEPEVT
root [2] h101->MakeClass()
Info in <TTreePlayer::MakeClass>: Files: h101.h and h101.C generated from Tree: h101
(Int_t)0



Similarly, if you want to analyze several files simultaneously you can chain them together and then create the class for the chain (here the first 10 files) in the following way:

TChain h101("h101")
h101.Add("zprime_5001.root")
h101.Add("zprime_5002.root")
h101.Add("zprime_5003.root")
h101.Add("zprime_5004.root")
h101.Add("zprime_5005.root")
h101.Add("zprime_5006.root")
h101.Add("zprime_5007.root")
h101.Add("zprime_5008.root")
h101.Add("zprime_5009.root")
h101.Add("zprime_5010.root")
h101.Print()
h101.MakeClass()


 



Frequently asked Questions

Q: Every now and then my terminal gets garbled and it is impossible to work in it.
A: Probably the window was resized; try issuing
 eval `resize` (or just the resize command).

Q: I can't ssh into another kerberized machine, what's going on?
A: The problem is probably the kerberos principal. The batch system obtains a special farm principal which can't be forwarded.
The output of klist probably looks similar to the example below; note that the default principal is user/cms/farm@FNAL.GOV instead of user@FNAL.GOV.

 [user@hotdog58 cmsim]$ klist
Ticket cache: /tmp/.fbs_k5cc_730.Exec.1
Default principal: user/cms/farm@FNAL.GOV

Valid starting     Expires            Service principal
06/12/03 12:23:07  06/13/03 14:23:07  krbtgt/FNAL.GOV@FNAL.GOV
        renew until 06/16/03 12:23:05

To get your principal you have to do:

kinit user
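
Afterwards klist should show your own principal as the default:

klist      # the "Default principal" line should now read user@FNAL.GOV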