Production at BNL
Reconstruction Production
See slides from the Physics discussions talk.
Script to run your personal mini production
To run a personal mini production, use the following script:
cvs -d /afs/usatlas.bnl.gov/project/localcvs co -dscripts fisyak/scripts/athena_job
scripts/athena_job -h
============================================================
Usage: scripts/athena_job [arguments]
-h Show this message and exit.
-i {input_file} Set input file name (to be passed to magda_findfile)
default is dc1.002002.lumi02.0346.hlt.pythia_jet_55.zebra
-j {jobOption } Set jobOption file
default is /afs/usatlas.bnl.gov/users/fisyak/public/prod/6.0.3/eg9.lumi02.603.job
-n {NoEvent } Set NoEvent to process
default is 1000
-o {output_file} Set output file
default is /usatlas/scratch/fisyak/results/dc1.002002.lumi02.recon.009.0346.hlt.pythia_jet_55.eg9.603.hbook
-s {script } Set script to run
default is /afs/usatlas.bnl.gov/users/fisyak/public/prod/6.0.3/recon.gen.v6.with603.bnl
-w {workdir } Set workdir file
default is /usatlas/scratch/fisyak
[name]=[val] Sets [name] to value [val] in the ARG hash
(including above list of parameters and
environment variables)
============================================================
Job default environment variables
ATLAS_ROOT = /afs/usatlas.bnl.gov
CMTCONFIG = i686-rh73-gcc295
CMTROOT = /afs/usatlas.bnl.gov/cernsw/contrib/CMT/v1r13
CMTSITE = BNL
CMTVERS = v1r13
============================================================
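The trailing [name]=[val] arguments override entries in the script's ARG hash (including the parameters and environment variables listed above). A hypothetical shell sketch of that override mechanism (the real script is Perl and is not shown here):

```shell
#!/bin/sh
# Hypothetical sketch of how athena_job's trailing [name]=[val] arguments
# override its defaults; the actual implementation is a Perl ARG hash.
NoEvent=1000                    # default from the help text above
workdir=/usatlas/scratch/$USER  # default working directory

# apply_overrides: assign each name=val pair over the current defaults
apply_overrides() {
  for arg in "$@"; do
    case "$arg" in
      *=*) eval "${arg%%=*}=\"\${arg#*=}\"" ;;
    esac
  done
}

apply_overrides NoEvent=100 CMTVERS=v1r13
echo "NoEvent=$NoEvent CMTVERS=$CMTVERS"
```

So, for example, `scripts/athena_job -n 100 CMTVERS=v1r13` would take effect the same way.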
This script does the following for you:
- defines the necessary environment variables if they are not already defined;
- [-w] sets up a working directory workdir (default is /usatlas/scratch/$USER);
- [-i] fetches input_file via magda;
- [-o] sets output_file;
- [-s] runs the script, which has to set up the ATLAS environment (via cmt) and all necessary data files,
- [-j] with the given jobOption file,
- [-n] for NoEvent events;
- if the run is successful, output_file is stored,
  input_file is released,
  and workdir is cleaned and removed.
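The success path at the end of a job can be sketched as follows (assumed, simplified logic; directory names are placeholders, and the magda release step is only noted in a comment):

```shell
#!/bin/sh
# Sketch of the success-path bookkeeping described above (assumed, simplified).
set -e
workdir=$(mktemp -d)                 # stands in for /usatlas/scratch/$USER
results=$(mktemp -d)                 # stands in for the results directory
touch "$workdir/job.hbook"           # pretend the job produced its output
status=0                             # pretend the run was successful

if [ "$status" -eq 0 ]; then
  mv "$workdir/job.hbook" "$results/"   # store output_file
  # input_file would be released via magda here
  rm -rf "$workdir"                     # clean and remove workdir
fi
```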
2001.lumi10 production with Release 6.0.4
The production uses the following directory structure
(please note that all these directories are group writable):
/usatlas/data04/fisyak/Production/6.0.4 : top level directory for 6.0.4 production
Under the above directory are the following subdirectories:
/scripts : scripts
/jobs : generated scripts
/archive : submitted scripts
/log : latest log files before parsing
/log/done : parsed log files, information for jobs is in DB
/ntuple : output hbook files
/ntuple/bad : hbook files which have problems (after running Validata.pl script)
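The layout above can be reproduced with a few commands; a sketch using a placeholder top directory instead of the real production area:

```shell
#!/bin/sh
# Sketch: recreate the 6.0.4 production layout under a placeholder top directory.
set -e
top=$(mktemp -d)   # stands in for /usatlas/data04/fisyak/Production/6.0.4
for d in scripts jobs archive log/done ntuple/bad; do
  mkdir -p "$top/$d"   # mkdir -p also creates the parent log and ntuple
done
chmod -R g+w "$top"    # all production directories are group writable
```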
Top level working directory is /usatlas/data04/dc1/work/6.0.4
Scripts are in CVS:
cvs -d /afs/usatlas.bnl.gov/project/localcvs co -dscripts fisyak/scripts
In directory scripts there are
- a package MySQLTable.pm, whose MySQLTable class handles DB records in the Production.jobdata table, and the following scripts:
- makeList.csh to create a list of input zebra files from magda
makeList.csh dc1.002002.lumi02 | sort -u > zebra_files.list
- Registrate.pl: creates entries for output LFNs in the jobdata table from the list
Registrate.pl zebra_files.list
- makeJobs.pl (should be run in the jobs directory):
  generates jobs from a given selection (I edit the selection by hand)
...
my $fetch_query = "SELECT * FROM jobdata WHERE (status='ABORTED' OR status='FAILED') AND release = ? AND dataset=?";
...
cd ../jobs
../scripts/makeJobs.pl
- lsf.csh (in the jobs directory):
  submits jobs to LSF (keep the partition number in the job name)
cd ../jobs
../scripts/lsf.csh dc1*
Jobs run one per node. The user account fisyak is limited to one job per node in the acas_long queue;
the at_cas_prod queue applies this limit to all users allowed to use it (atlreco can use this queue).
- JobStatus.pl: sets job status via bjobs
- UpdateDb.pl: parse log files and update status of finished jobs
UpdateDb.pl ../log/*.log
- Validata.pl:
  validates hbook files (run in the scripts directory)
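The makeList.csh step above relies on `sort -u` to drop duplicate LFNs returned by magda before the list is registered. A self-contained illustration of that dedup (the file names below are made-up placeholders, not real dataset entries):

```shell
#!/bin/sh
# Illustration of the sort -u dedup used when building zebra_files.list.
# The LFNs are made-up placeholders, not real dataset entries.
set -e
list=$(mktemp)
printf '%s\n' \
  dc1.002002.lumi02.0002.hlt.pythia_jet_55.zebra \
  dc1.002002.lumi02.0001.hlt.pythia_jet_55.zebra \
  dc1.002002.lumi02.0002.hlt.pythia_jet_55.zebra \
  | sort -u > "$list"
nfiles=$(wc -l < "$list")   # one duplicate removed, two unique names remain
```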
Yuri Fisyak
Last modified: Thu Jul 10 10:06:49 EDT 2003