TAMU Collider Group webpage



EMTiming System
Calibration Pages
(by Peter Wagner)







 
  1. Maintenance of the EMTiming system
    1. Verify a broken channel
    2. Procedures before Access in the Detector Hall
    3. Test Procedures
  2. How to make calibration ntuples
  3. How to produce calibration tables
  4. Putting Calibration Tables into the Database
  5. Validation
  6. External Links






How to make calibration files

The goal is to produce the calibration tables, as shown here in version 2, for the EMTiming threshold and slewing curves for each channel. The tables contain the fit parameters derived from the slewing and threshold curves that are produced for each set of runs.

Tools:
  1. Package: CalorTimeMods
    Executable: 
    timeMaker (produces a root file with ADC counts and raw/corrected times for each PMT and tower, for each event)
    Modules: AsdThresholdMod, SlewingMod 
    (produce the fits to the threshold/slewing curves)
  2. Executable: myexe (produces a root file with plots of the slewing and threshold curves), in the directory $calibs=fcdflnxX:~wagnp/cdfsoft/calibrations/ - the files used in the automation procedure are in Daniel's directory





Initialization of the Calibrations:

When the time comes for a new set of calibrations, 
Daniel's automation sends out an email with a link to a newly created webpage, e.g. this one, to facilitate checking whether the calibrations were done correctly for each channel. This webpage is the result of the automatic execution of all the steps below. If everything looks fine on this page, there is usually no need for further investigation, and an approval can be sent to Daniel.
Each set of calibrations is linked from the calibration main page, which
shows the status of the validation progress: http://www-cdf.fnal.gov/~danielw/Calibrations/Status.html
Even though the results are readily available, in many cases one has to redo the calibration process manually to fix problems. In that case the steps below have to be followed.






timeMaker is really part of the automation procedure that Daniel implemented. Nevertheless, I will briefly describe its essence, as it is helpful for debugging bad channels flagged by ObjectMon. If you rely on the automated procedure, you can skip this section.


Select the run range for timeMaker:
Between two shutdowns or similar; it should be a consistent data-taking period.



Select data in the run range
for timeMaker:
  • Trigger-table = "Physics"
  • beam-data, b-stream 
  • take as much data as possible; disregard runs with <10nb-1 (the minimum required to be processed by production)
  • disregard clock shifts; those are corrected later in the run-by-run corrections
The result is a list of runs.
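The luminosity cut above can be sketched as a simple filter over a run list. A minimal sketch, assuming a hypothetical two-column text file with run number and integrated luminosity in nb-1 (in practice the values come from the database query described below):

```shell
# Hypothetical input: one "run_number luminosity_nb" pair per line
# (run numbers and luminosities are made up for illustration).
cat > runlist.txt <<'EOF'
217990 152.3
217991 4.7
218002 88.0
EOF

# Keep only runs with at least 10 nb^-1; runs below that are not
# processed by production anyway.
awk '$2 >= 10 {print $1}' runlist.txt > goodruns.txt
cat goodruns.txt
```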



How many job segments to submit for timeMaker:

Calculate the luminosity per run from the database query:

and sum the values for all runs in the runlist. Run at most one job segment per store. To shorten the runtime by limiting the statistics at low energies, apply the following switches in the tcl:

(Max's current settings:)
cutLowEnergies set true
maxEntryCut set 10000

Currently, Daniel produces one file per run number in the automation. Because of this, one needs to copy Daniel's files from his remote location and concatenate them into one file per store manually. This is described in the next sections.






How to get and prepare Daniel's files:


In Daniel's email the full path to his files is given. Access the files via e.g. cdfopr@fcdfdata125.fnal.gov:/export/data1/cdfopr/current/streamb/ (with user "cdfopr" - one needs an account - and machine "fcdfdata125"). One can run the get_files.sh script to copy the files locally. This script needs a files.list file with a list of the remote files. To create a files.list file from the location specified in Daniel's email, do:
ssh cdfopr@fcdfdata125.fnal.gov "ls /export/data1/cdfopr/current/streamb/ | grep emtiming" > files.list
sed 's/^/rootd:\/\/fcdfdata125.fnal.gov\/\/export\/data1\/cdfopr\/current\/streamb\//' files.list > files.list_ ; mv files.list_ files.list

[this might not be necessary: Then select only the files for the run range specified in Daniel's mail by looking at the filenames: emtimingntuple_bb[run number in hex format].[file number in hex]calb.root ]
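Since the run number in the filename is hex-encoded, a quick shell snippet helps to locate the files for a given run. A minimal sketch (the run number is illustrative; only the prefix up to the dot is matched, since the file-number part varies):

```shell
# Build the filename prefix for a given run number; the run number is
# converted to lowercase hex as used in Daniel's file names.
run=218000
hexrun=$(printf '%x' "$run")
pattern="emtimingntuple_bb${hexrun}."
echo "$pattern"

# e.g. select the matching entries from files.list:
# grep "$pattern" files.list
```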

Then the files have to be concatenated.


How to concatenate the files:


Here is a short description of how I merged Daniel's files into one file per store:
To get the run numbers for each store, copy and paste from http://www-cdfonline.fnal.gov/java/cdfdb/servlet/TevStore?START=0

or execute:
./create_files_stores_list.pl 217990 219479 > files.stores.list

Copy Daniel's files to RootFilesEm/ and cd there and run:
../concatenate_ntuples.pl ../files.stores.list >& concatenate.log &
where
files.stores.list is a text file with the run numbers for each store, as described in concatenate_ntuples.pl.
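The per-store merging can be sketched as follows. This is not concatenate_ntuples.pl itself but a hypothetical stand-in: it assumes a "store_number run1 run2 ..." format for files.stores.list (store and run numbers are made up) and only prints the ROOT hadd commands; piping hadd_commands.txt into a shell would run them, with glob expansion picking up the per-file hex number.

```shell
# Hypothetical files.stores.list: one store per line, followed by its runs.
cat > files.stores.list <<'EOF'
4640 217990 217991
4641 218002
EOF

# Emit one hadd command per store, merging the per-run ntuples into
# one file per store (run numbers hex-encoded as in the file names).
while read -r store runs; do
  files=""
  for run in $runs; do
    files="$files emtimingntuple_bb$(printf '%x' "$run").*calb.root"
  done
  echo "hadd emcal0001_${store}.root$files"
done < files.stores.list > hadd_commands.txt
cat hadd_commands.txt
```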







How to produce calibration tables


Once we have one ntuple file per store on the local disk, one can start selecting the run ranges.



Mark broken channels:

Broken channels are kept in the directory 
$calibs/CalibFiles. To mark a channel with geomid e.g. 345 as broken starting at run number e.g. 191000, create and edit the file TdcCemBChannels_190999.dat so that it contains "345 2" if PMT 1 is broken, "345 1" if PMT 0 is broken, or "345 3" if both PMTs are broken. If there are no broken channels, edit the contents to be "0 0". Known broken channels are shown, for instance, on the bad channels log page. This page should be kept up-to-date whenever a channel goes bad.
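The second column of the bad-channel file is effectively a 2-bit mask (bit 0 = PMT 0 broken, bit 1 = PMT 1 broken). A minimal sketch that writes and decodes such a file, using the geomid and run number from the example above; the decode helper is hypothetical, for illustration only:

```shell
# Mark channel 345 with PMT 1 broken, valid starting at run 191000.
echo "345 2" > TdcCemBChannels_190999.dat

# Decode the 2-bit mask back into human-readable form.
decode() {
  while read -r geomid mask; do
    case "$mask" in
      0) echo "channel $geomid: no broken PMTs" ;;
      1) echo "channel $geomid: PMT 0 broken" ;;
      2) echo "channel $geomid: PMT 1 broken" ;;
      3) echo "channel $geomid: both PMTs broken" ;;
    esac
  done
}
decode < TdcCemBChannels_190999.dat
```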


Preparations for the Calibrations:

Find consistent run ranges for producing the calibration tables. Such a range can be bounded only by a shutdown or by a status change of a channel. First, split the run range according to the bad channels log file in the way shown above.

A helpful tool to check the run ranges and to check if there are other bad channel candidates is 
in the directory $calibs/RunRangePreparation/. Execute a command such as:
./MassProduceCalibs CardFiles/cardFile_cemthresh massprod_cem /cdf/home/wagnp/cdfsoft/calibrations/RootFilesEm/emcal0001_218 [begin run] [end run] >& job.log

Make sure that you also set the run range in the card file.
It produces one root file per run, one Threshold.dat file per run, and a log file job.log. The output of the .dat files is interesting, as it shows the efficiency, threshold, and width for each run. The script ScanTabChanges, which is run on those .dat files, shows whether any of those turn-on-curve parameters change, and if so at which run number. This determines the run ranges that can be used for each set of calibrations. Execute the command:
./ScanTabChanges [directory with the threshold tables from MassProduceCalibs] [file pattern] [begin run] [end run] [parameter]
e.g.
./ScanTabChanges ./Tables TimeThresholds-CEM 217990 222426 12 >& job_ScanTabChanges_12.log &

where "parameter" is either "0", "1" or "2", depending on whether we want to examine the efficiency, the threshold, or the turn-on width. (Combinations of the numbers are also possible, but never used; the most telling parameters for a broken channel are 1 and 2.) If one runs with the slewing card file, parameters 0, 1, 2 are the corresponding slewing fit parameters. The root files contain the turn-on curves and slewing curves, respectively, for each store. Note that 1_5_8 and 1_5_9, the chimney, always show up as bad.
The log file contains the interesting output: a "g" ("good") means that the parameter for that run lies within the expected range, which is determined from the Gaussian distribution of this parameter over all other channels (and vice versa for "b", "bad"). The mean and RMS of the overall Gaussian are shown in the first line. A channel is suspicious if it has a series of only "g" and then a series of only "b". In contrast, if a channel has a series of "g" with a few "b" interspersed, it is usually a fluctuation and nothing to worry about. If in doubt, check the turn-on curve of this channel for the runs indicated!
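The "series of g then series of b" criterion can be sketched with a small awk filter. The log format here (channel name followed by one g/b flag per run) is a simplified, made-up stand-in for the actual ScanTabChanges output:

```shell
# Simplified stand-in for the ScanTabChanges log.
cat > scan.log <<'EOF'
1_2_3 g g g g b b b b
2_4_1 g g b g g g b g
EOF

# Flag channels whose flags collapse to exactly "g then b", i.e. a
# solid good series followed by a solid bad series; interspersed
# single "b" flags are treated as fluctuations.
awk '{
  s = ""
  for (i = 2; i <= NF; i++) s = s $i
  gsub(/g+/, "g", s); gsub(/b+/, "b", s)   # "ggggbbbb" -> "gb"
  if (s == "gb") print $1, "suspicious"
}' scan.log > suspicious.txt
cat suspicious.txt
```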

When testing a bad-channel setting, one must run on at least 6 calibration tables!

The meaning of the parameters in the calibration tables can be looked up in the CalibModsBase file.


If this procedure finds a channel that is not already marked as bad, further investigation is required!
One has to debug in the usual way using threshold and energy distributions.





Calibrations Procedure:


Threshold and slewing calibrations are produced simultaneously with the command 
./MakeTables both [det] [begin run] [end run]. This script executes myexe, in which the slewing calibrations are done in two iterations:
  1. Energy only correction: this iteration produces cemslew0*.root
    The asymmetry histograms are not filled, and there is no asymmetry correction applied.

  2. Energy correction with asymmetry correction applied: this iteration produces cemslew1*.root
    The asymmetry corrections are produced using the energy-only corrections from the first step. With this asymmetry correction applied, the slewing curves are produced and fitted once more to yield the final slewing calibrations. After this iteration the slewing tables are produced.
The beginrun and endrun in the respective tcls are set to the given arguments.
There is a script that automates this: run.sh. It is executed via ./run.sh files.list, where "files.list" is the list of emtimingntuple* files in the RootEmFiles/ directory after copying from the remote location and concatenating. Usually I execute 
MakeTables myself. Note that executing MakeTables thresh and MakeTables slew separately does not work.
Important note: Copy the produced slewing tables and other files into a different directory before running on a new run range. Otherwise the asymmetry values get overwritten!
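The backup step in the note above can be sketched as follows; the file names and run range are illustrative stand-ins for the actual MakeTables outputs:

```shell
# Illustrative stand-ins for the real MakeTables output files.
touch cemslew0.root cemslew1.root

# Move everything from the finished run range into its own directory
# before starting the next range, so the asymmetry values survive.
range="218000_219479"   # hypothetical run range just calibrated
mkdir -p "done_${range}"
mv cemslew*.root "done_${range}/"
ls "done_${range}"
```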

For
evaluating the calibrations (see below) it is very useful to produce "monitoring" files. For this execute, e.g. for the CEM, ./myexe CardFiles/cardFile_cemmon cemmon, where the run range is specified in the card file.




How to decide if the calibrations were OK:


In cemslew0:

Residuals-2D:
=============
- shows the difference between fit and data points, averaged over all energy bins
=> the resolution is ~1.6ns, so if the average difference is around that number it is suspicious
=> a statement about the global fit quality for each tower

Fit-Chi2-2d:
============

=> a statement about the global fit quality for each tower, with a more detailed statement about each data point


cemslew1:

No plots to look at, really.


cemmon:
[for this one has to run myexe manually...]

AllHit Time:
============

- shows the corrected time after slewing corrections

and all other ObjectMon style plots


Procedure in case of problems:

If the above plots indicate an unexpected problem for a channel, e.g.
the residuals of the slewing curves "oscillate" but the turn-on curves for each run look fine, apply the usual procedure for investigating broken channels using turn-on curves etc. The first thing to check, however, is for which run range the slewing curve for this channel looks fine!





Putting Calibration Tables in the Database


  • Make sure that HAD and EM tables have the same total run range, even if the ntuples range differently!

The files for database-committing are all in 
wagnp@fcdflnx6:~/calibdb/.

Commit tables into the database:

Commit everything into online production - offline replicates this information

The tables to be committed should be put into the directory hardcoded in "TablesCommitter"; this is currently:
my $remoteSite = "maxi\@txpc6.fnal.gov";
my $remoteTabd = "/data09/maxi/CalibTables/TableFiles/";

  • "version number": if the argument is -1, then the version is 1 for the first commit and gets bumped up by the script every time the table for the same run range is updated
  • "status" = complete: always choose complete; not sure what "good" and "raw" mean
  • "algorithm" = 2: this is the algorithm used in NtupleMakerMod
makeTABLE.cc:
"run"=beginning of the run range
"version"=2
"algorithm"=2

"cid": unique number associated to each table


Execute:
./TablesCommitter cdfonprd v2 [calibVersion] [algorithm] [status]
(e.g. ./TablesCommitter cdfonprd v2 1 2 COMPLETE)
This figures out which tables have to be committed. It asks if you want to proceed: if you choose "no", it aborts; if "yes", it starts committing them!
(output in the .C file)
  • Make sure that each line in the printout is the same as in the text table! If this is not true, then the used set cannot be created with Max's scripts!
  • Write down "cid" number! (this is necessary only for the first table)
  • For the first table only, e.g. CEM: do "write time = y"; for all the rest choose "no"
  • The produced log file "commited_cdfonprd_COMPLETE_2" contains the time stamp of the commit
  • If there are problems then commit the table by hand; in this case make sure that the version number is bumped up by one; also, contact Max
to do:
  • make the script check that all begin-runnumbers are the same!


Create valid sets for all tables

This valid set has to contain N links, where
N = (no. of different detectors = 5) * (no. of tables per detector = 2) = 10

We need to create M valid sets, where M is the maximum number of tables per detector and table mode; e.g. if the PEM slewing table is split up into 3 sub-run-ranges (due to broken channels, ...) we need 3 valid sets.
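The counting rule can be sketched explicitly, with the numbers as given above:

```shell
# Links per valid set: 5 detectors, 2 tables (threshold + slewing)
# per detector.
detectors=5
tables_per_detector=2
links_per_valid_set=$(( detectors * tables_per_detector ))
echo "$links_per_valid_set"
```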

Then execute:
./ValueSetMaker_new cdfonprd 2 COMPLETE 1

It goes through all committed tables for which no valid set exists yet, with a version number equal to or higher than the one specified in the argument of 
./ValueSetMaker! => hit "no" when asked "want to write tables into database"
  • Check ALL cid numbers and compare them to the log file commited_cdfonprd_COMPLETE_2! If there is more than one valid set to be created, it starts with the lowest run range. (That is why one has to use the same run numbers for all tables, as said above!)
  • Then respond "yes" to "write the table" and write down the jobset number (unique identifier for each valid set!)
  • Give the jobset numbers and cid's to the offline group
  • Keep the log file "ValidSet_cdfonprd_2_222426_COMPLETE"


Create used_sets

Each used_set contains a run number and a pointer to the valid_set.

Input: the begin and end run numbers for the first run range and the number of physics runs in this run range(!)

Produce the used_sets by executing
./UsedSetMaker cdfonprd [version] [begin runnumber this range] [begin runnumber next range]

e.g.
./UsedSetMaker cdfonprd 1 228664 230622
if, e.g., it was the first version that was committed.
-> In the printout one sees that this script also figures out the number of runs: it prints all runs and the jobset:runnumber combinations

If all looks fine, skip all jobset:runnumber pairs until you get to the jobsets that haven't been used for making used_sets yet. Then press "Y" to create the used_sets.


Make an entry with the created jobset numbers

This is done at the calibration database elog.







Maintenance of the EMTiming system

The basics of the system maintenance are shown in CDF note 7479. Here I will present some additional tips that are not included in this CDF note.

i) How to remap a TDC channel
ii) How to test ObjectMon online
iii) How to test repaired channels
iv) How to update reference histograms in ObjectMon
v) How to take clock shifts into account in the ObjectMon tcl
vi) How to change the bad channel reference file for ObjectMon






  External Links:


For any comments or questions, please e-mail us.
Last update by Peter Wagner:

email
| Photon Group | VEGY Group | SUSY Group |
| Exotics Group | CDF|