Reduction of Photometric Telescope Data for SDSS


This document will lead you through the official reduction process for Photometric Telescope (PT) data. The reduction is still somewhat problematic, so it pays to draw on the experience of people who have already been through it. Some individuals to consult/commiserate with are:
Bruce Greenawalt - responsible for maintenance of this tutorial page and the automation scripts. He also has some experience with reducing both APO20 and USNO40 data.

Douglas Tucker - mtpipe coordinator. He is responsible for much of the MT pipeline and has extensive knowledge concerning the reduction of all MT data.

J. Allyn Smith - observer on the USNO telescope. He has extensive experience reducing USNO data, which has some similarities to the APO20 data.


Monitor Telescope Pipeline Tutorial.
Monitor Telescope Pipeline Home Page.


Basic steps in reduction sequence

0). Some background about PT data reduction
1). check observing log
2). edit mdReport file
3). spool data from tape
4). run preMtFrames
5). QA of bias and flat images
6). run mtFrames
7). QA from mtFrames
8). run excal
9). QA from excal
10). run koGenMTGSC on sdssdp1
11). run kali
12). QA from kali

Automation scripts in reduction sequence

13). mtFramesPrepare - replaces steps 3 & 4
14). submit mtFrames - replaces step 6
15). submit mtKali - replaces steps 10 & 11



0). Some background about PT data reduction

Throughout this document, the mjd 51318 is used as an example; replace 51318 with the mjd of the night you are reducing. Similarly, that night's data came from tape JL1136; replace JL1136 with the tapelabel of the tape containing the data you are reducing.

There are several useful things to keep in mind concerning the official processing of Photometric Telescope data.

To process a night's data, one must know the mjd (Modified Julian Date) of the night in question, which tape(s) contain(s) the data of interest, and which telescope was used for the observations.
It is assumed that the mjd is known from the outset.

The tape which contains the data for a given night can be determined in several ways. One may manually inspect the tapelogs in the directory /data/dp3.a/mt/tapelogs or check the tape mailing list. To make things easier, one can also use some dp procedures to find the correct tape.
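For example, if the tapelogs are plain text files that include the mjds of the nights on each tape (an assumption; check a tapelog yourself to confirm its format), a simple grep will identify which tape(s) mention a given night:
sdssdp3> grep -l 51318 /data/dp3.a/mt/tapelogs/*
The -l flag makes grep print only the names of the matching tapelog files.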

This document will deal primarily with data taken on the 20" telescope at Apache Point Observatory (APO). This telescope is referred to as the apo20.

As noted above, the descriptions below assume that we wish to reduce the data for mjd 51318, which was restored from tape JL1136. When reducing a different night's data, simply replace 51318 and JL1136 with the desired mjd and tapelabel.

The data is read from tape into directories which will be referred to as "spool" directories. There are two sets of spool directories, one on sdssdp2 (/data/dp2.h/mt/apo20/spool/) and one on sdssdp3 (/data/dp3.a/mt/dp/apo20/spool). Normally, the apo20 data is read into the space on sdssdp3. In this space there is a subdirectory for each tape containing data, and inside these subdirectories there are further subdirectories for the nights of data on that tape. Note that these night subdirectories are named with just the mjd, not preceded by "mjd".
The data for mjd51318 was read in from tape JL1136. Therefore the raw data can be found at /data/dp3.a/mt/dp/apo20/spool/JL1136/51318.
sdssdp3> pwd
/export/data/dp3.a/mt/dp/apo20/spool/JL1136
sdssdp3> ls
51317  51318

After processing, the results are written into the "run" directories. The run directory is on sdssdp3 (/data/dp3.a/mt/dp/apo20/run). In the run directory, there are subdirectories for each tapelabel. There is also the directory "listed_by_mjd", which provides symbolic links to the tapelabel directories. Therefore, one can get to the run data without remembering the tape containing the data.
 
sdssdp3> pwd
/export/data/dp3.a/mt/dp/apo20/run
sdssdp3> ls
JL0903           JL1054           JL1160             opFiles_temp
JL0906           JL1110           JL1161             secondary_patches
JL0907           JL1127           listed_by_mjd      temp_log
JL0921           JL1131           mtreports_cleaned
JL1027           JL1136           mtreports_raw
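
Since listed_by_mjd contains symbolic links into the tapelabel directories, listing a link with "ls -l" is also a quick way to recover which tape an already-reduced night came from (this only works once the link has been created, i.e. after preMtFrames has been run for that night):
sdssdp3> ls -l /data/dp3.a/mt/dp/apo20/run/listed_by_mjd/mjd51318
The link target runs through the tapelabel directory (JL1136 in this example).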

There are a few automation scripts included in the dp product which make some steps of the reduction easier; these are detailed below. However, it is suggested that they be used only after one is comfortable with the step-by-step method described here. That way, one has a better sense of how to handle the strange situations with the data that are sure to come up, and one can more easily separate problems in the automation scripts from problems with the data.


1). check observing log

Check the night's observing log on the sdss-ptlog mailing list. This will give you some basic information about the quality of the night's data. You should probably print out this log, as you will need it to help edit the mdReport file below and it helps you keep a hardcopy trail of the mtpipe processing of the night's data.

Connect directly to the mailing list here.


2). edit mdReport file

One needs to edit the mdReport file to fix some minor problems. This is a somewhat tedious process, but is key to the successful reduction of PT data. For this step of the reduction, having direct access to the observing log is helpful. You will want to compare the mdReport file with the observing log to look for any differences.

A) First make a copy of the mdReport file that is originally stored in /data/dp3.a/mt/dp/apo20/run/mtreports_raw.
sdssdp3> cd /data/dp3.a/mt/dp/apo20/run/mtreports_raw
sdssdp3> cp mdReport-51318.par mdReport-51318a.par 

B) One should now run the clean_report procedure in the mtpipe product on the copied mdReport. This procedure will correct many of the problems with the mdReport. Some work will still need to be done by hand, but it is much more manageable.
sdssdp3> setup mt
sdssdp3> mtpipe
mtpipe> clean_report mdReport-51318a.par

C) There have been problems with the image numbers in the mdReport file not matching the actual image numbers. In the past, the image numbers in the mdReport file have been too low by one (1). In other words, if the u band image of a standard star is listed as image 53970 in the mdReport file, then the u band image of that star is actually image 53971. Although this problem has occurred in the past, it may be fixed in more recent data. Therefore, one should check whether this problem exists in the night's data being reduced. The image number is the sixth column in the mdReport file.

If the data has been spooled onto disk, then the best way to check is to display the first flat field image of the morning. If the image looks more like a star field, then the image numbers are probably off. You will probably need to look at a few images to determine whether the image numbers are off and by how much. Once you determine the offset, you need to add this number to each image number in the mdReport file. A short tcl script exists in the mtreports_raw directory to make this job easier. The procedure in this script simply reads in the parameter file as a chain, adds the desired number to each exposure, then writes the chain out as a new parameter file. The procedure expects an mdReport named in the format mdReport-51318a.par and will output a new file named mdReport-51318b.par. One must be in the mtpipe product to properly use this procedure. The following will add one (1) to each exposure for night 51318:
mtpipe> source mdFix.tcl
mtpipe> mdFix 51318 1

D) The stellar data for the photometric telescope is taken in sequences of five (5) images. A complete sequence consists of one image in each filter. All images of the same sequence have the same sequence number. The sequence number is defined by the exposure number of the first image in the sequence. One should check each sequence with flavor Pri or Sec in the mdReport file to be certain that all of its images have the same sequence number. The sequence id number is the second column in the mdReport file.
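A quick, rough way to eyeball this is to print the sequence id and exposure number columns side by side. This assumes the data rows are whitespace-delimited, with the sequence id in column 2 and the exposure number in column 6 as described above; header and comment lines will also appear in the output, so treat this only as a visual aid (use the -51318b.par file if you created one in step C):
sdssdp3> awk '!/^#/ {print $2, $6}' mdReport-51318a.par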

E) There are possibly a few images scattered throughout the night which are listed with flavor Pri or Sec but are not part of complete sequences. Many of these are actually focus images. Check the observer's log for verification. Focus images should have their flavor set to "Focus".

F) All images that are not in standard star sequences should have sequence id numbers that match their exposure number. This includes Bias, Flat, and Focus images. If the image numbers are off for the night being reduced, then the sequence numbers may need to be changed.

G) A flavor of "unknown" for any image must be changed. Usually consulting the observer's report will help determine what the correct flavor should be. The procedures in clean_report and preMtFrames will fix most sequences of primary standard stars and secondary patches. For this reason, most sequences of flavor "unknown" will be changed to "Man".

H) Some images in the mdReport file will have "filter" and "targetName" both set to unknown. In addition, the "expTime", "ra" and "dec" will all be set to -9999. These images need to be removed from the mdReport file. One can either comment them out by placing a pound symbol (#) at the beginning of the line, or simply delete the line.
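Since these entries have expTime, ra and dec all set to -9999, a grep is a quick way to locate them (the -e flag keeps grep from treating the leading minus sign as a command-line option); the -n flag prints line numbers so the offending lines are easy to find in an editor:
sdssdp3> grep -n -e '-9999' mdReport-51318a.par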

I) Look for any missing entries in the mdReport file. One should look for individual missing exposures within a sequence as well as entire missing sequences. These can be spotted by comparing the mdReport file with the observing log. If an individual image is missing, simply cut-and-paste the immediately preceding or following image, then make any appropriate changes such as exposure number, filter letter, and exposure time.

J) Finally, look for anything else unexpected in the mdReport file. Reading the observer's comments is crucial; often the observer will tell you that a sequence or image is bad for any of various reasons. Getting the mdReport to an acceptable state is sometimes more of an art than a science, so it may be helpful to ask for assistance if something really weird comes up.



After the mdReport has been edited, make a new "run" directory in /data/dp3.a/mt/dp/apo20/run. You may need to make a new directory for the tapelabel first, then inside that directory make a subdirectory for the night's data. Copy the edited mdReport file into the mjd directory, being sure to give it the same name as the original mdReport file.
sdssdp3> cd /data/dp3.a/mt/dp/apo20/run
sdssdp3> mkdir JL1136
sdssdp3> cd JL1136
sdssdp3> mkdir mjd51318
sdssdp3> cd mjd51318
sdssdp3> cp ../../mtreports_raw/mdReport-51318b.par mdReport-51318.par
Print out the edited mdReport file in landscape format if you are keeping a paper trail. The following command works well for printing the mdReport file in a readable format:
sdssdp3> a2ps -1 -l mdReport-51318.par | flpr -q wh6e_hp5si


3). spool data from tape

One must retrieve the images from the DLT tapes and write them to disk in the spool directory. The first task is to find out which tape contains the data of interest, which was discussed above. Usually there is more than one tar file on a data tape, so one must also determine the correct tar file number. Once this is done, use OCS commands to get the tape mounted. One can then use mt and tar commands to retrieve the desired data from the tape.

Note: All "tape drive" commands should have "sdss30" replaced with the correct tapedrive being used.

To read the first tar file on the tape, one must skip over the tape label before reading in the tar file. The following commands will do this:
sdssdp3> mt -f `ocs_devfile -t sdss30` fsf 1
sdssdp3> tar -xvf `ocs_devfile -t sdss30`
If, after reading the first tar file, one also wants to read the second, then two more file markers must be skipped before the second tar file can be read. The same applies to any additional tar files.
sdssdp3> mt -f `ocs_devfile -t sdss30` fsf 2
sdssdp3> tar -xvf `ocs_devfile -t sdss30`
If one only wants the second tar file then use the following commands:
sdssdp3> mt -f `ocs_devfile -t sdss30` fsf 3
sdssdp3> tar -xvf `ocs_devfile -t sdss30`
A general rule applies here: to read in the Nth tar file starting from the beginning of the tape, one must first skip forward (2N-1) file markers.
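As a concrete, minimal example of this rule (assuming the tape has been mounted and is positioned at its load point), the following csh commands would read the third tar file; adjust the value of n as needed:
sdssdp3> set n = 3
sdssdp3> @ skip = 2 * $n - 1
sdssdp3> mt -f `ocs_devfile -t sdss30` rewind
sdssdp3> mt -f `ocs_devfile -t sdss30` fsf $skip
sdssdp3> tar -xvf `ocs_devfile -t sdss30`
The explicit rewind simply guarantees that the file-mark count starts from the beginning of the tape.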

Look at the tutorial guide about spooling data from tape.


4). run preMtFrames

This part of the pipeline sets up the "run" directory so that the remaining sections of the pipeline can be run. In addition, it will do some preliminary QA. Besides making histograms of all bias and flatfield images, preMtFrames will check the status of the mdReport file. If the mdReport is found to have problems, you will need to edit it further and then rerun preMtFrames.

preMtFrames may be run repeatedly with no problems. Each time, it checks what has already been done and only does what still needs to be done.

To run preMtFrames, one must know the complete pathname of both the "spool" and "run" directories. One then uses the following command:
mtpipe> preMtFrames /data/dp3.a/mt/dp/apo20/spool/JL1136/51318 \
         /data/dp3.a/mt/dp/apo20/run/JL1136/mjd51318  APO20  51318  1
After this finishes, create the symbolic link from the listed_by_mjd directory to the tape-label directory.
mtpipe> cd /data/dp3.a/mt/dp/apo20/run/listed_by_mjd
mtpipe> ln -s /data/dp3.a/mt/dp/apo20/run/JL1136/mjd51318 mjd51318
Look at the tutorial guide about running preMtFrames.


5). QA of bias and flat images

Two of the files created by preMtFrames contain histograms of the counts in the bias and flatfield images for that night. These files are:
hgBiasFrames-51318.ps
hgFlatFrames-51318.ps
Use ghostview to look at these files.

The bias histograms should each have two peaks, because the CCD chip in the PT camera is a dual-amplifier CCD. Check that the peaks are narrow and at roughly the same count level (x-axis) in all images. Sometimes the first bias of a sequence has significantly higher counts than the others. Edit the mdReport file to set the quality of any such bad bias images to "bad".

Inspect the flat histograms. Because the flatfield sequences are taken like stellar sequences, i.e. one in each filter, one needs to compare the histograms for each filter with other histograms of the same filter. Again, if there is a problem, change the quality of the image to "bad".

Print out both of these files if you are keeping a paper trail.
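If you are, the same pipe-to-flpr pattern used above for the mdReport file should work for these postscript files as well (assuming wh6e_hp5si is still an appropriate print queue):
sdssdp3> cat hgBiasFrames-51318.ps | flpr -q wh6e_hp5si
sdssdp3> cat hgFlatFrames-51318.ps | flpr -q wh6e_hp5si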


6). run mtFrames

This is a long process, taking roughly 8 hours. It should be broken into two calls. The first will reduce the bias and flat field images; the second will use these images to reduce the stellar images.

sdssdp3> cd /data/dp3.a/mt/dp/apo20/run/listed_by_mjd/mjd51318
sdssdp3> mtpipe -command "mtFrames -onlyflat -verbose=3" \
	        >>&! mtFrames.out &
sdssdp3> mtpipe -command "mtFrames -skipbias -skipflat -dropobj \
	   -dropimages=5 -verbose=2" >>&! mtFrames.out &
The redirection syntax above is for csh/tcsh; in bash, replace ">>&! mtFrames.out" with ">> mtFrames.out 2>&1". The STDOUT and STDERR from mtFrames will be put in the file mtFrames.out. One can "watch" this output by using the tail command.
sdssdp3> tail -f mtFrames.out
When mtFrames ends, you will need to do a CTRL-C to exit out of tail.
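Once mtFrames has finished, a quick and informal sanity check before moving on to the QA step is to scan the log for obvious error messages. The exact wording of mtFrames error messages is not guaranteed, so treat this only as a first look; the real check is the QA described in the next step:
sdssdp3> grep -i -e error -e abort mtFrames.out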

Look at the tutorial guide about creating master flatfields and bias frames.
Look at the tutorial guide about reducing starfields.


7). QA from mtFrames

Inspect the postscript file qa_mtFrames-51318.ps with ghostview. Check that the sky counts are low on the first plot. Check that the FWHM is reasonable, hopefully below 4, on the second plot. Check that the number of objects found is reasonable on the third plot. The main thing to check on this third plot is that there aren't too many images with zero (0) objects found. Currently, the scaling for this plot is not optimized, so many images will have the number of objects found maxed out at 300.

Print out all plots in the qa_mtFrames-51318.ps file if you are keeping a paper trail.

Typically, everything looks reasonable at this step. So you are probably ready to move on. But you can look at qa_mtFrames-51318.ps for comparison.

Look at the tutorial guide about performing QA on mtFrames output.


8). run excal

This step will actually calculate the photometric solution for the primary standard stars observed for the night. It is an interactive process which permits the user to delete data points from the solution fit.

However, first we want to check the internal consistency of the standard star file. This is done with the following commands:
mtpipe> set fcDir [envscan \$MTSTDS_DIR]/primary
mtpipe> check_standard_file $fcDir metaFC.fit
This check can often be omitted, but since it doesn't take very long to run, we will make it here. If there are problems, you can look at the tutorial guide about internal consistency of the standard star file.


To actually run excal, one should use the following commands. Again the tail command will allow you to follow the output.
sdssdp3> cd /data/dp3.a/mt/dp/apo20/run/listed_by_mjd/mjd51318
sdssdp3> mtpipe -command "excal -drop -verbose=4" >>&! excal.out &
sdssdp3> tail -f excal.out
The redirection syntax above is for csh/tcsh; in bash, replace ">>&! excal.out" with ">> excal.out 2>&1". While determining the solution, you will be presented with plots showing the residuals between the data points and the fit. The horizontal dashed lines on these plots show the rms residuals. We would like these rms values to be less than 0.02 in u and less than 0.01 in the other filters. If the residuals are greater than these values and there are points with large residuals on the plots, then we want to delete those points from the fit. Simply place the cursor near the point to be deleted, then press the "d" key. To undelete a point, press the "u" key when the cursor is near the point of interest.

After you are happy with the plot from a given filter, press "RETURN" and you will be presented with the plot for the next filter in the sequence. Once all 5 filters have been examined, a new fit will be calculated if any points were deleted or if the rms residual is still above the limits stated above. You will again be presented with residual plots. You may again need to delete points if the residuals are still too large.

We want to keep the average residuals as low as possible without deleting too many data points. Please don't get "delete-happy". If there aren't any points to delete and the residuals are still too high, then there is nothing to do but live with the high errors. In this case, excal may cycle through the sequence of plots several times before giving up. Just be patient.

A photometric solution is calculated separately for blocks of time during the night. The length of a block is defined by the "hoursPerSolution" keyword in the exParam.par file. Depending on the amount of data taken during a night, there may be 1, 2, 3 or more blocks. Increasing the value of this keyword will decrease the number of blocks in a night, which can make the fit work better in some cases.
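To see the current value before deciding whether to change it, one can simply grep for the keyword (this assumes the exParam.par being used is in, or linked from, the night's run directory; adjust the path if your setup keeps it elsewhere):
sdssdp3> grep hoursPerSolution exParam.par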

The most common problem which causes excal to not run properly is the failure to find an astrometric solution. The first primary sequence is used to determine the astrometric solution for all sequences of the night. Some fields don't have very many bright stars and are not good for calculating an astrometric solution. If excal crashes while trying to determine the astrometric solution, try editing the mdReport file to place a different primary sequence first. Then re-run excal. Often this will solve the problem.

Look at the tutorial guide about calculating the photometric solution.


9). QA from excal

Inspect the postscript file qa_excal-51318.ps with ghostview. You want to look at each plot to make sure things look ok, but we are mainly interested in plots 1, 5 and 8. Print out all plots in this file if you are keeping a paper trail of the reduction.

In plot 1, make sure that there aren't many sequences with zero (0) objects matched. Less than five (5) sequences with zero (0) objects matched is acceptable.

In plot 5, make sure that the calculated zero points for each filter in each plot make sense. The value of these zero points is crucial and should not vary much from one night to another. It may be helpful to compare the values to those determined for other nights.

In plot 8, you want to make sure the error bars on the extinction points are not too large. Acceptable error bars should be no bigger than about 2-3 times the point size. If the error bars are larger, then you may want to decrease the number of blocks in the night (see the suggestions under "run excal"). You also want to check the level of the extinction values to be certain that they agree with values determined for other nights, although the extinction values might vary slightly from one night to another.

Look at qa_excal-51318.ps for comparison.

You will also want to look at the html pages created by excal concerning the photometric solution for the night. This can be done by pointing your web browser at file:/data/dp3.a/mt/dp/apo20/run/listed_by_mjd/mjd51318/html/mtres-51318.html. If you are keeping a paper trail of the reduction, then print out the main web page titled "Monitor Telescope Excal Output" and the sky measurements page. These should be printed on one of the color printers on the 8th floor. Try the printer wh8w_tek380. You should also print out the photometric solution page on the standard laser printer, but in landscape format.

Look at the tutorial guide about performing QA on excal output.


10). run koGenMTGSC on sdssdp1

The koGenMTGSC procedure will produce a set of finding charts for the secondary patches. The finding charts will be fits images with names of the format kaGSC-*.fit, where * is replaced with the name of the secondary patch. You will need to run this procedure on sdssdp1.
sdssdp1> cd /data/dp3.a/mt/dp/apo20/run/listed_by_mjd/mjd51318
sdssdp1> setup ko
sdssdp1> ko
ko> koGenMTGSC mdReport-51318.par Sec 1.0
ko> sessionEnd
ko> quit
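After the ko session ends, a simple listing confirms that the finding charts were actually written; this assumes they land in the run directory you started from, with the kaGSC-*.fit naming described above:
sdssdp1> ls kaGSC-*.fit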
Look at the tutorial guide about creating charts for the secondary patches.


11). run kali

This part of the pipeline will actually calibrate the stars in the secondary patches using the photometric solution determined in excal for the primary standard stars. The following commands will run kali from within the run directory:

sdssdp3> cd /data/dp3.a/mt/dp/apo20/run/listed_by_mjd/mjd51318
sdssdp3> mtpipe -command "kali -verbose=4" >>&! kali.out &
The redirection syntax above is for csh/tcsh; in bash, replace ">>&! kali.out" with ">> kali.out 2>&1". To follow the output being placed in the file kali.out, simply use the tail command.
sdssdp3> tail -f kali.out
Look at the tutorial guide about calibrating the secondary patches.


12). QA from kali

The two postscript files you need to check after kali has finished running are:
qa_kali-51318.ps
qa_kali-51318_MTsanity.ps
The first of these, qa_kali-51318.ps, contains 5 plots. We are most concerned with the last 3, but the first is also important.
The first plot shows the percentage of objects matched in each Secondary patch during the night. You want to check to make sure that there aren't many sequences with zero (0) matches.

The last three (3) plots show color-color plots for the stars in the Secondary patches. There are curves on the first two (2) of these plots showing the positions of main sequence stars. We expect some scatter in these plots, but the bulk of the data points should follow these curves. In the third plot, the stars are tightly grouped just to the upper right of (0,0), with a tail extending towards the upper right. The main thing to look for in these plots is that most stars fall along a single sequence. The presence of a parallel sequence is a sure sign of trouble.

The second file, qa_kali-51318_MTsanity.ps, compares the determined stellar magnitudes for stars in overlap regions between Secondary patches. Not all Secondary patches overlap, so there may not be many data points in the plots for the night you reduce. This file consists of a single page of 15 plots divided into three (3) rows. Each row consists of a different type of plot which illustrates the variations in stellar magnitudes for the same stars observed in two Secondary patches. Within a row there are five (5) plots, one for each filter: u, g, r, i, z.

The first row of plots shows the raw magnitude difference as a function of magnitude for stars in overlap regions. In each plot the distribution of points should be narrow at the left edge, centered around zero (0). The distribution should spread out as one moves towards fainter stars, towards the right. If the distribution is centered significantly above or below a zero (0) magnitude difference, then there may be a photometric offset in one of the patches providing overlap regions. This suggests that there are problems with the photometric solutions at some stage of the reduction.

The second row shows the magnitude difference in units of standard deviations. The distribution should be roughly uniformly wide at all magnitude levels and centered on a zero (0) difference. Again, look for distributions that are significantly offset from a zero difference.

The third row contains histograms of the magnitude differences in units of standard deviations. The distributions in these plots should be roughly Gaussian in shape, with about 68% of the data within one (1) sigma of the center, which should be near zero (0).

If you notice problems with any of the kali output, some stage of the reduction must be redone. As of now, deciding what went wrong and why is difficult at best. The best course is to talk with either Douglas Tucker or Bruce Greenawalt. Look at the tutorial guide about performing QA on kali output.



13). mtFramesPrepare - replaces steps 3 & 4

This automation script in the dp product will spool in the data and run preMtFrames. It will then create the symbolic link in the listed_by_mjd directory to the run directory.

Look at more detail concerning the mtFramesPrepare script.


14). submit mtFrames - replaces step 6

This automation script in the dp product will process all images through the mtFrames part of the pipeline. It reduces the bias, flat and stellar images.

Look at more detail concerning the submit mtFrames script.


15). submit mtKali - replaces steps 10 & 11

This automation script in the dp product will find/create the charts for the secondary patches, then it will calibrate the secondary patches using the photometric solution determined for the primary standard stars.

Look at more detail concerning the submit mtKali script.



Last updated: 2 June 1999
Bruce Greenawalt (bgreen@fnal.gov)