RemRun -- setup of MDSplus-based TRANSP runs accessing remote data
===================================================================

(dmc 12 Mar 1998 -- this is only available on VMS at the moment).

A remote TRANSP run production service can now be set up for sites
which use MDSplus to store the TRANSP input data and output data.
This system has been set up and is working now for CMOD.  With
minimal effort, other tokamak sites which use MDS+ could join.

(updated dmc 16 Mar 2001)  A standard for the MDS+ TRANSP tree has
been established; see

   http://w3.pppl.gov/transp/MDSPLUS/tree.html

We now assume this standard is being followed.  That makes the run
request process simpler, because needed information can be found in
expected places.  Users of this service must make their TRANSP
namelist and input data available in an MDS+ TRANSP tree conforming
to the standard.  We do not plan a Ufiles-based remote run service.

For more information write to dmccune@pppl.gov.

-----------------------------------------------------------------------------

User information --

Request runs by sending a file <name>.REQUEST to the agreed-upon
hidden writable anonymous ftp directory; <name> may be chosen by the
requestor.  This file will contain information encoded in the format

   #<keyword> = <value>

where <keyword> is a single uppercase word and <value> is a string.
Usually <value> is a single word, but for the keyword #COMMENT,
<value> can be a string of multiple lines.

Here are the required keywords:

   #MDS_SERVER = ... MDS+ server (e.g. "cmoda.pfc.mit.edu")
   #TREE  = ... tree name (e.g. "transp")
   #TOK   = ... tokamak id (e.g. "CMOD")
   #SHOT  = ... shot number of the MDS+ TRANSP tree
   #YEAR  = ... four digit year-of-shot
   #EMAIL = ... run requestor's Email address

The following are tree node references ("calls") to make upon
termination of the job:

   #TRANSP_DONE  = ... run completion (normal status)
   #TRANSP_ERROR = ... run completion (error status)

The following is optional: if set, the MDS+ TRANSP tree should
contain an "expert module", a fortran subroutine, at node
TOP:EXPERT_CODE.  This is a rarely exercised option.

   #EXPERT_MODULE = TRUE, if an expert module is to be used.

The following controls allow the MDS+ capable scruncher program to be
used to set up the plasma boundary evolution input data for TRANSP.
If these are not specified, the TRANSP input subtree must already
contain the prescribed plasma boundary evolution.

   #SCRUNCHER_OPTION = ... SCRUNCHER data option (e.g. "CMOD").
   #SCRUNCHER_ASYM   = ... TRUE/FALSE; set TRUE for an up-down
                           asymmetric boundary.

................................

At run request time, the TRANSP MDS+ tree is expected to contain the
following information:

   TOP:NAME_LIST   -- the namelist for the run
   TOP:COMMENT     -- run comments (can also be in the request file)
   TOP:EXPERT_CODE -- (optional) special EXPERT subroutine (fortran
                      code) for the run
   TOP:SOURCE_SHOT -- the shot number of the experiment from which
                      the input data is derived.  In practice this is
                      not always the same as the tree's shot number,
                      for various reasons.
   TOP:T_OFFSET    -- time offset for input data (optional -- 0.0
                      assumed).
   TOP:INPUTS...   -- the input data subtree must be populated
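For concreteness, here is what a complete <name>.REQUEST file might
look like.  All values below are hypothetical, and the #TRANSP_DONE /
#TRANSP_ERROR node-call expressions are site-specific placeholders,
not working examples:

   #MDS_SERVER = cmoda.pfc.mit.edu
   #TREE = transp
   #TOK = CMOD
   #SHOT = 12345
   #YEAR = 2001
   #EMAIL = someuser@pfc.mit.edu
   #TRANSP_DONE = <site-specific node call, normal completion>
   #TRANSP_ERROR = <site-specific node call, error completion>
   #SCRUNCHER_OPTION = CMOD
   #SCRUNCHER_ASYM = FALSE
   #COMMENT = trial run submitted through the remote run service.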
Execution of the run service will result in the following nodes being
filled:

   TOP:DEVICE       -- name of experiment (e.g. CMOD or D3D).
   TOP:DATE_BEGAN   -- date-time of start of execution of the TRANSP
                       run
   TOP:NAME_OF_CODE -- "TRANSP"
   TOP:DATE_EXEFILE -- date-time stamp of the TRANSP executable
   TOP:REMOTE_ID    -- PPPL local runid used for the TRANSP run
   TOP:SHOT_IDATE   -- PPPL "shot-year" 2 digit code, part of the
                       path to the PPPL archived copy of the run
                       output files.

and all the output data and multigraph nodes are written (and created
if necessary):

   TOP:MAPPING...
   TOP:TRANSP_OUT...
   TOP:OUTPUTS...
   TOP:MULTIGRAPHS...

The following tree nodes are neither read nor written:

   TOP:MOD_LOG
   TOP:RUN_TYPE
   TOP:RUNID
   TOP:USER_ID

................................

Email is sent when the run is successfully entered into the PPPL run
queue; another Email is sent when the run completes execution.  Email
will also be sent in case of error.

On completion of a run, the log file <runid>_TRANSP.LOG is written to
the output anonymous ftp server, and can be picked up by the user's
software.  The run number <runid> is written in the tree at node
TOP:REMOTE_ID.

................................

Limited functionality is available to view or manipulate runs already
in the queue.  These functions are requested by sending a file
<name>.ACTION to the ftp server.  The <name>.ACTION file contains a
single-line "action" command.  Available actions are:

   RMONITOR DQ              ... show the PPPL run queue.
   TR_CANCEL <tok> <runid>  ... cancel the run; delete it from the
                                PPPL system.
   TR_ARCHIVE <tok> <runid> ... halt the run (prematurely), write the
                                output to the tree, and delete the
                                run from the PPPL system.
   TR_LOOK <tok> <runid>    ... write interim output to the tree; the
                                run continues execution and/or
                                remains in the PPPL system.

<runid> is needed to identify the run to the run service.  This piece
of information was written into the user's MDS+ tree at
TOP:REMOTE_ID when the run was initiated.

.................................

What to do about aborted runs
=============================

Remote users should not ignore aborted runs -- these take up space on
the PPPL servers.  Rather, the user needs to decide whether the run
should be deleted, archived "as is", or whether PPPL assistance is
needed to debug it.  For all options but the last, this is
accomplished by writing a one-line <name>.ACTION file to the ftp
server, containing the string given below.

In these .ACTION file command strings, <tok> is your local device
code, e.g. "CMOD", and <runid> is the PPPL server's local id for the
run.  This id was written into the tree node TOP:REMOTE_ID at run
start time.  Specifically:

To look at the interim output of the partial run, without deciding:
   -- use "TR_LOOK <tok> <runid>" in the .ACTION file... the data is
      written to your TRANSP MDS+ tree and you can examine it with
      your local software; the run remains on the PPPL server.

To "archive" the run "as is", removing it from the PPPL server:
   -- use "TR_ARCHIVE <tok> <runid>" in the .ACTION file... the data
      is written to your TRANSP MDS+ tree and you can examine it with
      your local software.

To delete the run from the PPPL server without saving the data:
   -- use "TR_CANCEL <tok> <runid>" in the .ACTION file... no data is
      written to the local MDS+ tree.

(Your local GUI software *should* have buttons for selecting these
options -- consult your local computer experts.)

If you suspect that your run has failed due to a TRANSP bug, and you
want a PPPL expert to investigate, send email to transp@pppl.gov,
including as much information as you can provide as to what the
problem might be.
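Every .ACTION command needs the <runid> stored at TOP:REMOTE_ID, so
it may help to see how that node can be read programmatically.  The
following is only a minimal sketch, using the present-day MDSplus
Python thin-client bindings (which postdate this service -- software
of this era would use IDL or DCL); the server, tree, and shot values
are hypothetical:

   from MDSplus import Connection   # MDSplus thin-client bindings

   # Hypothetical values -- substitute the server, tree name, and
   # shot number given in your .REQUEST file.
   conn = Connection('cmoda.pfc.mit.edu')
   conn.openTree('transp', 12345)

   # TOP:REMOTE_ID holds the PPPL local runid, written at run start;
   # it is the <runid> in TR_LOOK / TR_ARCHIVE / TR_CANCEL commands.
   runid = str(conn.get('\\TRANSP::TOP:REMOTE_ID')).strip()
   print(runid)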
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------

info for the PPPL maintainer ...

Synopsis of setup procedure --

   a) set up "hidden" anonymous ftp directories (one for requests,
      one for log file output).

   b) know the client tree structures containing the TRANSP data --
      i.e. know the TRANSP MDS+ tree standard.

-----------------------------------------------------------------------------

The relevant PPPL run production system components are:

source_manager:TRMANAGER.COM --

   runs allcom:getlftp.com ... to fetch all requests on the anonftp
      server.
   runs allcom:ftprun.com ... to process the run request(s).
      [ftprun.com does the work to enqueue the run so that it looks
      like a normal, locally generated request].
   (the rest of TRMANAGER runs as before)

allcom:GETLFTP.COM --

   uses allcom:ANONFTP.COM and other DCL to list the contents of, and
   fetch files from, the anonymous ftp server.  Request files are
   copied to trhome:[transp.ftpreq.*], with a subdirectory for each
   tokamak.  Files older than 30 days are removed from this directory
   tree every night.

allcom:FTPRUN.COM --

   runs allcom:REQ_LOOK.COM to partially parse the run request file
   and do rudimentary validity checks.  REQ_LOOK.COM also selects a
   locally valid RUNID for the run, and runs a program MDSMARK to
   write the local RUNID into the TOP:REMOTE_ID node of the
   requestor's TRANSP tree.

   runs allcom:FTP_ENQUEUE.COM to fully validate and then enqueue the
   run, or else Email an error report to the requestor.

   Working files are stored in trhome:[transp.ftprun.*], with a
   subdirectory for each tokamak.  Files older than 30 days are
   removed from this directory tree every night.  The
   trhome:[transp.ftprun.<tok>] directory is used as a local staging
   directory for the enqueueing of the run.

allcom:FTP_ENQUEUE.COM --

   runs an IDL procedure, called by SOURCE_MDSPRO:MDS_FETCH_NML.COM,
   to fetch the namelist out of the run's MDSplus tree.  It also uses
   the same procedure to fetch run comments and, if requested, an
   "expert file": a user-supplied fortran subroutine that can be
   utilized for special-purpose one-time modifications to TRANSP
   operation.

   runs EXE:REMRUN.EXE to fully parse the request file, set up
   scripts for PRTRAN and MDSPLOT (the TRANSP output tree writer),
   and modify the namelist to allow remote access to the input data
   by TRDAT.  A script for SCRUNCHER may also be produced.

   (if requested in the request file) runs EXE:SCRUNCHER.EXE in
   remote MDSplus mode to prepare the TRANSP plasma boundary moments
   input data.

   runs EXE:PRTRAN.EXE to set up the local run DCL batch files, and
   to allow entry of comments on the run.

   runs EXE:TRDAT.EXE to verify access to all input data.

   **note** each of the above procedures creates its own LOGFILE,
   which hangs around for 30 days in TRHOME:[TRANSP.FTPRUN.<tok>]:

      <runid>_MDS_FETCH.LOG
      <runid>_REMRUN.LOG
      <runid>_SCRUNCH.LOG
      <runid>_PRTRAN.LOG
      <runid>_TRDAT.LOG

   runs an `enqueue' command to queue the run to the production
   system.

allcom:CKSPECIAL.COM --

   the PRTRAN-generated DCL now includes a call to this procedure.
   If this procedure detects the presence of MDSPLOT scripts, it runs
   them, writing the TRANSP run output to the remote MDSplus server.
   CKSPECIAL will also copy the run .log file and the run .msg file
   to an outgoing directory on the anonymous ftp server, which the
   remote site may want to pick up.
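For reference, the request-file grammar checked by REQ_LOOK.COM is
simple: "#KEYWORD = value" lines, with only #COMMENT allowed to span
multiple lines.  The production parser is DCL and is not reproduced
here; the Python sketch below merely documents the format.  How the
continuation lines of a multi-line #COMMENT are marked is an
assumption (bare lines without a leading "#" are treated as
continuations of the previous keyword):

   def parse_request(text):
       # Parse "#KEYWORD = value" request lines into a dictionary.
       fields = {}
       key = None
       for line in text.splitlines():
           if line.startswith('#') and '=' in line:
               key, _, value = line[1:].partition('=')
               key = key.strip()
               fields[key] = value.strip()
           elif key is not None and line.strip():
               # assumed continuation of a multi-line #COMMENT value
               fields[key] += '\n' + line.strip()
       return fields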
-------------------------------------------

more on scruncher...

If a service is to be set up for a site other than CMOD, the
trickiest part of the setup will be dealing with SCRUNCHER.  There is
a special, non-generalizable CMOD path through SCRUNCHER; there
should be a generic path, but its design would require some feedback
from the next customer wanting the service.

Questions to be answered:

   -> support symmetric/asymmetric output, or asymmetric only?
   -> read control options from UREAD or from the MDSplus tree?
   -> if read from UREAD: generate comments recording what was done
      in the MDSplus tree?
   -> where in the tree to write the MDSplus results (and comments)?
   -> if tree nodes do not exist: create them, or exit with an error?

Here are the questions that the current SCRUNCHER program (generic
MDSplus path as developed through 3/19/2001) needs answered (from
UREAD at the moment), assuming reasonable defaults on unessential
options:

   input data server, tree, shot no.
   no. of moments in the boundary representation
   up-down symmetric or asymmetric:
      asymmetric -> 1 3d signal
      symmetric  -> 2 1d signals & 2 2d signals
      (might want to require asymmetric: the more general choice)
   output data server, tree, shot no., path
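For orientation on the symmetric/asymmetric signal counts above:
SCRUNCHER represents the plasma boundary as Fourier moments in a
poloidal angle.  The sketch below assumes the usual cos/sin moment
convention (an assumption, not a statement of the actual SCRUNCHER
signal layout); for an up-down symmetric boundary the R-sine and
Z-cosine moments vanish, which is why the symmetric case needs fewer
signals:

   import numpy as np

   def boundary(theta, rmc, rms, ymc, yms):
       # Reconstruct R(theta), Z(theta) at one time slice from moment
       # arrays indexed by m = 0..nmom-1.  For an up-down symmetric
       # boundary, rms and ymc are identically zero.
       m = np.arange(len(rmc))[:, None]            # moment index column
       R = (rmc[:, None] * np.cos(m * theta)
            + rms[:, None] * np.sin(m * theta)).sum(axis=0)
       Z = (ymc[:, None] * np.cos(m * theta)
            + yms[:, None] * np.sin(m * theta)).sum(axis=0)
       return R, Z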