From ggraham Fri Oct 29 14:54:46 1999
To: d0-pmcs@fnal.gov

Summary of PMCS part of Algorithms Meeting, 10/29/99

An agenda was presented for discussion which included the following points :

- We want to form a (wider) group of developers
  A. People to provide development advice
  B. People to provide code
     ( Currently from FMC++ : Roger Moore, Rick Genik; we would like input
       from CMS : Chip Brock; ID groups : Serban, John Womersley, Dave Adams,
       John Hobbs )

- A series of regular meetings will be scheduled
  A. Focussed PMCS working group (off-week Mondays)
  B. Regular progress reports in Algorithms and MC meetings
  C. Regular summaries (like this) to d0-pmcs@fnal.gov

- Greg will produce a draft technical specifications and requirements document
  that the PMCS working group will chew over at the bi-weekly meetings and in
  electronic discussion.
  A. Come to a consensus on requirements and major implementation issues.
  B. First draft by the first meeting, 11/1
  C. Milestones to be discussed

We then went into these points in more detail, and a broader discussion arose.

- Do we take the existing pmcs (essentially as is) and simply break it up into
  "cvs packages" to aid development, or do we break pmcs up further into
  "framework packages"? Pros and cons were discussed at length. The "cvs
  packages" option keeps the code tightly coupled, which may give the greatest
  control over timing and modelling, but the "framework packages" option has
  distinct advantages in flexibility and maintainability. Since the modelling
  problem has already been solved by D0RECO, *that* was not considered an
  issue. John Womersley pointed out that it would be nice to know for sure
  whether timing is an issue. Greg will take the existing pmcs, break it up
  into framework packages this weekend (somehow), and try to measure the
  performance. ( I do not consider this test a measure of the final product's
  timing, but we should do something to measure the timing of the "framework"
  option short of building the finished product. ) We hope to show that timing
  is not an issue, and Greg hopes to earn his green belt in "Chunk Kung-Fu"
  this weekend.

- The class structure of the pmcs physics objects was presented. Dave Adams
  and Serban offered advice on how to refactor the classes to make the design
  more flexible. Namely, it was argued that the algorithms that do the
  smearing should be divorced from the data in the smeared classes. Greg
  argued that this was done in spirit if not in code, but agrees that a
  refactored class design could better enforce this. This will also become
  more important in a multi-developer environment.

- Milestones : We would like to set two major milestones at this time.
  A. Documents : A draft document of technical specifications and requirements
     will be presented on Monday to the pmcs working group. We hope to chew on
     this document through two meeting cycles and produce a finished document
     for D0 by the week of 11/29/99.
  B. We would like to have a "production release" in the spirit of d0gstar and
     RECO to coincide with MCC Phase III. We tentatively set a date of April
     2000 for this milestone, to coincide roughly with RECO.
  C. Intermediate milestones will be debated during the PMCS working group
     meetings.
  Harry Melanson and John Hobbs pointed out that whatever product is released
  in April, we should take care that it is extendable and maintainable. We
  should have a design which does not paint PMCS into a corner, i.e. one that
  does not preclude other tasks.
- Conclusion : The most technical details of implementation will be further
  debated at the Monday PMCS working group meeting.

Further comments : A meeting time has been fixed for the 9th circle, Mondays
1-3 PM bi-weekly, starting 11/1/99. ( I tried for mornings, but both video
rooms were booked. )

Thank you again for your comments.

-Greg Graham

PS - If I have left out anything very important or misrepresented your
comments, please drop me a line. (I will collate and send out a correction.)

******************************************************************************
Greg Graham          ggraham@fnal.gov          University of Maryland
DZero Collaboration  630-840-2321
******************************************************************************

From ggraham Fri Nov 5 07:12:34 1999
To: d0-pmcs@fnal.gov

Summary (belated) of the Monday, 11/2/99 PMCS Working Group Meeting

Greg apologized for slow progress over the weekend; more power outages were to
blame.

Greg reiterated a call for requirements and users of PMCS. We would like to
have a better feeling for what D0 expects from PMCS so as to better engineer
it to meet those needs : who are the interested users, what are their speed
and output requirements, etc. FMC++ is an interested user and wants to achieve
at least 8 events per second with smeared physics objects and generator level
objects in the output. Gustaaf suggested looking at the capabilities of the
Run I fast monte carlos as a good place to get further information. The brief
conversation with Chip Brock about CMS was also very helpful, and I would love
even more input on that.

Greg also agrees that framework packages are more flexible. The need for a
timing test of a framework package based PMCS was reiterated. Greg is working
on that this week. Unless the outcome of this test is negative, we will move
to a framework package based implementation of PMCS. This allows for the
greatest flexibility in development.

The issue of correlations among PMCS smeared output objects was raised. We
think we can deal with this issue using LinkIndices between chunks; obviously
great care must be given to this issue during development. It was reiterated
that PMCS output should include standard D0 chunks for inter-operability with
existing D0 analysis code and graphics viewers.

The pmcsInputChunk was introduced (blame me) as a way to present information
to subsequent pmcs smearing packages. The pmcsInputChunk would also hold
generator level derived quantities like jets and recoil, and it would present
lists of generator level objects like tracks, electrons, muons, and photons
gleaned from the input MCKineChunk, so that subsequent packages do not have to
loop over the entire MCKineChunk to get the same information. (This can be
extended to the current minimum bias overlay scheme as well, which adds other
MCKineChunks to the in-time bucket.) Much interesting discussion this week has
already focussed on the pmcsInputChunk; thank you very much.

A possible breakdown of framework packages has been discussed. These might
include (but would not be limited to) a pmcs_tracks package, pmcs_em,
pmcs_muon, pmcs_jets, and pmcs_toycal. Please feel free to shout out your
suggestions. Two special pmcs packages are envisioned : a pmcs_setup package
which would be in charge of filling the pmcsInputChunk and inserting it into
the event, and a pmcs_plotter package to make standard ntuples and/or root
trees. ( A rough, purely illustrative sketch of the pmcsInputChunk idea in
code follows below. )
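Purely for illustration, here is a minimal, standalone C++ sketch of the kind
of interface such an input holder might present to the downstream smearing
packages. None of these names (GenParticle, PmcsInputHolder, genElectrons(),
PmcsSetup) are real D0 classes; the real pmcsInputChunk would be built on the
EDM chunk machinery and filled by pmcs_setup from the MCKineChunk(s). The
sketch is only meant to make the "fill once, read many times" idea concrete.

// Illustrative standalone C++ only -- NOT the actual D0 EDM interface.
#include <vector>

struct GenParticle {          // minimal stand-in for a generator-level particle
  int    pdgId;
  double px, py, pz, E;
};

struct GenJet {               // generator-level jet built from stable particles
  double px, py, pz, E;
};

class PmcsInputHolder {
public:
  // Accessors: downstream smearing packages read these pre-sorted lists
  // instead of looping over the full generator record themselves.
  const std::vector<GenParticle>& genElectrons() const { return electrons_; }
  const std::vector<GenParticle>& genMuons()     const { return muons_; }
  const std::vector<GenParticle>& genPhotons()   const { return photons_; }
  const std::vector<GenParticle>& genTracks()    const { return tracks_; }
  const std::vector<GenJet>&      genJets()      const { return jets_; }

  double missingEx() const { return mex_; }     // generator-level missing ET (x)
  double missingEy() const { return mey_; }     // generator-level missing ET (y)
  double recoilPt()  const { return recoil_; }  // 2-lepton recoil

private:
  // Filled once per event by the setup step.
  std::vector<GenParticle> electrons_, muons_, photons_, tracks_;
  std::vector<GenJet>      jets_;
  double mex_ = 0.0, mey_ = 0.0, recoil_ = 0.0;

  friend class PmcsSetup;   // hypothetical filler, in the spirit of pmcs_setup
};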
Milestones were discussed, but no serious decisions were made. We would like
to have opinions on the milestone that I proposed of April '00 (tied to
MCC99-3 progress) for having a working production version for the
collaboration. Roger made it known that he would like to see a working product
somewhat earlier than that. I think this is certainly possible, given that
what we have now is fairly close (except for tuning the smearing routines).

Finally, moving the meeting time was discussed. An early time is favored to
accommodate our esteemed French colleagues - they would be crazy to skip a
good French dinner for a meeting, and we don't want to put them in the
position of having to choose. Therefore, Wednesday mornings at 9 AM is
proposed.

Thanks, and I'm sorry this was late -

Greg

******************************************************************************
Greg Graham          ggraham@fnal.gov          University of Maryland
DZero Collaboration  630-840-2321
******************************************************************************

From ggraham Fri Nov 19 07:46:30 1999
To: d0-pmcs@fnal.gov
Subject: PMCS Meeting Minutes, 11/18/99

PMCS Working Group - 11/17/99
Wednesday, 9 AM
The Far Circle

(I) Greg mentioned the following news items :

1) A working draft of the PMCS specifications is on the web at
   www-d0.fnal.gov/~ggraham/pmcs/specs.ps. This is based on an earlier
   document and is intended to bring more of the technical details to light.
2) A fast CPS simulation has been written by Phil Baringer and his student
   Patricia Wagner. The simulation does electrons and essentially outputs a
   CPSDigiChunk. We feel that this can be incorporated as a package in pmcs.
3) The timing test is underway.
4) Development of the framework option for PMCS is underway. ( Point of
   order : in order to do the timing test, a small amount of work had to be
   done to code up the framework option. If the test proves that it is too
   slow, we will of course go back. )
   a) Development is now taking place in t00.66.00; new RCPs implemented.
   b) We have a pmcsInputChunk.
   c) We have a pmcs_setup package to fill the pmcsInputChunk.
   d) We have a pmcs_ChTrk package to do tracking.

(II) The pmcsInputChunk contains

1) Link Indices to StableParticles, VisibleParticles, ChargedParticles,
   Electrons, Muons, Photons.
2) std::list of generator level Jets
3) list of primary vertices (for each interaction - I think I glossed over
   this at the meeting)
4) MissingET (x, y, and scalar)
5) 2-lepton recoil

These quantities are filled by the pmcs_setup package and are available
through accessors for later processing by PMCS packages. Point of order :
Serban objected to the use of Link Indices, and is quite right that pointers
would be fine here. Point taken. ( The real reason I did it was to learn how
to use link indices. They're cool. )

(III) Architecture of PMCS

( Diagram - I hope the spaces come out )

     MCKineChunk (signal)      MCKineChunk (min bias)
              |                           |
              +------------+--------------+
                           |
                          \|/
                  pmcs_setup (package)
                           |
                          \|/
                    pmcsInputChunk
                           |
          +----------------+------------------+
          |                |                  |
         \|/              \|/                \|/
   (low level branch) (mid level branch) (fast branch)
   packages to make                      packages to make
   low level chunks                      high level objects
   for further                           directly
   processing

In the document, Greg had proposed two branches based on which kinds of chunks
were being made. Sarah analyzed Serban's diagram of RECO and came up with a
better factorization of the problem requiring three branches.
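To make the flow in the diagram above concrete, here is a rough, standalone
C++ sketch of the kind of thing one downstream smearing package might do with
the input summary. As before, the names (smearEM, GenParticle, SmearedEM) are
hypothetical, the Gaussian energy smearing and its resolution numbers are
invented for the sketch, and none of this is the actual pmcs parameterization
or the D0 framework package interface; in the real packages the resolution
parameters would come from RCP.

// Illustrative standalone C++ only -- hypothetical names and parameterization.
#include <cmath>
#include <random>
#include <vector>

struct GenParticle { double px, py, pz, E; };
struct SmearedEM   { double px, py, pz, E; };  // stand-in for an EMParticleChunk entry

// In the spirit of a pmcs_em package: take the generator electrons/photons
// collected by the setup step and apply a simple Gaussian energy smearing.
std::vector<SmearedEM> smearEM(const std::vector<GenParticle>& gen,
                               std::mt19937& rng)
{
  const double stochastic = 0.15;   // illustrative "15%/sqrt(E)" term
  const double constant   = 0.01;   // illustrative constant term

  std::vector<SmearedEM> out;
  for (const GenParticle& p : gen) {
    if (p.E <= 0.0) continue;                        // skip unphysical entries
    double sigma = p.E * std::sqrt(stochastic * stochastic / p.E +
                                   constant * constant);
    std::normal_distribution<double> gauss(p.E, sigma);
    double Esm   = gauss(rng);
    double scale = Esm / p.E;                        // rescale the 4-vector
    out.push_back({p.px * scale, p.py * scale, p.pz * scale, Esm});
  }
  return out;
}

A pmcs_jets or pmcs_muon package would follow the same pattern with its own
parameterization and its own output chunk.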
The "Baby Bear" branch is essentially the same, and has

FAST BRANCH
-----------
EMParticleChunk
JetChunk
TauChunk
MuonChunk
ChargedParticleChunk
VertexCollChunk
MissingET

The "Mama Bear" branch contains some fast objects and some lower level ones :

MEDIUM BRANCH
-------------
MuonChunk
ChargedParticleChunk
VertexCollChunk
CPSClusterChunk
FPSClusterChunk
CalDataChunk
( which would then be input to reco to make the rest of the high level
  chunks )

The "Papa Bear" branch contains mostly low level objects.

LOW LEVEL BRANCH   ( OK - the "Slow Branch" -
----------------     I just can't bring myself to say it... )
MuonChunk
CPSClusterChunk
FPSClusterChunk
CalDataChunk
CFTClusterChunk
SMTGlblBCollectChunk
SMTGlblDCollectChunk
( which would then be input to reco to make GTrackChunk and VertexCollChunk,
  bringing us to the Medium Branch point. )

It wasn't stated ( I believe ) at the meeting, but I think our priority is to
work on the FAST BRANCH first.

(IV) Discussion

There was already much discussion up to this point. Further points :

Amber pointed out that this is already very complex, and worried that people
would not be able to use it. The multiplicity of packages could involve
interdependent sets of RCP files. Since we have a similar problem in other
monte carlo generation, which has been alleviated there by a graphical user
interface, Greg offered to write a GUI to control PMCS. The GUI could take
care of modifying many RCP files synchronously. Amber thought that at the
appropriate time (soon) it might be a good idea to seek advice from Jim
Kowalkowski. Greg also thought that a review committee after that would be
appropriate. Perhaps Jim could also be on the review committee.

(V) As usual, if I left anything out or if I got it wrong, feel free to yell
at me via Email (ggraham@fnal.gov) or listserv (d0-pmcs@fnal.gov) - but you
can't yell at me in person this week, I'm still in Maryland. Have a nice
weekend.

Once again, thanks to everyone who attended and is watching the Email on this
mail list. Your input is sincerely appreciated, and we seek this advice in
order to make a PMCS that will be as widely useful as possible.

-Greg

******************************************************************************
Greg Graham          ggraham@fnal.gov          University of Maryland
DZero Collaboration  630-840-2321
******************************************************************************

From ggraham Fri Dec 10 15:05:58 1999
To: d0-pmcs@fnal.gov

PMCS meeting Summary 12/1/99

(1) Timing tests.

Greg evaluated the time to process an event in each of two versions of pmcs.
The first version is the old monolithic implementation of pmcs (t59) and the
second is the newer implementation in framework packages (t66). The
capabilities of each version include (g = generator level, s = smeared level) :

-------------------------------------------------------------------
Capabilities     t59     t66
-------------------------------------------------------------------
tracks           g+s     g+s     |eta|<4.2, pT>50 MeV
EM               g+s     g+s     |eta|<2.5, Et>10 GeV
jets             g+s     g+s
muons            g       g
missing ET       g+s     g
Recoil           g+s     g
Vertices         g       g
-------------------------------------------------------------------

No cone isolation was calculated. This test was done on d0mino. ( Previous
benchmark : 0.1667 sec/event ttbar on d02ka. ) Both versions make HBOOK
ntuples, and the t66 version implements three full framework packages for
tracks, EM, and jets. Each package inserts a chunk into the event, so this is
a three-chunk test of pmcs. The test used the SpeedShop profiler with a 30
msec sampling interval.
                             t59          t66
-------------------------------------------------------------------
200 ttbar events    1       22.05 s      22.59 s
                    2       22.23 s
                             0.11 s/ev    0.11 s/ev

The difference in timing over 200 events (given the 30 msec sampling interval)
is 0.24 s <= delta t <= 0.84 s. At worst, this indicates an average increase
of 4 msec per event from t59 to t66.

In order to make sure that the timing tests were doing the same thing, I
examined a butterfly listing of the two profiles and picked out the
subroutines with the highest inclusive times.

                  inclusive time in seconds
---------------------------------------------------------------------
  t59     Subroutine Name          t66
 13.68    MCconeAlgo:MakeClus     13.47
  4.83    MCconeAlgo:getItems      4.98
  3.72    d0om_ds Make             4.60   (~4.5 msec/event delta!)
  2.28    stdlist::insert          2.52
  2.52    L4vec::eta               1.98
  1.68    making histograms        1.74
  1.11    stdvec::getnode          1.11
  1.32    new(uint)                1.29
  0.81    L4vec::phi               1.08

The d0om_ds::make subroutine increases by almost a second from t59 to t66,
perhaps a reflection of the extra chunk lookup done in t66. Greg felt that
this was good news - the roughly 4 msec per event delta between t59 and t66
is much smaller than the gains demonstrated by making more intelligent choices
with fiducial cuts, for instance. Gustaaf pointed out that the delta will go
up if there are more chunks, especially if we use index chunks. Greg plans to
include an option for the fast track to write and read information from the
pmcsInputChunk and pmcsOutputChunk only, to remove the overhead from the chunk
lookup. ( Amber and Greg had a discussion offline exploring different ways to
do this... Thanks Amber ! )

(2) Greg plans to break up pmcs in the following way :

pmcsInputChunk                  pmcs_setup

Fast Track
----------
EMparticleChunk                 pmcs_em
JetChunk                        pmcs_jet
TauChunk                        pmcs_tau
MuonChunk                       pmcs_muon
ChargedParticleChunk            pmcs_chprt
VertexCollChunk                 pmcs_vtx

Medium Track
------------
CPSclusterChunk                 pmcs_cps
FPSclusterChunk                 pmcs_fps
CalDataChunk                    pmcs_calt (Towers)

Slow Track
----------
CalDataChunk (cells, maybe)     pmcs_calc
CFTClusterChunk                 pmcs_cft
SMTGlblBColl                    pmcs_smtb
SMTGlblDColl                    pmcs_smtd

Everyone agreed harmoniously that the focus should be on implementing the fast
track packages first. However, the pmcs effort should not turn away volunteers
who want to implement various pieces in which they have special interest or
expertise, even if those pieces are not fast track.

(3) Sarah expressed a desire to see a root-based facility for tuning the pmcs
parameters automatically given some input. The input will be reco'd MCC99
events to start with, but we should plan on doing this with real data as well.
This is a really desirable tool to have on hand when the data starts rolling
in.

(4) I apologize for being ten days late ( I got the flu pretty badly last
week... ). In the meantime, we have gotten ten cvs packages from Alan and Paul
for pmcs. Greg is porting code into the new packages and continuing testing.
At the same time, Greg is implementing some ways to have the different
packages communicate with the pmcsInputChunk and pmcsOutputChunk outside of
the edm, to avoid the timing problems associated with lots of chunks (see (1)
above).

Hasta,

Greg

******************************************************************************
Greg Graham          ggraham@fnal.gov          University of Maryland
DZero Collaboration  630-840-2321
******************************************************************************