Responding PI: Stan Woosley
A. Understanding the Primary Goals and Objectives of the Application:
We seek to understand how supernovae work: how some stars die in the most energetic explosions in the universe. We will accomplish this goal by numerical simulation, chiefly in multiple dimensions, of the complex interplay of nuclear physics, particle physics, (magneto)hydrodynamics, turbulence, and the equation of state in these events. We also want to understand the origin of the elements, most of which have been synthesized in supernovae, and to match, with our simulations, the rich detail of supernova observations.
We will also provide the community with a refereed archive of standard nuclear reaction rates for astrophysical application in a computer-friendly format.
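The archive's exact format is still to be decided, but the seven-coefficient fitting form used by the standard REACLIB rate compilations illustrates what "computer-friendly" means in practice. A minimal sketch (the function name and sample coefficients below are hypothetical, not entries from the planned archive):

```python
import math

def reaclib_rate(a, t9):
    """Evaluate a REACLIB-style seven-parameter reaction-rate fit.

    a  -- sequence of seven fit coefficients a0..a6
    t9 -- temperature in units of 10^9 K
    Returns lambda(T9) = exp(a0 + a1/T9 + a2/T9^(1/3)
                             + a3*T9^(1/3) + a4*T9 + a5*T9^(5/3) + a6*ln T9).
    """
    exponents = (-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0, 5.0 / 3.0)
    s = a[0] + a[6] * math.log(t9)
    for coeff, p in zip(a[1:6], exponents):
        s += coeff * t9 ** p
    return math.exp(s)

# A temperature-independent rate: only a0 is nonzero, so lambda = e^a0.
print(reaclib_rate([1.0, 0, 0, 0, 0, 0, 0], 0.5))  # e^1 ~ 2.718
```

A machine-readable table of such coefficients, one line per reaction, is what allows stellar-evolution and nucleosynthesis codes to ingest the whole rate library automatically.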
A website for our program, still under vigorous development, is http://www.supersci.org
Major Project Milestones:
Year 1:
Year 2:
Year 3:
B. Understanding the Initial State of the Computational Application
For the core part of our project, we shall employ a suite of eight codes, several of which are already in a mature stage of development. These codes will calculate the presupernova evolution of stars, their explosion, and radiation transport in the resultant debris. In general, these codes operate sequentially: the output of one - e.g., the presupernova code - is used as input for another - e.g., the supernova explosion code - which in turn provides input to a radiation transport calculation. Some codes have duplicate functions to allow for cross comparison.
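The sequential coupling described above can be sketched as a simple staged pipeline; the stage names and placeholder outputs below are hypothetical stand-ins for the real codes:

```python
def run_pipeline(initial_model, stages):
    """Chain simulation stages: each stage consumes the previous stage's
    output and returns input for the next (presupernova -> explosion ->
    radiation transport, in our case)."""
    state = initial_model
    for name, stage in stages:
        state = stage(state)
        print(f"{name}: done")
    return state

# Hypothetical placeholder stages standing in for the real codes.
stages = [
    ("presupernova evolution", lambda m: {**m, "collapsed": True}),
    ("explosion",              lambda m: {**m, "energy_foe": 1.2}),
    ("radiation transport",    lambda m: {**m, "light_curve": [0.1, 0.9, 0.4]}),
]
result = run_pipeline({"mass_msun": 15.0}, stages)
```

In practice each stage is a separate code whose output files are post-processed into the next code's input deck, but the data-flow structure is the same.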
In response to specific questions in this section:
1. The hydro codes to be studied and optimized run as one large system.
2. Our approach is to optimize the code using analytical and/or simulation models of the code modules; these models analyze execution at run time and enable us to reconfigure the parallel execution to optimize its performance.
3. We will use a combination of functional-parallel and data-parallel programming models.
4. We use our own monitoring codes as well as other available tools such as NetLogger, the Network Weather Service (NWS), etc.
5. The FLASH code runs on the ASCI machines. The UCSC codes run on a combination of ASCI machines and local Beowulf clusters of 16 and 32 CPUs (AMD, 1.2 GHz). In fall 2001, the UCSC codes will move to a new 256-CPU (AMD, 1.2 GHz) Beowulf cluster on which the co-Is will have 25% of the time. At LANL, the codes run on AVALON, a local Beowulf cluster developed by collaborator Mike Warren. At Arizona, they will initially use an SGI Origin and a cluster of workstations connected by a high-speed network (gigabit Ethernet). Beginning this fall we expect to be major users at NERSC.
6. Yes, we need wide-area network access in order to run our codes on the IBM SP (seaborg), the T3E, and the ASCI machines.
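The run-time reconfiguration approach in item 2 above can be sketched as follows: given an analytical performance model for each module, choose a processor allocation that minimizes predicted runtime. The Amdahl-style model, module names, serial fractions, and greedy strategy here are illustrative assumptions, not our actual run-time system:

```python
def predict_time(serial_frac, work, nprocs):
    """Amdahl-style model: serial part plus parallel part split across procs."""
    return work * (serial_frac + (1.0 - serial_frac) / nprocs)

def best_allocation(modules, total_procs):
    """Greedily give each extra processor to the module whose predicted
    runtime improves most (every module starts with one processor)."""
    alloc = {name: 1 for name in modules}
    for _ in range(total_procs - len(modules)):
        def gain(name):
            f, w = modules[name]
            return (predict_time(f, w, alloc[name])
                    - predict_time(f, w, alloc[name] + 1))
        winner = max(alloc, key=gain)
        alloc[winner] += 1
    return alloc

# Hypothetical modules: (serial fraction, measured single-processor cost).
modules = {"hydro": (0.05, 100.0), "burn": (0.10, 60.0), "transport": (0.02, 200.0)}
print(best_allocation(modules, 16))
```

A real run-time system would refit the model parameters from monitoring data as the simulation evolves and re-solve the allocation problem on the fly.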
Part C: Understanding Your Code Development Plans for Meeting Your Objectives
1. Hydro codes: ZEUS-MP, ASCI FLASH, an anelastic MHD code, a 2D/3D radiation-hydro code using the method of short characteristics, an implicit Monte Carlo code, a 3D SPH code, and ZATHRAS. Revisions to be made: the codes will run faster using our optimization techniques; we will develop models that enable us to optimize module execution at run time; and we will develop a DEVS-based run-time system to assist in controlling, managing, and visualizing the results produced by our simulations.
2. Yes, we expect to face multilanguage integration issues. a. The languages will be C, Fortran 77, Fortran 90, C++, and Java. b. We will use a hybrid programming model (data parallel and functional parallel).
3. We are not yet certain which numerical algorithms each module will need.
4. We have our own programming environments that we have developed, such as ADViCE, CATALINA, and DEVS. In addition, we will use other tools and programming environments as needed.
5. Small jobs will run on local Beowulf clusters, several of which we have access to. These typically use AMD processors and the PGI compiler. But the core of our research requires running on the largest computers available. We have requested time at NERSC and may continue to have access to all the unclassified ASCI machines.
6. Network speeds at our four sites vary from about 50 to 100 Mbps.
7. Arizona: ECE Building (ECE 224), Astronomy Department; UCSC Interdisciplinary Science Building.
8. We have a strong requirement for visualization. For steering and controlling the visualization, we will use mobile agents being developed at UofA and also the DEVS real-time environment.
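The hybrid programming model mentioned in item 2 above can be illustrated in miniature (in Python rather than the production languages): treating every (module, block) pair as an independent task gives functional parallelism across modules and data parallelism across domain blocks. The kernels below are hypothetical stand-ins for real physics modules:

```python
from concurrent.futures import ThreadPoolExecutor

def hydro_kernel(block):
    return [2.0 * x for x in block]   # hypothetical hydro update on one block

def eos_kernel(block):
    return [x + 1.0 for x in block]   # hypothetical equation-of-state evaluation

blocks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
modules = {"hydro": hydro_kernel, "eos": eos_kernel}

# Every (module, block) pair is an independent task: the module axis gives
# functional parallelism, the block axis gives data parallelism.
with ThreadPoolExecutor(max_workers=6) as pool:
    futures = {(name, i): pool.submit(kernel, blk)
               for name, kernel in modules.items()
               for i, blk in enumerate(blocks)}
    results = {key: f.result() for key, f in futures.items()}

print(results[("hydro", 0)])  # [2.0, 4.0]
```

In the production codes the same decomposition would be expressed with MPI ranks and communicators rather than a thread pool, but the two axes of parallelism are the same.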
Part D: Identifying the Most Critical Applied Math and Computer Science Needs for Meeting Your Key Objectives
We will interact with the following ISIC centers:
1. Solvers - we are interested in reactive nuclear chemistry and in solving elliptic and hyperbolic equations, including the Poisson equation, especially in the context of general relativity.
2. Locally Structured Grid Methods - we are very interested in combustion physics; nuclear and chemical combustion have many similarities, especially in flame propagation in the presence of turbulence, distributed burning, transition to detonation, etc.
3. Terascale Simulation Tools and Technologies - most of our codes are grid based; only one uses adaptive mesh refinement at present.
4. Performance - the design of efficient algorithms is also one of our core mission goals.
5. Common Component Architecture - our codes are mostly Fortran, C, and C++ (the LANL codes are chiefly C and C++, the others Fortran 77 and Fortran 90).
6. Scalable Systems Software - we expect our programs to run efficiently on a variety of parallel platforms from small Beowulfs to the largest ASCI machines.
7. Data Management - we will have to manage very large data sets generated by the 3D calculations.
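For item 1 above, a minimal example of the kind of elliptic solve involved: Jacobi iteration on a 1-D Poisson problem. This is a toy sketch, far simpler than the relativistic solvers we actually need, but it shows the basic relaxation structure:

```python
def jacobi_poisson_1d(rho, h, boundary=(0.0, 0.0), iters=5000):
    """Solve u'' = -rho on a 1-D grid with fixed (Dirichlet) boundary values
    by Jacobi iteration: each interior point becomes the average of its
    neighbours plus the local source contribution."""
    n = len(rho)
    u = [0.0] * n
    u[0], u[-1] = boundary
    for _ in range(iters):
        u = ([u[0]]
             + [0.5 * (u[i - 1] + u[i + 1] + h * h * rho[i])
                for i in range(1, n - 1)]
             + [u[-1]])
    return u

# Uniform source rho = 2 on [0, 1] with u(0) = u(1) = 0 has the exact
# solution u(x) = x (1 - x).
n = 21
h = 1.0 / (n - 1)
u = jacobi_poisson_1d([2.0] * n, h)
print(u[10])  # midpoint, should approach 0.25
```

Production solvers replace Jacobi with multigrid or Krylov methods, which is precisely the expertise we hope to draw from the solver ISIC.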
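For item 2 above, a toy model of flame propagation: the Fisher-KPP reaction-diffusion equation develops a burning front that advances at a steady speed, a much-simplified analogue of the nuclear flames of interest (all parameters below are illustrative):

```python
def step(Y, dx, dt, D, rate):
    """One explicit step of dY/dt = D Y'' + rate * Y (1 - Y):
    diffusion spreads heat, the nonlinear term burns fuel
    (Y is the burned mass fraction, 0 = fuel, 1 = ash)."""
    n = len(Y)
    new = Y[:]
    for i in range(1, n - 1):
        lap = (Y[i - 1] - 2 * Y[i] + Y[i + 1]) / dx ** 2
        new[i] = Y[i] + dt * (D * lap + rate * Y[i] * (1 - Y[i]))
    new[0], new[-1] = new[1], new[-2]  # zero-gradient boundaries
    return new

# Ignite the left edge and watch the front move right.
n, dx, dt = 200, 0.1, 0.002
Y = [1.0] * 10 + [0.0] * (n - 10)
for _ in range(2000):
    Y = step(Y, dx, dt, D=1.0, rate=1.0)
front = next(i for i, y in enumerate(Y) if y < 0.5)
print(front)  # front has advanced well into the fuel
```

The real problems add turbulence, degenerate equations of state, and stiff nuclear networks, but the competition between transport and burning seen here is the core of the flame-propagation physics.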
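For item 7 above, one standard tactic for data sets too large for memory is slab-by-slab (out-of-core) I/O, sketched here with a NumPy memory map; the file name and fill values are hypothetical:

```python
import os
import tempfile
import numpy as np

def write_slabs(path, shape, dtype=np.float32):
    """Write a 3-D array one z-slab at a time, so the whole cube never
    has to fit in memory (slabs here hold a hypothetical field)."""
    cube = np.lib.format.open_memmap(path, mode="w+", dtype=dtype, shape=shape)
    for k in range(shape[0]):
        cube[k] = np.full(shape[1:], float(k), dtype=dtype)  # one slab at a time
    cube.flush()

path = os.path.join(tempfile.mkdtemp(), "density.npy")
write_slabs(path, (8, 16, 16))
slab = np.load(path, mmap_mode="r")[3]   # read back a single slab on demand
print(float(slab.mean()))                # 3.0
```

The 3D production runs will use parallel I/O libraries and far larger grids, but the pattern - stream slabs to disk during the run, then read back only the pieces a post-processing or visualization tool needs - is the same.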