NASA Logo, National Aeronautics and Space Administration
Computational Technologies Project

SCIENCE TEAM I ABSTRACTS

Title: Understanding Solar Activity and Heliospheric Dynamics: A Grand Challenge for HPCC

Principal Investigator:

John H. Gardner, Naval Research Laboratory

Co-Investigators:

Spiro Antiochos, Naval Research Laboratory

C. Richard DeVore, Naval Research Laboratory

Jay Boris, Naval Research Laboratory

Judith Karpen, Naval Research Laboratory

Russell Dahlburg, Naval Research Laboratory

Lee Phillips, Naval Research Laboratory

Joseph Davila, NASA/Goddard Space Flight Center

Abstract:

We propose a three-year program, combining our expertise in algorithm development for parallel computer architectures with our extensive experience in solar and heliospheric modeling, to provide NASA with advanced numerical modeling capabilities for the HPCC program and Grand Challenge applications. Our initial efforts will emphasize conversion of proven algorithms to parallel architectures in order to provide NASA as rapidly as possible with scaling information for future machines. In the second and third years we will employ the knowledge gained in exploiting the new architectures to realize the overall goal of eventual teraflop performance. Throughout the program, by applying our codes to well-defined, tractable problems, we will emphasize obtaining results that can be compared with data from ongoing and future NASA experiments, including YOHKOH, HRTS, and SOHO.


Title: Large Scale Structure and Galaxy Formation

Principal Investigator:

George Lake, University of Washington

Co-Investigators:

Loyce Adams, University of Washington

Craig Hogan, University of Washington

Richard Anderson, University of Washington

Edward Lazowska, University of Washington

James Bardeen, University of Washington

Wesley Petersen, Eidgenössische Technische Hochschule, Zürich

Raymond Carlberg, University of Toronto

Lawrence Snyder, University of Washington

Abstract:

We propose to use NASA's Great Observatories to test the "standard model" for the origin of galaxies and large-scale structure by accurately evolving it into its present highly nonlinear state. Our multidisciplinary team will develop the tools needed for high performance N-body simulations. We will model the evolution of the intergalactic medium and the distribution of mass and luminosity. Our work is aimed at the broad advancement of parallel simulation by coordinated work in physical and computational science. By using methods that promote "portable parallelism" (developed by our group), we will build tools that are useful for both astrophysical simulation and the evaluation of testbeds. We describe plans to use testbeds that are not part of NASA's first generation. To ensure rapid scientific advances, codes and data products will be released rapidly and supported through a Participating Scientist Program.


Title: Development of an Earth System Model: Atmosphere/Ocean Dynamics and Tracers Chemistry

Principal Investigator:

Roberto Mechoso, University of California, Los Angeles

Co-Investigators:

Akio Arakawa, University of California, Los Angeles

George Philander, Princeton University

James Demmel, University of California, Berkeley

Richard Turco, University of California, Los Angeles

Jeff Dozier, University of California, Santa Barbara

Michael Stonebraker, University of California, Berkeley

David Halpern, Jet Propulsion Laboratory

Donald Wuebbles, Lawrence Livermore National Laboratory

Abstract:

This project aims to develop a model of the coupled global atmosphere-global ocean system, including chemical tracers that are found in, and may be exchanged between the atmosphere and the oceans. We will use the model to study the general circulation of the coupled atmosphere-ocean system, the global geochemical carbon cycle, and the global chemistry of the troposphere and stratosphere.

The proposed model development is based on three major components: 1) The UCLA atmospheric general circulation model (GCM), 2) the GFDL/Princeton University oceanic GCM, and 3) the NASA Ames/UCLA chemical tracer model, including heterogeneous (surface) chemical processes. We propose to combine these components to produce a model with the following characteristics: a) a highly modular structure suitable for massively parallel computer environments, b) an optimized code, and c) a configuration that can be distributed among homogeneous and heterogeneous computer environments.


Title: Data Analysis and Knowledge Discovery in Geophysical Databases

Principal Investigator:

Richard Muntz, University of California, Los Angeles

Co-Investigators:

Leon Alkalaj, Jet Propulsion Laboratory

Josef Skrzypek, University of California, Los Angeles

Daniel McCleese, Jet Propulsion Laboratory

Carlo Zaniolo, University of California, Los Angeles

Roberto Mechoso, University of California, Los Angeles

Abstract:

This research will demonstrate the applicability of information systems for geophysical databases to support cooperative research in earth-science projects. The testbed includes atmospheric model data, satellite stratospheric data, and climate data. Characteristic of these applications is the identification and monitoring of complex patterns, and their evolution over space and time. We will study novel indexing and abstraction techniques for efficient search and monitoring of massive data sets maintained on tertiary storage, and the optimization of complex spatial-temporal queries and rules. An implementation based on a mass-storage system and an intelligent front-end is planned. Supercomputer testbeds will be used for parallel search and computation-intensive functions.


Title: High Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems

Principal Investigator:

Richard B. Rood, NASA/Goddard Space Flight Center

Co-Investigators:

Stephen Cohn, NASA/Goddard Space Flight Center

Peter Lyster, Jet Propulsion Laboratory

Anne Douglass, NASA/Goddard Space Flight Center

Kim Mills, Syracuse University

Geoffrey Fox, Syracuse University

Mark Schoeberl, NASA/Goddard Space Flight Center

Leslie Lait, NASA/Goddard Space Flight Center

Jose Zero, NASA/Goddard Space Flight Center

Abstract:

The proposal focuses on the grand challenge of four-dimensional data assimilation to produce research-quality data sets for Earth Science studies. This involves the collection of diverse Earth observational data sets, and the incorporation of these data into models of the ocean, land surface, and atmosphere, including chemical processes. Ultimately the goal of data assimilation is the calculation of consistent, uniform, spatial and temporal representations of the Earth environment that can be used for scientific analysis and synthesis.


Title: Convective Turbulence and Mixing in Astrophysics

Principal Investigator:

Robert Rosner, University of Chicago

Co-Investigators:

N. Brummell, University of Colorado

D. Grunwald, University of Colorado

R. Stein, Michigan State University

F. Cattaneo, University of Chicago

A. Malagoli, University of Chicago

R. Stevens, Argonne National Laboratory

T. Dupont, University of Chicago

O. McBryan, University of Colorado

J. Toomre, University of Colorado

P. Fox, NCAR/HAO

L. Ni, Michigan State University

J. Truran, University of Chicago

Abstract:

Turbulent convection plays a central role in many astrophysical circumstances, ranging from the transport of energy from the stellar and planetary interiors to their surfaces, to the mixing of specific angular momentum and dynamo activity in stars, planets, and possibly accretion disks, to finally the highly dynamic mixing encountered in novae and supernovae. We propose to develop the next generation of multi-dimensional hydrodynamic codes to attack these problems, based on the use of massively parallel machines, establishing testbeds and scalable codes from several approaches. These simulations are unique in their simultaneous demands for computing power, data handling capabilities, and scientific visualization.


Title: Development of Algorithms for Climate Models Scalable to TeraFLOP Performance

Principal Investigator:

Max J. Suarez, NASA/Goddard Space Flight Center

Co-Investigators:

Anthony Kowalski, NASA/Goddard Space Flight Center

Paul Schopf, NASA/Goddard Space Flight Center

Abstract:

It is proposed to develop a high-resolution global climate model capable of centuries-long calculations on massively parallel machines at teraFLOP speed. Preliminary estimates indicate that a model fast enough for such lengthy integrations would have sufficient data/functional parallelism to achieve this goal. Taking advantage of the parallelism, however, will require the development of architecture-dependent computational algorithms.

We propose to consider a series of prototype problems to provide a suite of benchmarks in climate modeling that can be used through the architectural developments leading to the teraFLOP machine, and which will lead to the eventual implementation of an entire climate or general circulation model (GCM). This approach will allow us to begin to learn and evaluate the testbed architectures immediately. The problems are chosen to be representative of the computational difficulties that will be encountered in implementing the formulations typically used in climate models on massively parallel machines. Algorithms to deal with these problems on the testbed architectures will be developed, implemented, and tested with a view to their scalability to teraFLOP speeds. The proposed prototype problems include: 1) the implementation of high-resolution grid point ocean dynamics on irregular basins; 2) the load balancing of physical parameterizations requiring various degrees of conditional execution; 3) the possibilities for high-level functional decomposition; 4) the reorganization of the data to make use of different domain decomposition strategies in different modules of the model.
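Prototype problem 2 above, load balancing of physical parameterizations with conditional execution, can be illustrated with a simple greedy longest-processing-time heuristic: columns with unequal physics cost are assigned, heaviest first, to the currently least-loaded processor. This is an assumption made for illustration only, not the scheme the proposal will develop; the cost values and processor count are hypothetical.

```python
import heapq

def balance(costs, n_procs):
    """Greedy longest-processing-time static load balancing.

    costs[i] is the (hypothetical) physics cost of grid column i; the
    function returns, per processor, the list of column indices assigned
    to it.  Columns are placed heaviest-first onto the currently
    least-loaded processor, kept in a min-heap keyed by load.
    """
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
        load, p, cols = heapq.heappop(heap)   # least-loaded processor
        cols.append(i)
        heapq.heappush(heap, (load + costs[i], p, cols))
    return [cols for _, _, cols in sorted(heap, key=lambda t: t[1])]
```

For example, costs [5, 3, 3, 2, 2, 1] on two processors split into two groups of total load 8 each.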

In addition to the work on computational algorithms, we propose to consider two ancillary problems: (1) the reduction and management of output data and (2) the development of general codes that allow different types of decomposition and can be maintained across the various testbed machines. We feel that progress in solving these problems is critical to the success of the development effort and to the end-to-end performance of the resulting system.


Title: Application of Scalable Hierarchical Particle Algorithms to Cosmology and Accretion Astrophysics

Principal Investigator:

Wojciech H. Zurek, Los Alamos National Laboratory

Co-Investigators:

Geoffrey Fox, Syracuse University

Pablo Laguna, Pennsylvania State University

John Salmon, California Institute of Technology

Melvyn Davies, California Institute of Technology

Warner Miller, Los Alamos National Laboratory

Michael Warren, Los Alamos National Laboratory

Jack Hills, Los Alamos National Laboratory

Peter Quinn, Australian National University

Abstract:

We will develop parallel, scalable particle codes (N-body, smoothed particle hydrodynamic (SPH), and hybrid) based on hierarchical tree data structures and use them to study astrophysical problems. This work will build on our already successful parallel implementation of a purely gravitational cosmological treecode (which has demonstrated production simulation performance in excess of 5 Gflops on the Intel Touchstone Delta, with N = ten million). We shall focus on (i.) dissipationless structure formation on both sub-galactic and large scales, and (ii.) use of the hybrid N-body/SPH code to study infall of baryons and the origin of the luminous parts of galaxies. We shall also use parallel SPH to study (iii.) tidal disruption of stars in both non-relativistic and general relativistic contexts, and (iv.) accretion onto a rotating black hole. Moreover, we shall (a.) abstract the relevant features of the parallel tree data structures for use in other applications, and (b.) study various approaches to the analysis and visualization of our data. The computational aspects of the project will be carried out entirely on massively parallel processors.
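The hierarchical tree idea behind such treecodes can be sketched in one dimension: a distant node whose size-to-distance ratio is below an opening parameter theta is replaced by its monopole (total mass at the centre of mass), reducing the N-body cost from O(N^2) toward O(N log N). This is a minimal illustrative sketch, not the proposal's parallel 3-D production code; the softened point-mass kernel, theta, and softening value are assumptions.

```python
def build_tree(positions, masses, lo, hi):
    """Recursively bisect [lo, hi) into a binary tree of mass nodes."""
    idx = [i for i, x in enumerate(positions) if lo <= x < hi]
    if not idx:
        return None
    m = sum(masses[i] for i in idx)
    com = sum(masses[i] * positions[i] for i in idx) / m
    node = {"m": m, "com": com, "size": hi - lo, "children": []}
    if len(idx) > 1:                      # interior node: split further
        mid = 0.5 * (lo + hi)
        for a, b in ((lo, mid), (mid, hi)):
            child = build_tree(positions, masses, a, b)
            if child:
                node["children"].append(child)
    return node

def accel(node, x, theta=0.5, eps=1e-3):
    """Softened acceleration at x (G = 1); open a node unless size/r < theta."""
    if node is None:
        return 0.0
    d = node["com"] - x
    r = abs(d)
    if not node["children"] or (r > 0 and node["size"] / r < theta):
        # Leaf, or node far enough away: use its monopole approximation.
        return node["m"] * d / (r * r + eps) ** 1.5 if r > 0 else 0.0
    return sum(accel(c, x, theta, eps) for c in node["children"])
```

With theta = 0 every node is opened down to single particles, so the tree walk reproduces the direct O(N^2) sum exactly; larger theta trades accuracy for speed.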


Title: Cloud Identification Using Genetic Algorithms and Massively Parallel Computation

Principal Investigator:

Bill Buckles, Tulane University

Co-Investigators:

Frederick Petry, Tulane University

Abstract:

The NASA/GSFC MasPar MP-1 testbed will be used to develop a SIMD genetic algorithm (GA) to identify cloud species/subspecies. GAs are search procedures patterned after the mechanics of natural selection. Input will consist of segmented satellite images with irradiance measures, secondary measures such as co-occurrence matrices, and other information that enables better models for predicting cloud albedo and optical depth as needed in general circulation models. Present methods, utilizing only irradiance measures, are successful only in discovering the amount of high, middle, low, and sometimes deep convective cloudiness over terrains offering high contrast. Additional benefits include possible improvement in methods for determining cloud cover and changes in cloud cover, which are needed in global environmental studies.
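The GA mechanics named above (selection, crossover, mutation) can be sketched on a toy "OneMax" problem, evolving a bit string toward all ones. This is purely illustrative of the search procedure, not the proposed SIMD cloud classifier; population size, rates, and fitness are assumptions.

```python
import random

def evolve(n_bits=32, pop_size=40, generations=60, p_mut=0.02, seed=1):
    """Toy genetic algorithm maximizing the number of 1-bits (OneMax)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)
    for _ in range(generations):
        def pick():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
```

After a few dozen generations the best individual is close to the all-ones optimum, illustrating how selection pressure concentrates the population on fit solutions.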


Title: Planetesimals to Planets: A Study of Formation of Planetary Systems

Principal Investigator:

Donald Davis, Planetary Science Institute

Co-Investigators:

F. Marzari, University of Padua (Italy)

S.J. Weidenschilling, Planetary Science Institute

H. Scholl, Observatoire de Nice

Abstract:

We propose to adapt a numerical simulation of planet formation to a parallel architecture featuring multiple-instruction multiple-data (MIMD) capability. We have developed the first numerical code that can simultaneously treat a range of heliocentric distances and include all important physical processes (Spaute et al., 1991; Weidenschilling et al., 1992). The fundamental structure of the code is ideally suited to a parallel computer in that the simulation proceeds through a series of timesteps, but the changes in the parameters of the system are independent of each other during the timestep. The number of computations increases rapidly as we expand the range of heliocentric distances in the simulation and would saturate even the fastest serial machine long before a realistic simulation of the formation of a solar system could be done. We can dramatically reduce the execution time by computing the changes during a timestep in a parallel fashion. Thus we could, in principle, model the complete formation of a planetary system ranging from the inner terrestrial planet region to that of the massive, gas-rich outer planets. Our approach is directly scalable with the number of available processors up to the point where we can assign a processor to each target/projectile. Such an approach represents a logical extension of numerical modeling and will enable major progress to be made in understanding the origin of our solar system.


Title: Parallel Decomposition Optimization Algorithms for Large-Scale Neural Network Implementation on MIMD Machines and Applications in Virtual Reality Image Transformation

Principal Investigator:

Xin Feng, Marquette University

Abstract:

This research will develop parallel processing neural network algorithms on MIMD machines by using primal-dual decomposition optimization technology. This method can algorithmically decompose a large-scale optimization problem with additive structures into numerous smaller problems, each of which can be implemented independently. One of the unique features is that decomposition takes place at the algorithm level, which significantly reduces the complexity of parallelization and is suitable for MIMD implementation. This technology is particularly useful for neural networks because typical neural networks do possess additive structures. The findings of this research will significantly accelerate the implementation of neural networks, and will promote neural network applications in the space program and other fields where large neural networks are required for solving sophisticated problems. An immediate application to real-time nonlinear image transformation in space station virtual reality workstation design will also be pursued.


Title: Implementation of Helioseismic Data Reduction and Diagnostic Techniques on Massively Parallel Architectures

Principal Investigator:

Sylvian Korzennik, Smithsonian Astrophysical Observatory

Co-Investigators:

Robert Noyes, Smithsonian Astrophysical Observatory

Philip Scherrer, Stanford University

Edward Rhodes, Jr., University of Southern California

Abstract:

We propose to further implement the helioseismic data reduction pipeline on massively parallel architectures, and to port some of the analysis techniques to such high-performance platforms. Our recent completion of a pilot project on the Intel Touchstone Delta supercomputer has demonstrated the potential of such a platform by significantly increasing the end-to-end throughput in data reduction of high-resolution helioseismic observations. Based on the success of this migration, we propose a threefold program: a) to further explore and demonstrate the benefit of massively parallel architectures in high spatial-resolution helioseismology; b) to investigate the potential of fine-grain parallelism of "non-standard" algorithms; and c) to investigate the benefit of massively parallel architectures for the forward and inverse problems associated with helioseismic data analysis, and more specifically the full two-dimensional rotation inverse problem.


Title: Image Compression Using Human Visual System Models, Wavelet Transform Coding, and Massively Parallelizable Algorithms

Principal Investigator:

V. John Mathews, University of Utah

Abstract:

Traditional approaches to image compression remove only statistical redundancies. Algorithms that reduce psychophysical redundancies in addition to the statistical redundancies perform much better than the traditional approaches. Such algorithms require knowledge of the human visual system in order to identify and remove the psychophysical redundancies present in the images. In this work, we propose to study image compression algorithms that are equipped with models of the human visual system. Wavelet transform coding, because of its multiresolution properties, is particularly suitable for use with visual system models. In order to extract the most performance from the coders, we will also select most parameters of the system dynamically. Such algorithms are extremely useful in applications involving browsing and progressive transmission, among others. However, they tend to be computationally complex. Therefore, we will be particularly interested in developing algorithms that are parallelizable and, consequently, can be implemented efficiently on massively parallel systems.
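Wavelet transform coding rests on splitting a signal into coarse averages and fine details; small detail coefficients can then be quantized or dropped. A minimal one-level 1-D Haar sketch (illustrative only; the proposed coders are 2-D, multiresolution, and coupled to visual-system models):

```python
def haar_forward(x):
    """One Haar level: split x (even length) into averages and differences."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction from averages and differences."""
    x = []
    for a, d in zip(avg, det):
        x += [a + d, a - d]
    return x
```

Applying `haar_forward` recursively to the averages yields the multiresolution pyramid; compression comes from coding the (mostly small) detail coefficients coarsely.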


Title: Musculoskeletal Models and Computational Algorithms for the Numerical Simulation of Human Motion on Earth and in Space

Principal Investigator:

Marcus Pandy, University of Texas, Austin

Abstract:

The overall goal of this research is to develop a computational algorithm for massively-parallel architectures which will allow us to solve previously intractable, large-scale, optimal control problems for human movement. In particular, we propose to solve a detailed optimal control problem for walking to determine the complete time histories for all muscle forces and bone loading patterns on Earth and in Space. In doing so, we will evaluate and compare computational performance of MIMD parallel machines with the Cray Y-MP. For the Intel Touchstone and CM-5 machines, we will also evaluate the scalability of our computational algorithm for up to 128 processor nodes. Based upon these results, we will recommend specific testbed architectures for solving very large-scale optimal control problems for human movement requiring teraFLOPS computing. A potent result of this work will be the creation of a general optimal control algorithm tailored specifically to MIMD architectures.


Title: High Performance Morphological Filtering of Cirrus Emission from Infrared Images

Principal Investigator:

Jeffrey Pedelty, NASA/Goddard Space Flight Center

Co-Investigators:

Philip Appleton, Iowa State University

John Basart, Iowa State University

Abstract:

We have developed a filter based on mathematical morphology which shows great promise in removing cirrus emission from all-sky images generated by the Infrared Astronomical Satellite (IRAS). We propose to implement this cirrus filter on the HPCC testbeds and to characterize the performance of the morphology operators as implemented in both Fortran- and C-based languages. This characterization will include machines at the ISU Scalable Computing Laboratory as well. After an initial period of testing and algorithm refinement, we expect to apply the cirrus filter to the entire IRAS database and deliver to the NSSDC an archive of cirrus-free images. These images will give the astronomical community access to cirrus-free infrared fluxes for galaxies, and may also reveal many previously hidden unusual objects such as tidal filaments and protogalaxies.
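Mathematical morphology builds filters from two primitives, erosion (local minimum) and dilation (local maximum); an opening (erosion followed by dilation) suppresses bright features narrower than the structuring element while preserving broader ones. A 1-D grayscale sketch (the actual cirrus filter operates on 2-D IRAS images; the flat structuring element here is an assumption):

```python
def erode(signal, size):
    """Grayscale erosion: sliding-window minimum with a flat element."""
    r = size // 2
    return [min(signal[max(0, i - r):i + r + 1]) for i in range(len(signal))]

def dilate(signal, size):
    """Grayscale dilation: sliding-window maximum with a flat element."""
    r = size // 2
    return [max(signal[max(0, i - r):i + r + 1]) for i in range(len(signal))]

def opening(signal, size):
    """Opening removes bright features narrower than the element."""
    return dilate(erode(signal, size), size)
```

A single bright spike vanishes under a size-3 opening, while a plateau wider than the element survives; subtracting the opening from the original isolates the narrow features.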


Title: Development of Scalable Algorithms for Numerical Relativity

Principal Investigator:

Paul Saylor, University of Illinois at Urbana-Champaign

Co-Investigators:

Faisal Saied, University of Illinois at Urbana-Champaign

Edward Seidel, University of Illinois at Urbana-Champaign

Abstract:

The Einstein equations of general relativity are a complex set of coupled, nonlinear, elliptic and hyperbolic partial differential equations. We propose to use these equations as a driver for exploring efficient numerical algorithms for solving such systems on scalable, massively parallel machines, such as the NCSA 512-node CM-5 from Thinking Machines, Inc. The specific physical problem that we will address involves 2- and 3-D calculations of dynamical black hole spacetimes, including the spiraling collision of two black holes in orbit about each other. Because the elliptic part of such coupled systems is generally the bottleneck in the numerical solution, we will concentrate on efficient methods for the elliptic equations arising in the theory. Both Krylov subspace and multigrid methods are proposed for use in solving these systems. Much development work at Illinois focuses on the multigrid method for parallel processors. For the multigrid method, anisotropies due to unbounded growth in the components of the equations must be treated; we propose to study this. Since the Einstein equations present very general elliptic operators, and since the coupled elliptic-hyperbolic system is found in many other physical systems, this work will have direct application to a large family of problems in other areas of science and engineering.
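A Krylov subspace method of the kind proposed can be illustrated on a 1-D model Poisson problem: conjugate gradients with a matrix-free operator, the structure that parallelizes well because each iteration needs only an operator application and a few inner products. This is a sketch only; the elliptic operators of the Einstein equations are far more general than this constant-coefficient Laplacian.

```python
def apply_laplacian(u, h):
    """Matrix-free 1-D operator -u'' with zero Dirichlet boundaries."""
    n = len(u)
    return [(2 * u[i]
             - (u[i - 1] if i > 0 else 0)
             - (u[i + 1] if i < n - 1 else 0)) / h ** 2
            for i in range(n)]

def cg(b, h, tol=1e-10, max_iter=500):
    """Conjugate gradients for the SPD system A u = b, A the operator above."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                    # initial residual (x = 0)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_laplacian(p, h)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:        # squared residual norm below tolerance
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

For -u'' = 1 on (0, 1) with zero boundaries the exact solution is u(x) = x(1 - x)/2, which the second-order scheme reproduces exactly on the grid.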


Title: Content-Based Query and Browse of Earth Science Imagery Databases Using High Performance Computers and Networks

Principal Investigator:

Robert Schowengerdt, University of Arizona

Co-Investigators:

Marjory Johnson, NASA/Ames Research Center

Charles Turner, University of Arizona

Larry Peterson, University of Arizona

Abstract:

We will investigate the development of algorithms that can search large amounts of imagery for similar spatial, spectral and temporal patterns. We believe that this computation intensive approach to database browsing will be feasible with the implementation of gigaFLOP testbeds under the HPCC/ESS program, and can be made operational with later teraFLOP systems. Our research will consist of recognition algorithms using neural network architectures. The algorithms will be tested in the high speed computation and network environment provided by NASA within the context of a user browsing a large remote database and searching for specific patterns. A primary goal will be minimization of image data transfer over networks by optimal assignment of local workstation and remote parallel computer resources.


Title: Development and Application of a Three Dimensional Radiation Magnetohydrodynamic Code for Massively Parallel Architectures

Principal Investigator:

James Stone, University of Maryland, College Park

Co-Investigators:

John Hawley, University of Virginia

Michael Norman, University of Illinois at Urbana-Champaign

Lennart Johnson, Thinking Machines Corp./Harvard University

Abstract:

The development of a three-dimensional, time-explicit magnetohydrodynamics (MHD) and radiation hydrodynamics (RHD) code for the numerical simulation of astrophysical systems on massively parallel computer architectures is proposed. Both the MHD and RHD algorithms have already been developed and tested in two dimensions on vector supercomputers. This proposal therefore focuses on the computational techniques and issues required to implement these methods in three dimensions on massively parallel supercomputers. For example, effective algorithms for the solution of large sparse banded matrices will be required. Significant applications of the code to the study of a variety of astrophysical systems are anticipated, including the dynamics of magnetic accretion disks, the dynamics of magnetically and/or radiation-driven outflows and winds from young stars, and the dynamics of the solar corona and wind.


Title: Adaptive Parallel Methods for Reactive Fluid Dynamics and Particle Dynamics Driven by Applications to Galaxy Formation Problems

Principal Investigator:

Dinshaw Balsara, University of Illinois at Urbana-Champaign

Co-Investigators:

Daniel Quinlan, University of Colorado, Denver

Steve McCormick, University of Colorado, Denver

Alex Szalay, Johns Hopkins University

Abstract:

This is a proposal for the Guest Computational Investigator part of the NASA NRA on high-performance computing. It represents an effort to procure funding for developing a tool with which to study self-adaptivity in fluid dynamics and particle dynamics. To provide a realistic physical system for which to develop practical tools, we choose the cosmologically important problem of galaxy formation as a driver for this project. A good solution of the problem requires treatment of the multiphase gas dynamics and a particle phase both simultaneously and in a multi-length-scale context. Thus we identify high-resolution TVD and ENO schemes and elliptic solvers with good scalability in a parallel context. Run-time interpretation of the parallelism for these problems is provided by an object-oriented array language called P++ that has been under development by the authors for some time. A key ingredient of P++ is that it abstracts away the parallelism for the user. We describe the development of parallel versions of the schemes mentioned above with the help of P++. We then describe the object-oriented class library, AMR++, that allows adaptive mesh refinement to be done in a parallel context for a large class of solvers that are expected to be of vital interest in several areas of science and engineering. This in turn will allow a study of the self-adaptivity of various hyperbolic and elliptic solvers in a very general way in a parallel context.


Title: Development of a Hybrid Plasma Simulation Model on Massively Parallel Computers

Principal Investigator:

Chin Lin, Aurora Science Inc.

Co-Investigators:

James Koga, Aurora Science Inc.

Dan Winske, Los Alamos National Laboratory

Alyson Thring, Aurora Science Inc.

Abstract:

The purpose of the proposed project is to develop hybrid particle-in-cell (PIC) plasma simulation codes on the CM-5 and Intel Touchstone massively parallel computers. The developed parallel hybrid codes will be used for conducting 2D and 3D simulations of magnetic reconnection processes responsible for magnetic substorms in the Earth's magnetosphere. Due to limitations in computer capabilities, particle modeling of magnetic reconnection phenomena has not been conducted for a realistic geometry. The developed parallel hybrid PIC codes on massively parallel computers will allow space scientists to model large scale space phenomena with a realistic topology within a reasonable time. The simulation results will help understand important energy dissipation processes due to ion dynamics in the magnetotail.


Title: Parallel Multiscale Algorithms for Astrophysical Fluid Dynamics Simulations

Principal Investigator:

Michael Norman, University of Illinois at Urbana-Champaign

Co-Investigators:

Dennis Gannon, Indiana University

Abstract:

Many astrophysical systems exhibit structures on a hierarchy of length scales (e.g., galaxy clusters, molecular clouds, magnetospheres, interstellar shocks). This is due to the range of physical processes involved as well as gravity's long-range force. Although we now have robust algorithms for simulating many astrophysical systems, most methods treat only a single scale well. Adaptive mesh refinement (AMR) has been demonstrated to have great potential to extend the dynamic range of simulations and appears to be both the most general approach and the most amenable to parallel computation. We propose to design, implement, and optimize a portable AMR system for parallel multiscale and multiphysics simulations in astrophysical fluid dynamics on two NASA distributed MIMD architectures--the CM-5 and the Intel Touchstone Delta. The driving application will be the formation of galaxies and large-scale structures in the universe, for which a separate PI-group proposal is being submitted.


Title: Parallel Algorithms for the Simulation of Protobiological Membranes

Principal Investigator:

Andrew Pohorille, NASA/Ames Research Center

Co-Investigators:

Sherwood Chang, NASA/Ames Research Center

Michael Wilson, NASA/Ames Research Center

Wilson Ross, NASA/Ames Research Center

Abstract:

Parallel algorithms for the computer simulation of multi-component, anisotropic systems will be developed. The parallel link-cells (PLC) algorithm has proved effective in large-scale simulations of isotropic, atomic systems. Here, it is generalized to provide good load-balancing on massively parallel architectures for complex systems. To test the efficiency of this algorithm, a molecular-level simulation of a protocellular vesicle will be carried out. The organization of prebiotic organic matter into protocellular structures was one of the fundamental steps in the origins of life. With the development of tera-flop computers, it should be possible to simulate these structures directly. This would represent a qualitative step forward from molecular to cellular simulations.
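The link-cell idea underlying the PLC algorithm bins particles into cells no smaller than the interaction cutoff, so a neighbour search scans only the particle's own cell and its adjacent cells instead of all N particles. A minimal serial 2-D sketch (the proposed PLC algorithm adds parallel domain decomposition and load balancing; the cell layout here is an assumption):

```python
def build_cells(points, cutoff):
    """Bin 2-D points into square cells of side >= cutoff."""
    cells = {}
    for i, (x, y) in enumerate(points):
        key = (int(x // cutoff), int(y // cutoff))
        cells.setdefault(key, []).append(i)
    return cells

def neighbours(points, cells, i, cutoff):
    """Indices of points within cutoff of point i, scanning 3x3 cells only."""
    x, y = points[i]
    cx, cy = int(x // cutoff), int(y // cutoff)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), []):
                if j != i and (points[j][0] - x) ** 2 + (points[j][1] - y) ** 2 <= cutoff ** 2:
                    out.append(j)
    return sorted(out)
```

Because each cell holds O(1) particles at fixed density, the cost per neighbour search is constant, which is what makes the method attractive for large-scale molecular simulations and for distributing cells across processors.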


Title: Real-time Multi-Spectral Fusion for Feature Based Querying of Massive Databases

Principal Investigator:

Srinivasan Raghavan, LNK Corporation, Inc.

Co-Investigators:

Laveen Kanal, LNK Corporation, Inc.

Abstract:

Massive image databases, designed in an object-oriented fashion, require feature-based archival and retrieval mechanisms. To facilitate high-speed information querying and retrieval, real-time feature extraction from heterogeneous sources is needed. LNK, in this Guest Computational Investigator (GCI) effort, proposes to develop real-time automatic feature extraction techniques using multi-spectral imagery and geographic domain-specific knowledge. We propose to make use of a three-tiered hybrid architecture of neural networks and expert systems for integrating the various heterogeneous sources of data, including multi-spectral imagery and a priori geographic information. A number of HPCC issues are expected to be addressed, including a real-time parallel implementation of the neural networks for a small-scale prototype system, scalability, and bottlenecks between the query mechanism and the database. We anticipate that with the cooperation of the Intelligent Data Management group of scientists at NASA/GSFC our system can be directly interfaced with the EOS multi-spectral database at GSFC.


Title: Application of Massively Parallel Processing Techniques for Least Squares Problems in Satellite Gravity Gradiometry

Principal Investigator:

Bob Schutz, University of Texas, Austin

Abstract:

The determination of a high-resolution, global Earth gravity field model is a critical component of Earth system science studies. Future gravity gradiometer missions are expected to lead to a large advance in our knowledge of the Earth's gravity field. The determination of large geopotential expansions from gravity gradients will require considerable computational expense, in terms of both execution time and memory. The purpose of this research project is to examine the application of coarse-grained massively parallel processing techniques to the development of algorithms for the estimation of a high-resolution geopotential from gravity gradient data. The principal focus is the development of problem-wide parallelism, leading to high-performance least squares algorithms that are applicable to other, similar problems in satellite geodesy as well.
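One natural source of the problem-wide parallelism mentioned above is that, in the normal-equations form of least squares, each batch of observations contributes an independent term to the sums, so batches can be accumulated on separate processors and combined. This is a minimal serial sketch of that structure under those assumptions, not the project's actual estimator; function names are illustrative.

```python
import numpy as np

def accumulate_normals(blocks):
    """Each observation block (A_k, b_k) contributes independently:
    N = sum_k A_k^T A_k,  r = sum_k A_k^T b_k.
    The per-block products are the embarrassingly parallel step."""
    n = blocks[0][0].shape[1]
    N = np.zeros((n, n))
    r = np.zeros(n)
    for A, b in blocks:          # in parallel: one block per processor, then reduce
        N += A.T @ A
        r += A.T @ b
    return N, r

def solve_ls(blocks):
    """Solve the combined normal equations N x = r for the parameters x."""
    N, r = accumulate_normals(blocks)
    return np.linalg.solve(N, r)
```

For a geopotential expansion the unknowns x would be spherical-harmonic coefficients and each row of A_k a partial derivative of a gradient observation, which makes N large and dense, hence the need for massively parallel accumulation.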


Title: Stellar/Solar Convection Simulations

Principal Investigator:

Robert Stein, Michigan State University

Abstract:

Convection is crucial to many astrophysical phenomena. It transports energy and angular momentum, drives dynamo action and mixes fluid in the envelopes of cool stars, the cores of massive stars, accretion disks, and nova explosions. It is inherently non-linear, three-dimensional and multi-scale. To include a reasonably large dynamic range, it needs to be simulated on massively parallel computers. We will focus on realistic simulations of stellar convection, including the effects of ionization, excitation, radiation, rotation and magnetic fields. We will develop algorithms to simulate compressible magneto-convection efficiently on massively parallel computers and visualization tools to analyze the results. These codes will provide scalable test beds for hardware and software.


Title: A Massively Parallel Global Mesoscale Model with Improved Surface-Layer Coupling

Principal Investigator:

Eugene Takle, Iowa State University

Co-Investigators:

John Gustafson, Iowa State University

Abstract:

We propose to upgrade and transform a community-accepted mesoscale meteorological model with orography at scales of tens of kilometers. We will improve its surface-layer coupling through an analytical solution for the lower layer (lowest 150 m), yielding improved simulations of stable flow, and make the model scalable to global dimensions. We will evaluate the opportunities and problems of scalability to global dimensions by running test cases on three machines in the Scalable Computing Laboratory. This will allow us to examine issues such as the relationship between communication costs and computation costs, load balancing, and synchronization costs. We intend to test the parallelized code on simulations of water-vapor transport into, and precipitation over, the Central US.


Title: Development of a General-Purpose Accurate Radiative Transfer Model on Parallel Computing Architectures

Principal Investigator:

Zhengming Wan, University of California, Santa Barbara

Abstract:

A general-purpose multiple-scattering radiative transfer model for the atmosphere-land/ocean system has been under development for 10 years on different computers. It has been used in the thermal infrared range to develop land-surface temperature (LST) algorithms, and in the UV and visible ranges to study the effect of depleted ozone on UV radiation in Antarctica. The model is inherently parallel in several respects. The proposed development of this model on parallel computing architectures will take advantage of parallel CPUs and high-speed disks to achieve optimal efficiency and a higher radiance accuracy, in the 0.1%-2% range, in order to meet the EOS specifications for SST accuracy of 0.2 K and LST accuracy of 1 K, and to find further applications in the global change research program.


Title: Parallel Image Compression by Neural Networks

Principal Investigator:

David Yun, University of Hawaii, Manoa

Co-Investigators:

Hui Liu, University of Hawaii, Manoa

Woei Lin, University of Hawaii, Manoa

Abstract:

Image compression plays an essential role in both data transmission and storage. Vector quantization (VQ) is a powerful technique for compressing images; however, conventional VQ design methods suffer from high computational complexity and large memory requirements. In this project, a real-time adaptive image compression technique using competitive-learning neural networks will be developed and implemented on massively parallel computing systems. Building on our previous research on adaptive VQ for image compression, on a neural network model that finds the maximum point in constant time, and on mapping various neural network models to transputers, significant speed-up and scale-up are expected to meet NASA's need for real-time video and image compression. In addition, the relationships between architectural characteristics and parallelization processes will be systematically investigated so that algorithms can be mapped onto target machines to scale up and achieve optimal performance. The project includes three phases: (1) algorithm design and implementation on a 32-processor Meiko Transputer system; (2) proposal of an ideal parallel architecture for adaptive VQ and selection of a few testbed architectures on which to implement the algorithms, relaxing the ideal conditions to fit the testbeds; and (3) an implementation of the real-time image compression algorithm that maximally utilizes the available architectural testbeds for speedup and scalability.
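The connection between competitive learning and VQ codebook design can be sketched in a few lines: for each training vector, the nearest codeword ("winner") is nudged toward it, so the codebook adapts to the data distribution; compression then replaces each vector by its winner's index. This is only a generic winner-take-all sketch under assumed learning-rate and epoch settings, not the project's adaptive algorithm or its transputer mapping.

```python
import numpy as np

def train_vq(data, k, epochs=20, lr=0.1, seed=0):
    """Competitive (winner-take-all) learning: the nearest codeword
    moves a fraction lr toward each training vector."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in data:
            w = np.argmin(((codebook - x) ** 2).sum(axis=1))  # winning codeword
            codebook[w] += lr * (x - codebook[w])
    return codebook

def quantize(data, codebook):
    """Compression step: replace each vector by the index of its nearest codeword."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

The winner search is the step that the proposal's constant-time maximum-finding network and parallel mapping would accelerate, since it is the inner loop executed once per input vector.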


Title: Dense Linear Algebra Algorithms Based on Parallel BLAS

Principal Investigator:

Robert Alexander van de Geijn, University of Texas, Austin

Abstract:

We propose to approach the development of dense linear algebra libraries for distributed-memory MIMD architectures through the use of "Parallel" BLAS, which provide a global index space, and hence reliability and ease of use, while retaining performance and scalability.
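The "global index space" idea means users address matrix entries by their global (i, j) position while the library maps them onto distributed storage. A common such mapping in parallel dense linear algebra is the 2D block-cyclic distribution, sketched below; the specific mapping and all names here are illustrative assumptions, not necessarily the layout this project adopts.

```python
def owner(i, j, nb, pr, pc):
    """Map a global entry (i, j) to its owning process on a pr x pc grid,
    using an nb x nb block-cyclic distribution: block (bi, bj) lives on
    process (bi mod pr, bj mod pc)."""
    bi, bj = i // nb, j // nb
    return (bi % pr, bj % pc)

def global_to_local(i, nb, p):
    """Local index of global index i within its owning process's storage,
    along one dimension of a p-process grid with block size nb."""
    b, off = divmod(i, nb)       # which block, and offset inside it
    return (b // p) * nb + off   # blocks are stacked cyclically on each process
```

Cyclic placement of blocks keeps work balanced as factorizations sweep across the matrix, while the library hides this bookkeeping behind global indices for the user.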
