
1990 Abstracts of MCS Reports and Preprints


Reports

F. V. Atkinson, ``Self-Intersecting Solutions of the Prescribed Mean Curvature Equation,'' Proc. of the Focused Research Program on Spectral Theory and Boundary Value Problems: Nonlinear Differential Equations, Vol. 4, ANL-87-26 (August 1989), pp. 1-20. Inequalities and asymptotic estimates are obtained for solutions of the prescribed mean curvature equation in a group of cases in which f(u) is positive, increasing, and unbounded. This system is the intrinsic version of the radially symmetric prescribed mean curvature equation and permits the study of certain generalized solutions of this equation.
F. V. Atkinson and L. A. Peletier, ``Bounds for Vertical Points of Solutions of Prescribed Mean Curvature Type Equations,'' Proc. of the Focused Research Program on Spectral Theory and Boundary Value Problems: Nonlinear Differential Equations, Vol. 4, ANL-87-26 (August 1989), pp. 21-42. Conditions are obtained under which radial solutions of generalized capillary-type equations exhibit a first vertical point, together with bounds and asymptotic estimates for these points.
F. V. Atkinson and L. A. Peletier, ``Large Solutions of Elliptic Equations Involving Critical Exponents,'' Proc. of the Focused Research Program on Spectral Theory and Boundary Value Problems: Nonlinear Differential Equations, Vol. 4, ANL-87-26 (August 1989), pp. 43-72. This paper focuses on obtaining precise estimates of positive radial solutions of particular elliptic equations.
A. Bayliss, B. J. Matkowsky, and M. Minkoff, ``Bifurcation and Pattern Formation in Combustion,'' Ordinary and Partial Differential Equations, ed. B. D. Sleeman and R. J. Jarvis, John Wiley and Sons, N.Y., 1989, pp. 1-51. The authors have developed an adaptive pseudo-spectral method that is both very accurate and very efficient. The algorithm allows them to describe the solution on bifurcation branches in combustion, far beyond the region where analytical methods work well. The analytical results that they obtain are used to choose appropriate parameter values and initial conditions for the numerical computations. In addition, the analytical results serve as benchmarks for the computations. The computations reveal new and interesting behavior. Two problems are discussed, involving solid and gaseous fuel combustion.
C. H. Bischof, ``Incremental Condition Estimation,'' SIAM J. Matrix Anal. Appl. 11, no. 2 (April 1990) 312-322. This paper introduces a technique for estimating the smallest singular value of a dense triangular matrix as it is generated one row or column at a time. Numerical experiments demonstrate that despite its small computational cost, this condition estimator is very reliable. The paper also gives an example that shows the advantage of incorporating the incremental condition estimation strategy into the QR factorization algorithm with column pivoting, so that failures of the pivoting strategy do not go unnoticed.
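As a rough illustration of the idea behind incremental condition estimation, the sketch below (a simplified greedy variant written for this listing, not the algorithm of the paper) extends a solution of R^T z = d by one component as each new column of the triangular factor arrives, choosing the sign of each entry of d so that ||z|| grows; the quotient ||d||/||z|| then gives a cheap upper-bound estimate of the smallest singular value.

    import numpy as np

    def incremental_sigma_min_estimate(columns):
        """Running estimate of sigma_min(R) for an upper triangular R produced
        one column at a time.  `columns` yields, for k = 0, 1, ..., the k-th
        column of R as a length-(k+1) array.  Simplified illustrative scheme,
        not the estimator of the paper."""
        z = []                      # current solution of R^T z = d
        estimates = []
        for k, col in enumerate(columns):
            # forward-substitution step for row k of R^T:
            #   col[0]*z[0] + ... + col[k-1]*z[k-1] + col[k]*z_k = d_k,  d_k = +-1
            partial = sum(c * zi for c, zi in zip(col[:k], z))
            candidates = [(s - partial) / col[k] for s in (+1.0, -1.0)]
            z.append(max(candidates, key=abs))        # sign chosen to grow ||z||
            estimates.append(np.sqrt(k + 1) / np.linalg.norm(z))   # ||d|| / ||z||
        return estimates

    # usage: feed the columns of a triangular factor and compare with the SVD
    R = np.triu(np.random.rand(6, 6) + np.eye(6))
    est = incremental_sigma_min_estimate(R[: j + 1, j] for j in range(6))
    print(est[-1], np.linalg.svd(R, compute_uv=False)[-1])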
C. H. Bischof and J. J. Dongarra, ``A Linear Algebra Library for High-Performance Computers,'' Parallel Supercomputing: Methods, Algorithms and Applications, ed. G. F. Carey, John Wiley and Sons, N. Y., 1989, pp. 45-56. Argonne, the Courant Institute, and NAG are developing a transportable linear algebra library in Fortran 77. The library (called LAPACK) will provide routines for solving systems of simultaneous linear equations, least-squares solutions of overdetermined systems of equations, and eigenvalue problems. The new library will be based on the successful EISPACK and LINPACK libraries. It will be designed to be efficient and transportable across a wide range of computing environments, with special emphasis on modern high-performance computers. The new library will be distributed over a system similar to netlib, at no or minimal cost.
R. Butler, I. Foster, A. Jindal, and R. Overbeek, ``A High-Performance Parallel Theorem Prover,'' Proc. 10th International Conference on Automated Deduction, in Lecture Notes in Artificial Intelligence, Vol. 449, ed. M. E. Stickel, Springer-Verlag, Berlin, 1990, pp. 649-650 (also MCS-P139-0290). This paper reports on the implementation of a parallel theorem proving system that supports hyper-resolution, subsumption, and the set of support strategy. The upper level of the program, written in the concurrent programming language Strand, implements the general algorithm and describes the interconnectivity of processes that perform the theorem-proving tasks such as resolution and subsumption. These compute-intensive tasks are performed by procedures implemented in C. Speedups have been demonstrated on such test problems as SAM's lemma.
M. T. Chu and G. H. Guirguis, ``A Numerical Method for Solving Interface Problems Arising in Two-Point Boundary Value Problems,'' Computer Methods in Appl. Mech. and Engrg. 74 (1989) 99-113 (also MCS-P31-1288). A numerical approach iterating on the position of interface points is suggested for solving interface problems arising in two-point boundary value problems (BVPs). Given an interface problem, it is decomposed into several standard local boundary value problems that are coupled at the interface points. Meanwhile, a nonlinear equation involving the interface points is formulated from the interface conditions. The advantages of this approach are that the boundary conditions for each local problem can easily be selected by considering the natural physical requirements and that each of the local problems can be solved independently by standard numerical BVP techniques.
W. J. Cody, ELEFUNT Test Results under AST Fortran V1.8.0 on the Sequent Symmetry, MCS-TM-138 (July 1990). This report discusses testing of the floating-point arithmetic and of the elementary function libraries under AST Fortran on a 24-processor Sequent Symmetry computer. The programs MACHAR and PARANOIA were used to check the quality of arithmetic, and the ELEFUNT suite of programs from the book Software Manual for the Elementary Functions by Cody and Waite was used to check function performance. Two complete sets of tests were run, one for each type of floating-point processor, Intel 80387 and Weitek 1167, on the machine.
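For readers unfamiliar with MACHAR-style probing, the short Python sketch below determines the working machine epsilon dynamically, the simplest of the parameters such programs measure; MACHAR itself also determines the radix, the number of base digits, the exponent range, and related quantities.

    def machine_epsilon():
        # Halve a candidate until adding it to 1.0 no longer changes the result;
        # the last value that did change it is the machine epsilon.
        eps = 1.0
        while 1.0 + eps / 2.0 > 1.0:
            eps /= 2.0
        return eps

    print(machine_epsilon())   # 2.220446049250313e-16 in IEEE double precision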
W. J. Cody, ``Performance Evaluation of Programs for the Error and Complementary Error Functions,'' TOMS 16 (1990) 29-37 (also MCS-P13-0988). This paper presents methods for performance evaluation of computer programs for the error and complementary error functions. Accuracy estimates are based on comparisons using power series expansions and an expansion in the repeated integrals of erfc (x). Some suggestions for checking robustness are also given. Details of a specific implementation of a test program are included.
W. R. Cowell, ``An Introduction to VecPar_77,'' NAG Newsletter '90, no. 1 (1990) 13-21. VecPar77 is a precompiler tool that analyzes and transforms Fortran 77 programs intended for execution on machines with vector or shared-memory multitasking capabilities. In particular, the tool provides support for the task of modifying existing programs, written originally for serial execution, so that they perform well on advanced-architecture machines. The operation of VecPar77 is divided into two separately invoked phases. In Phase 1, two kinds of Fortran program analysis are performed: syntactic/semantic analysis to detect errors and anomalous constructions in a program, and Do-loop parallelization analysis. In Phase 2, VecPar77 acts as a specialized Fortran-intelligent editor, enabling a user to apply source-to-source transformations aimed at producing Fortran that compiles into code in which the number of references to main memory is reduced. VecPar77 is a direct descendant of Toolpack. It was created by a project jointly supported by the Numerical Algorithms Group, Inc., and the Department of Commerce and Community Affairs of the State of Illinois.
W. R. Cowell and C. P. Thompson, ``Tools to Aid in Discovering Parallelism and Localizing Arithmetic in Fortran Programs,'' Software--Practice and Experience, 20 (January 1990) 25-47. The authors describe a collection of software tools that analyze and transform Fortran programs. The analysis tools detect parallelism in blocks of code and are primarily intended to aid in adapting existing programs to execute on multiprocessors. The transformation tools are aimed at eliminating data dependencies, thereby introducing parallelism, and at localizing arithmetic in registers, of primary interest in adapting programs to execute on machines that can be memory bound (common for machines with vector architectures). The tools are unified conceptually by their use of a set of conditions for data independence; these conditions have been implemented so as to combine tool analysis with user/tool interaction. Included are timing results from applying the tools to programs intended for execution on two machines with different architectures--a Sequent Balance and a CRAY-2. The tools are written in Fortran in the tool-writing environment provided by Toolpack and are easily incorporated into a Toolpack installation.
J. Dongarra and E. Lusk, ``Summer Institute in Parallel Computing: September 5-15, 1989,'' MCS-TM-136 (December 1989). This report summarizes the objectives of the Summer Institute in Parallel Computing held at Argonne in September 1989.
J. J. Dongarra, D. C. Sorensen, and S. J. Hammarling, ``Block Reduction of Matrices to Condensed Forms for Eigenvalue Computations,'' J. Comp. and Appl. Math. 27 (1989) 215-227. This paper describes block algorithms for the reduction of a real symmetric matrix to tridiagonal form and for the reduction of a general real matrix to either bidiagonal or Hessenberg form by using Householder transformations. The approach is to aggregate the transformations and to apply them in a blocked fashion, thus achieving algorithms that are rich in matrix-matrix operations. These reductions typically are a preliminary step in the computation of eigenvalues or singular values. The authors demonstrate how the initial reduction to tridiagonal or bidiagonal form may be pipelined with the divide-and-conquer technique for computing the eigensystem of a symmetric matrix or the singular value decomposition of a general matrix to achieve algorithms that are load balanced and rich in matrix-matrix operations.
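The elementary building block of these reductions is the Householder reflector; the sketch below shows the standard textbook construction in Python (just the single transformation, not the blocked aggregation that is the subject of the paper).

    import numpy as np

    def householder(x):
        # Return v, beta, alpha with (I - beta * v v^T) x = alpha * e_1 and
        # |alpha| = ||x||; this is the transformation the blocked reductions
        # accumulate and apply in matrix-matrix form.  Assumes x is nonzero.
        v = np.array(x, dtype=float)
        alpha = -np.sign(v[0] if v[0] != 0 else 1.0) * np.linalg.norm(v)
        v[0] -= alpha
        beta = 2.0 / np.dot(v, v)
        return v, beta, alpha

    x = np.array([3.0, 4.0, 0.0])
    v, beta, alpha = householder(x)
    H = np.eye(3) - beta * np.outer(v, v)
    print(H @ x)   # approximately [-5, 0, 0]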
I. Foster, ``Automatic Generation of Self-Scheduling Programs,'' IEEE Trans. on Parallel and Distributed Computing, to appear (also MCS-P143-0390). The author describes techniques for the automatic generation of self-scheduling parallel programs. A high-level language is used to express both concurrent components of applications and scheduling algorithms. Partitioning and data dependency information are expressed by simple control statements. An automatic source-to-source transformation takes application code, control statements, and scheduling routines and generates a new program that can schedule its own execution on a parallel computer. The approach has several important advantages compared to previous proposals. It generates programs that are portable over a wide range of parallel computers. There is no need to embed special control structures in application programs. Finally, the use of a high-level language to express applications, load-balancing algorithms, and transformations facilitates the development, modification, and reuse of parallel programs.
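The run-time behavior of such generated programs can be pictured with a small task-pool sketch: idle workers repeatedly claim the next unit of work from a shared pool, so the schedule emerges dynamically. The Python/threads version below only illustrates this execution model; the contribution of the paper is generating equivalent self-scheduling code automatically from high-level control statements.

    import queue
    import threading

    def self_schedule(tasks, work, nworkers=4):
        # Shared pool of tasks; each worker pulls the next task as soon as it
        # becomes idle, balancing the load without a precomputed schedule.
        pool = queue.Queue()
        for t in tasks:
            pool.put(t)
        results, lock = [], threading.Lock()

        def worker():
            while True:
                try:
                    t = pool.get_nowait()
                except queue.Empty:
                    return
                r = work(t)
                with lock:
                    results.append(r)

        threads = [threading.Thread(target=worker) for _ in range(nworkers)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        return results

    print(sorted(self_schedule(range(10), lambda i: i * i)))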
I. Foster, ``A Multicomputer Garbage Collector for a Single-Assignment Language,'' International J. Parallel Prog., to appear (also MCS-P81-0689). This paper describes an asynchronous garbage collector for a message-passing multiprocessor. The collector combines Weighted Reference Counting for interprocessor collection with tracing for intraprocessor collection, permitting individual processors to reclaim local storage independently. A novel feature is the integration of Weighted Reference Counting collection with the communication algorithms required to support a global address space in a single-assignment language. This integration significantly reduces the communication overhead and space requirements attributable to garbage collection.
I. Foster and R. O. Overbeek, ``Experiences with Bilingual Parallel Programming,'' Proc. DMCC5 '90, to appear. Parallel programming requires tools that simplify the expression of complex algorithms, provide portability across different classes of machine, and allow reuse of existing sequential code. The authors have previously proposed bilingual programming as a basis for such tools. In particular, they have proposed the use of a high-level concurrent programming language (such as Strand) to construct parallel programs from (possibly pre-existing) sequential components. Here they report on an applications study intended to evaluate the effectiveness of this approach. They describe experiences developing both new codes and parallel versions of existing codes in computational biology, weather modeling, and automated reasoning. The bilingual approach encourages the development of parallel programs that perform well, are portable, and are easy to maintain.
I. Foster and R. Stevens, ``Parallel Programming with Skeletons,'' Proc. ICPP '90, to appear (also MCS-P124-0190). The authors describe an approach to parallel program development based on the use of algorithmic motifs: useful parallel program structures that users can combine with application-specific routines to construct parallel programs. Key features are the use of a high-level concurrent programming language and automated source-to-source transformations to implement motifs. These features support the construction of new motifs from old by both modification and composition. This in turn encourages an exploratory programming style, in which programmers experiment to develop good parallel implementations of applications. The use of a high-level language encourages the user to view libraries implementing motifs as archives of expertise that can be consulted, modified, and extended when developing parallel programs. Examples of the approach are given.
I. Foster, C. Kesselman, and S. Taylor, ``Concurrency: Simple Concepts and Powerful Tools,'' Computer Journal, to appear Dec. 1990. This paper describes programming language concepts that we have found useful in applying stepwise refinement to parallel programs. The concepts allow decisions concerning program structure to be delayed until late in the design process. This capability permits rapid experimentation with alternative structures and leads to both portable and scalable code. Although simple, the concepts form a sufficient basis for the construction of powerful programming tools. Both concepts and tools have been applied successfully in a wide variety of applications and are incorporated in a commercial concurrent programming system, Strand.
M. Garbey, H. G. Kaper, G. K. Leaf, and B. J. Matkowsky, ``Nonlinear Analysis of Condensed-Phase Surface Combustion,'' Euro. J. Appl. Math. 1 (1990) 73-89 (also MCS-P82-0589). This article is concerned with the structure and stability properties of a combustion front that propagates in the axial direction along the surface of a cylindrical solid fuel element. The fuel consists of a mixture of two finely ground metallic powders, which combine upon ignition in a one-step chemical reaction. The reaction is accompanied by a melting process, which in turn enhances the reaction rate. The combustion products are in the solid state. The reaction zone, inside which the melting occurs, is modeled as a front that propagates along the surface of the cylinder. The different modes of propagation that have been observed experimentally (such as single- and multiheaded spin combustion and multiple-point combustion) are explained as the results of bifurcations from a uniformly propagating plane circular front. The stability properties of the various modes are discussed.
A. Griewank, ``On Automatic Differentiation,'' in Mathematical Programming: Recent Developments and Applications, ed. M. Iri and K. Tanabe, Kluwer Academic Publishers, Boston, 1989, pp. 83-107 (also MCS-P10-1088). In comparison to symbolic differentiation and numerical differencing, the main rule-based technique of automatic differentiation is shown to evaluate partial derivatives accurately and cheaply. In particular, it is demonstrated that the reverse mode of automatic differentiation yields any gradient vector at no more than five times the cost of evaluating the underlying scalar function. After developing the basic mathematics, the author describes several software implementations and briefly discusses the ramifications for optimization.
C. P. Gupta, ``A Fourth-Order Nonlinear Boundary Value Problem and an Integral Type Sign Condition,'' Applied Math. and Computation 35 (1990) 231-242 (also MCS-P72-0489). This paper focuses on a fourth-order nonlinear boundary value problem associated with the unstable static equilibrium of an elastic beam which is supported by sliding clamps when the nonlinearity satisfies an integral sign condition instead of the usual asymptotic pointwise sign condition. A necessary and sufficient condition is given for the existence of a solution when the nonlinearity is nondecreasing.
C. P. Gupta, ``A Nonlinear Boundary Value Problem Associated with the Static Equilibrium of an Elastic Beam Supported by Sliding Clamps,'' Internat. J. Math. & Math. Sci. 12, no. 4 (1989) 697-712 (also MCS-P24-1188). A fourth-order boundary value problem describes the unstable static equilibrium of an elastic beam that is supported by sliding clamps at both ends. This paper concerns the nonlinear analogue of this boundary value problem. Some resonance and nonresonance conditions on the asymptotic behavior are studied for the existence of solutions of this nonlinear boundary value problem.
C. P. Gupta, ``Nonlinear Second Order System of Neumann Boundary Problems at Resonance,'' JAMS 2, no. 3 (Fall 1989) 169-184 (also MCS-P74-0489). This paper focuses on a nonlinear Neumann boundary value problem at resonance. Asymptotic conditions on the nonlinearity are presented to give existence of solutions for the nonlinear systems. The methods apply to the corresponding system of Lienard-type periodic boundary value problems.
H. G. Kaper and A. M. Krall, ``M(lambda)-Computation for Singular Differential Systems,'' Proc. Roy. Soc. Edinburgh 112A (1989) 327-330. Depending on the initial data associated with the fundamental matrix, the function M(lambda) may vary. It is shown that there is a matrix bilinear transformation between such functions M(lambda) with different initial data. The authors also show how the result can be used to simplify the calculation of a specific M(lambda)-function for a scalar second-order problem.
H. G. Kaper and Man Kam Kwong, ``Free Boundary Problems for Emden-Fowler Equations,'' Diff. and Integral Eqs. 3, no. 2 (March 1990) 353-362 (also MCS-P8-0988). This article is concerned with free boundary problems for differential equations of the Emden-Fowler type. In particular, it addresses the questions of existence and uniqueness of a finite point P and a solution u satisfying the conditions u'(0) = 0 and u(x) > 0 for 0 < x < P.
H. G. Kaper and M. K. Kwong, ``A Non-Oscillation Theorem for the Emden-Fowler Equation; Ground States for Semilinear Elliptic Equations with Critical Exponents,'' J. Diff. Eqs. 75 (1988) 158-185 (also Proc. of the Focused Research Program on Spectral Theory and Boundary Value Problems: Nonlinear Differential Equations, Vol. 4, ANL-87-26 (August 1989), pp. 141-167). The authors show that the Emden-Fowler equation is nonoscillatory at infinity under certain conditions. The result improves earlier oscillation results of Nehari and Chiou.
M. K. Kwong, ``On the Kolodner-Coffman Method for the Uniqueness Problem of Emden-Fowler BVP,'' J. App. Math. Phys (ZAMP) 41 (January 1990) 79-104 (also MCS-P44-1089). This paper surveys the Kolodner-Coffman method which has been applied very successfully by many authors to obtain uniqueness theorems for boundary value problems for differential equations of Emden-Fowler type. The author recasts some of the arguments in the language of Sturm's oscillation theory for linear second-order differential equations. The clarification provided by the new perspective makes it possible to improve on several known results.
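For reference, the Emden-Fowler equations discussed in the preceding abstracts are most often written in the form below (standard notation assumed here; the individual papers treat various generalizations and radially symmetric analogues):

    \[
    y'' + p(t)\,|y|^{\gamma - 1} y = 0, \qquad \gamma > 1 \ \text{(superlinear case)},
    \]

with the radially symmetric semilinear elliptic problems reducing to ordinary differential equations of the related form \( u'' + \frac{n-1}{r}\,u' + f(u) = 0 \).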
M. K. Kwong and M. B. Dever, ``Computer-aided Study of a Problem in Hermitian Matrix Theory,'' J. Symbolic Computation 9 (1990) 87-112.
E. Lusk et al., ``The Aurora Or-Parallel Prolog System,'' New Generation Computing 7 (1990) 243-271 (a modified version of MCS-P122-0190). Aurora is a prototype or-parallel implementation of the full Prolog language for shared-memory multiprocessors, developed as part of an informal research collaboration known as the Gigalips Project. It currently runs on Sequent and Encore machines. It has been constructed by adapting SICStus Prolog, a fast, portable, sequential Prolog system. The techniques for constructing a portable multiprocessor version follow those pioneered in a predecessor system, ANL-WAM. The SRI model was adopted as the means to extend the SICStus Prolog engine for or-parallel operation. This paper describes the design and main implementation features of the current Aurora system, and presents some experimental results. For a range of benchmarks, Aurora on a 20-processor Sequent Symmetry is 4 to 7 times faster than Quintus Prolog on a Sun 3/75. Good performance is also reported on some large-scale Prolog applications.
J. N. Lyness and I. H. Sloan, ``Some Properties of Rank-2 Lattice Rules,'' Math. Comp. 53, no. 188 (October 1989) 627-637. This paper discusses lattice rules in general. In particular, the authors categorize a special subclass, whose leading one- and two-dimensional projections contain the maximum feasible number of abscissas. They show that rules of this subclass can be expressed in a simple tricycle form.
J. N. Lyness and T. Sorevik, ``The Number of Lattice Rules,'' BIT 29 (1989) 527-534 (also MCS-P17-1088). A lattice rule is a quadrature rule for integration over an s-dimensional hypercube that employs N abscissas located on a lattice, chosen to conform to certain specifications. In this paper, the authors determine the number of distinct N-point s-dimensional lattice rules.
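For readers unfamiliar with the terminology, an N-point rank-1 lattice rule for the s-dimensional unit cube has the standard form (notation assumed here, not taken from the abstract):

    \[
    Qf \;=\; \frac{1}{N} \sum_{j=0}^{N-1}
             f\!\left( \left\{ \frac{j\,\mathbf{z}}{N} \right\} \right),
    \qquad \mathbf{z} \in \mathbb{Z}^{s},
    \]

where the braces denote the componentwise fractional part; a rule of rank m is built from m such generating vectors and m nested sums of this kind.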
W. McCune, ``OTTER 2.0,'' Proc. 10th International Conference on Automated Deduction, in Lecture Notes in Artificial Intelligence, Vol. 449, ed. M. E. Stickel, Springer-Verlag, Berlin, 1990, pp. 663-664. OTTER 2.0 is Argonne's current production theorem prover. It is a descendant of AURA and ITP, and operates on clauses in batch mode. OTTER's strong points include hyperresolution and UR-resolution in small theories (i.e., those with few axioms) represented by Horn sets and near-Horn sets. Another strong point is paramodulation with sets of unit equalities. The program is free and very portable; it is available electronically as well as on tape or diskette. No formal support is provided, however.
W. W. McCune, ``OTTER 2.0 Users Guide,'' ANL-90/9 (March 1990). OTTER is a resolution-style theorem-proving program for first-order logic with equality. It includes the inference rules binary resolution, hyperresolution, UR-resolution, and binary paramodulation. Some of its other abilities are conversion from first-order formulas to clauses, forward and back subsumption, factoring, weighting, answer literals, term ordering, forward and back demodulation, and evaluable functions and predicates. OTTER is coded in C and is portable to a wide variety of computers.
J. J. More', ``Gradient Projection Techniques for Large-Scale Optimization Problems,'' Proceedings of the 28th IEEE Conference on Decision and Control, Tampa, Florida, December 1989, Volume 1, pp. 378-381. We briefly describe the impact of the identification properties of the gradient projection method on algorithms for the solution of large-scale optimization problems with bound constraints. In particular, we describe recent results of More' and Toraldo which show that the gradient projection method can be combined with the conjugate gradient method to solve an important class of large-scale (number of variables between 5000 and 15000) bound-constrained quadratic programming problems in a few (fewer than 15) iterations.
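For orientation, the gradient projection iteration for a bound-constrained problem min f(x) subject to l <= x <= u takes the standard form (notation assumed):

    \[
    x_{k+1} = P_{[l,u]}\bigl( x_k - \alpha_k \nabla f(x_k) \bigr),
    \qquad
    \bigl( P_{[l,u]}(x) \bigr)_i = \max\bigl( l_i, \min(x_i, u_i) \bigr),
    \]

and the identification property referred to above is, roughly, that after finitely many iterations the variables fixed at their bounds stop changing, so that a method such as conjugate gradients can then be applied to the remaining free variables.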
J. J. More', ``A Collection of Nonlinear Model Problems,'' Lectures in Applied Math. 26 (1990) 723-762 (also MCS-P60-0289). This paper presents a collection of nonlinear problems. The aim of this collection is to provide model problems of scientific interest to researchers interested in algorithm development. The problems and their origin are described, and references are given to other papers that have dealt with these problems.
G. W. Pieper, ed., Activities and Operations of the Advanced Computing Research Facility: January 1989 through January 1990, ANL-90/12 (February 1990). This report highlights the activities and operations that have contributed to the success of the ACRF during the period January 1989 through January 1990.
G. W. Pieper, ed., Research in Mathematics and Computer Science at Argonne: January 1988 - August 1989, ANL-89/12 (August 1989). This report highlights the research activities of the MCS Division during the period January 1988 through August 1989.
G. W. Pieper, ed., Joint Japanese-American Workshop on Future Trends in Logic Programming, ANL-89/43 (December 1989)(280 pages). This report summarizes the activities related to the Joint Japanese-American Workshop on Future Trends in Logic Programming, held at Argonne on October 11-13, 1989. Four main areas were considered: multiprocessing, constraint logic programming, logical foundation of programming, and knowledge representation and database. This report includes abstracts of the 16 presentations, copies of the papers presented, and summaries of the discussion sessions. An important part of the workshop was the demonstration of the Japanese fifth-generation computer; included in this report is a detailed guide to the Japanese demonstration.
J. K. Slaney and E. L. Lusk, ``Parallelizing the Closure Computation in Automated Deduction,'' Proc. 10th International Conference on Automated Deduction, in Lecture Notes in Artificial Intelligence, Vol. 449, ed. M. E. Stickel, Springer-Verlag, Berlin, 1990, pp. 28-39 (also MCS-P123-0190). This paper presents a parallel algorithm for computing the closure of a set under an operation. This type of computation has been used in automated theorem proving, abstract algebra, and formal logic. The algorithm is particularly suited for shared-memory parallel computers, where it makes possible economies of space. Implementations of the algorithm in two application contexts are described and experimental results given.
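A sequential worklist formulation of the closure computation (for a single binary operation; an illustrative sketch, not the parallel algorithm of the paper) looks as follows in Python; a parallel version lets several workers draw pending elements from a shared pool.

    def closure(seed, op):
        # Closure of `seed` under the binary operation `op`: keep combining
        # known elements with newly discovered ones until nothing new appears.
        closed = set(seed)
        pending = list(seed)
        while pending:
            a = pending.pop()
            for b in list(closed):
                for c in (op(a, b), op(b, a)):
                    if c not in closed:
                        closed.add(c)
                        pending.append(c)
        return closed

    # example: closure of {2, 3} under multiplication modulo 20
    print(sorted(closure({2, 3}, lambda x, y: (x * y) % 20)))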
I. H. Sloan and J. N. Lyness, ``Lattice Rules: Projection Regularity and Unique Representations,'' Math. of Comp. 54, no. 190 (April 1990) 649-660 (also MCS-P75-0489). This paper introduces a unique characterization for lattice rules that are projection regular.
P. T. P. Tang, Some Software Implementations of the Functions Sine and Cosine, ANL-90/3 (April 1990). This document presents several software implementations of the elementary functions sin and cos designed to fit a large class of machines. Implementation details are provided. Also provided is a detailed error analysis that bounds the errors of these implementations, over the full range of input arguments, from 0.721 to 0.912 units in the last place. Tests performed on these codes give results that are consistent with the error bounds.
S. Winker, R. Overbeek, C. R. Woese, G. J. Olsen, and N. Pfluger, An Automated Procedure for Covariation-based Detection of RNA Structure, ANL-89/42 (December 1989). This paper summarizes investigations into the computational detection of secondary and tertiary structure of ribosomal RNA. The authors have developed a new automated procedure that not only identifies potential bondings of secondary and tertiary structure, but also provides the covariation evidence that supports the proposed bondings, and any counterevidence that can be detected in the known sequences. A small number of previously unknown bondings have been detected in individual RNA molecules through the use of the new procedure.
L. Wos, ``Applications of Automated Reasoning,'' Knowledge Engineering, Vol. II: Applications, ed. H. Adeli, McGraw-Hill, New York, 1990, pp. 56-83. This article focuses on how the field of automated reasoning was founded, why it was extended, what caused it to grow, and what successes have been realized by using automated reasoning programs.
L. Wos, S. Winker, W. McCune, R. Overbeek, E. Lusk, R. Stevens, and R. Butler, ``Automated Reasoning Contributes to Mathematics and Logic,'' Proc. 10th International Conference on Automated Deduction, in Lecture Notes in Artificial Intelligence, Vol. 449, ed. M. E. Stickel, Springer-Verlag, Berlin, 1990, pp. 485-499. This article presents results of research focusing on the use of the automated reasoning program OTTER to prove theorems from Robbins algebra, equivalential calculus, implicational calculus, combinatory logic, and finite semigroups. Included are answers to open questions and new shorter and less complex proofs to known theorems. To obtain these results, the authors used a paradigm that relies heavily on demodulation, subsumption, set of support, weighting, paramodulation, hyperresolution, and UR-resolution. Examples are given that show success when the paradigm is used, failure when it is not.
L. Wos, ``Meeting the Challenge of Fifty Years of Logic,'' JAR 6, no. 2 (1990) 213-232 (also MCS-P129-0290). This paper discusses how experiments with a new representation and a new use of the weighting strategy resulted in obtaining proofs of 13 theorems in logic. The author presents these theorems as challenging problems to test the power of a reasoning program or to evaluate the effectiveness of a new idea. In addition, the author discusses a possible approach to finding shorter proofs and presents new results.
L. Wos, ``The Problem of Choosing between Logic Programming and General-Purpose Automated Reasoning,'' JAR 6, no. 1 (1990) 77-78 (also MCS-P130-0290). This article focuses on finding criteria for correctly choosing between using logic programming and a more general automated reasoning approach to attack a given assignment. The problem proposed for research asks one to find criteria that classify problems as solvable with a well-focused algorithm or as requiring a more general search for new information. Included are suggestions for evaluating a proposed solution to this research problem.
L. Wos, ``The Problem of Finding a Mapping between Clause Representation and Natural-Deduction Representation,'' JAR 6, no. 2 (1990) 211-212 (also MCS-P133-0290). The focus of this paper is on the relationship between two approaches to the automation of reasoning: the paradigm based on the clause language and that based on natural deduction. The problem proposed for research asks one to find a mapping between the two approaches such that reasoning programs based on either approach perform identically on a specific assignment. For evaluating a proposed solution to this research problem, the author includes suggestions concerning possible test problems.
L. Wos, ``The Problem of Guaranteeing the Existence of a Complete Set of Reductions,'' JAR 5, no. 3 (1989) 399-401 (also MCS-P66-0389). This article focuses on finding criteria for guaranteeing the existence of a complete set of reductions. Included is a suggestion for evaluating a proposed solution to this research problem.

Preprints

C. H. Bischof and P. C. Hansen, ``Structure-Preserving and Rank-Revealing QR-Factorization,'' MCS-P100-0989. The rank-revealing QR factorization is a special QR factorization guaranteed to reveal the numerical rank of the matrix under consideration. Thus, it is useful in the numerical treatment of rank-deficient least-squares problems. This paper presents a new, efficient method for computing such a factorization, which seeks to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to safely capture the numerical rank. The method uses incremental condition estimation and restricted pivoting in a forward pass to compute an approximation to the numerical rank and null space. This pass can be tailored for the zero/nonzero structure of the problem at hand. In a second pass, the algorithm of Chan is used to refine the estimates for the numerical rank and null space. Numerical experiments show that the new algorithm compares favorably with existing algorithms, in terms of both fill-in and the number of floating-point operations and memory accesses required.
C. H. Bischof and G. M. Shroff, ``On Updating Signal Subspaces,'' MCS-P101-0989. This paper presents an algorithm to adaptively solve signal processing problems using the signal subspace approach. The noise subspace is estimated using a rank-revealing QR factorization instead of the more expensive singular value or eigenvalue decompositions. The authors show how to inexpensively update a rank-revealing triangular factorization of a matrix when new rows are added and old rows deleted. An incremental condition estimation technique is used to exploit the structure of past solutions during the updating process while maintaining the rank-revealing property of the factorization.
J. J. More', ``Gradient Projection Techniques for Large-Scale Optimization Problems,'' MCS-P102-0989. This paper briefly describes the impact of the identification properties of the gradient projection method on algorithms for the solution of large-scale optimization problems with bound constraints. In particular, the author describes recent results of More' and Toraldo that show that the gradient projection method can be combined with the conjugate gradient method to solve an important class of large-scale (number of variables between 5000 and 15000) bound-constrained quadratic programming problems in a few (fewer than 15) iterations.
A. K. Singh and R. Overbeek, ``Derivations of Efficient Parallel Programs: An Example from Genetic Sequence Analysis,'' MCS-P104-0989. Implementation issues such as synchronization of shared data, implementation of abstract data types, and scheduling of processes are usually not addressed in the formal derivation of parallel programs. Here the authors seek to redress the situation by considering these issues in the context of developing an efficient implementation of an actual parallel program. The computational problem selected was motivated by work in aligning sequences of genetic material. The authors first develop an algorithm in UNITY and then investigate the issues that arose in producing an efficient C implementation. Additionally, the authors present some theorems about program refinements and discuss the usefulness of the theorems in the context of refining the original UNITY program.
C. H. Bischof and J. J. Dongarra, ``A Project for Developing a Linear Algebra Library for High-Performance Computers,'' MCS-P105-0989. This paper summarizes the objectives of the LAPACK project, its use of the BLAS and block algorithms, documentation issues, and the programming language and style. The library is intended to be usable on all of the most powerful computers currently available for general-purpose scientific computing.
C. H. Bischof, J. G. Lewis, and D. J. Pierce, ``Incremental Condition Estimation for Sparse Matrices,'' MCS-P106-0989. Incremental condition estimation provides an estimate for the smallest singular value of a triangular matrix. In particular, it gives a running estimate of the smallest singular value of a triangular factor matrix as the factor is generated one column or row at a time. An incremental condition estimator for dense matrices was originally suggested by Bischof. In this paper the scheme is generalized to handle sparse triangular matrices, especially those that are factors of sparse matrices. Numerical experiments on a variety of matrices demonstrate the reliability of this scheme in estimating the smallest singular value. A partial description of its implementation in a sparse matrix factorization code further illustrates its practicality.
M. Garbey, ``Asymptotic Analysis of Singular Perturbation Problems Governed by a Conservation Law,'' MCS-P107-1089. This paper presents an asymptotic analysis of a quasi-linear parabolic-hyperbolic singular perturbation problem in one dimension. The main part of the analysis concerns the construction of the transition layers and boundary layers associated with the discontinuities of a conservation law. An example is given of an oil problem modeled by Le Fur.
G. H. Chisholm, J. Kljaich, B. T. Smith, and A. S. Wojcik, ``A Paradigm for Analyzing Independence in System Behavior by Using Formal Methods,'' MCS-P108-1089. This paper introduces a technique based on formal methods, which analyzes dependence by generating dependency lists. Each list represents all the components on which a replicated channel depends for proper operation. The analysis manipulates these lists to determine independence. The technique treats hardware, software, and the interface between them, with the result that the total system can be analyzed with respect to independence properties.
G. H. Chisholm, B. T. Smith, and A. S. Wojcik, ``Using Formal Methods to Develop a Paradigm for Software Design,'' MCS-P109-1089. This paper is a shortened version of MCS-P108.
R. L. Stevens, ``The Interesting Future of Supercomputing,'' MCS-P110-1189. This informal discussion of trends in supercomputing looks briefly at trends in graphics, programming languages, and computer architectures.
G. H. Chisholm, B. T. Smith, J. Kljaich, and A. S. Wojcik, ``Formal Analysis of Software in a Fault-Tolerant Environment,'' MCS-P111-1189. This paper describes the application of modeling and analysis techniques to software that is designed to execute on the Charles Stark Draper Laboratory Fault-Tolerant Processor (FTP). The software performs sensor validation of independent signals from the Experimental Breeder Reactor-II operated by Argonne. The authors describe an abstraction of the data flow in terms of the specification of a generic application program for the FTP. They prove the fault tolerance of the application program with respect to hardware and sensor failures, provided that the validation algorithms have a certain generic structure and functionality. Using a more detailed specification of the application software, they also demonstrate that this program satisfies the sufficient conditions developed for the generic program to claim system fault tolerance.
G. H. Chisholm, B. T. Smith, and A. S. Wojcik, ``An Automated Reasoning Problem Associated with Proving Claims about Programs Using Floyd-Hoare Inductive Assertion Methods,'' MCS-P112-1189. Proving claims about behavior of software is essential for the qualification of computer-based systems used in the control of nuclear reactors. Here the authors select one of the verification conditions for a C program that initializes an array to zero. They add assertions about the initial conditions and state of the program and about the expected behavior of the program in terms of its state. The modeling and specification technique is the inductive assertion technique of Floyd-Hoare. The program with assertions is then transformed by the source-to-source program transformation system TAMPR into a set of separate verification conditions to be proven by the automated reasoning system.
M. K. Kwong, P.-Y. Woo, and R. Wong, ``Using MAPLE for Symbolic Computations in Robotics,'' MCS-P113-1289. Programs are presented for deriving forward kinematics, Jacobians, and dynamic equations of robotic manipulators in symbolic closed forms by using the algebraic manipulation software MAPLE. Examples are given, and efficiency of the computations is discussed.
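The flavor of such symbolic derivations can be reproduced in any computer algebra system; the sketch below uses the Python package SymPy rather than MAPLE and derives the forward kinematics and Jacobian of a hypothetical planar two-link arm (the link lengths l1, l2 and joint angles theta1, theta2 are assumptions made for this example, not taken from the paper).

    import sympy as sp

    theta1, theta2, l1, l2 = sp.symbols('theta1 theta2 l1 l2')

    # forward kinematics: end-effector position of a planar two-link arm
    x = l1 * sp.cos(theta1) + l2 * sp.cos(theta1 + theta2)
    y = l1 * sp.sin(theta1) + l2 * sp.sin(theta1 + theta2)

    # Jacobian of the end-effector position with respect to the joint angles
    J = sp.Matrix([x, y]).jacobian([theta1, theta2])
    print(sp.simplify(J))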
M. Garbey, H. G. Kaper, G. K. Leaf, and B. J. Matkowsky, ``Using MAPLE for the Analysis of Bifurcation Phenomena in Condensed-Phase Surface Combustion,'' MCS-P114-1189. This article describes the use of the symbolic computation language MAPLE for the analysis of bifurcation phenomena in condensed-phase combustion. The physical problem concerns the structure and stability properties of a combustion front that propagates in the axial direction along the surface of a cylindrical solid fuel element. Experimental observations suggest that the front may propagate in a number of different ways; the objective of the investigation is to describe these different modes. The analysis involves the study of a set of nonlinear partial differential equations which describe the structure and evolution of the combustion front. Because the location of the front is unknown and must be found as part of the solution, the problem is a free boundary problem.
J. Dongarra, O. Brewer, S. Fineberg, and J. A. Kohl, ``A Tool to Aid in the Design, Implementation, and Understanding of Matrix Algorithms for Parallel Processors,'' MCS-P115-1189. This paper discusses a tool that aids in the design, development, and understanding of parallel algorithms for high-performance computers. The tool provides a vehicle for studying memory access patterns, different cache strategies, and the effects of multiprocessors on matrix algorithms in a Fortran setting. Such a tool puts the user in a better position to understand where performance problems may occur and enhances the likelihood of increasing the program's performance before actual execution on a high-performance computer.
H. G. Kaper and M. K. Kwong, ``Ground States of Semi-Linear Diffusion Equations,'' MCS-P116-1189. This article is concerned with the uniqueness of nontrivial nonnegative solutions of semilinear diffusion equations in radially symmetric domains.
M. K. Kwong and L. Zhang, ``Uniqueness of the Positive Solution of DELTA u + f(u) = 0,'' MCS-P117-1289. This paper presents an extension of the recent result of Kwong on the uniqueness of the positive radial solution of a semilinear elliptic equation. When reduced to the special case considered by Kwong, the proof is shorter.
M. K. Kwong, ``Uniqueness Results for Emden-Fowler Boundary Value Problems,'' MCS-P118-0190. This paper establishes new uniqueness results for boundary value problems of the superlinear Emden-Fowler type, with either a Dirichlet or Neumann condition at each endpoint. The first result extends a known criterion to nonlinear terms that may change sign. The proof uses the theory of differential inequalities, after changing the independent variable to the quantity u.
N. T. Karonis, ``Timing Parallel Programs That Use Message Passing,'' MCS-P119-0190. A timing methodology is presented that is capable of accurately determining the execution time of parallel programs. The methodology can also acquire timing data when the number of processes exceeds the number of physical CPUs. Through simulation, it also provides a means by which to study the behavior of a parallel program under an arbitrary hardware/software communications configuration.
D. Solow and R. S. Womersley, ``A Piecewise Linear Concave Minimization Algorithm for the Linear Complementarity Problem,'' MCS-P120-0190. This paper considers solving the linear complementarity problem by finding a global minimizer of a linearly constrained piecewise linear concave function. In contrast to Lemke's algorithm and the principal pivoting algorithms, the algorithm described here finds a feasible point for the linear constraints first and then works toward satisfying the complementarity condition. Once an extreme point of the feasible region is found, the algorithm moves from one extreme point to another while reducing the concave objective.
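For reference, the linear complementarity problem and one standard concave-minimization reformulation of it (given here for illustration; the particular objective used in the paper may differ) can be stated as:

    \[
    \text{find } z \ge 0 \ \text{such that} \ w = Mz + q \ge 0 \ \text{and} \ z^{T} w = 0,
    \]

which holds exactly when the piecewise linear concave function

    \[
    \theta(z) = \sum_{i=1}^{n} \min\bigl( z_i, \, (Mz + q)_i \bigr)
    \]

attains the value zero at a global minimizer over the polyhedron of points z with z >= 0 and Mz + q >= 0.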
A. Jindal, R. Overbeek, and W. C. Kabat, ``Exploitation of Parallel Processing for Implementing High-Performance Deduction Systems,'' MCS-P121-0190. This paper describes a scheme for parallelizing first-order logic deduction systems. This scheme has been used for parallelizing OTTER. The new program, PARROT-II, has attained real speedups in excess of 20 over the best results of current sequential deduction systems.
L. M. Ewerbring and F. T. Luk, ``The HK Singular Value Decomposition of Rank-Deficient Matrix Triplets,'' MCS-P125-0190 (to appear in proceedings, published by Springer-Verlag). This paper focuses on a simultaneous reduction of three matrices. The method extends earlier work by Ewerbring and Luk to include rank-deficient data. It is shown how, via an initial reduction, the problem becomes one of diagonalizing a product of three matrices. Three different algorithms are compared.
N. R. Nagarajan, A. S. Cullick, and A. Griewank, ``New Strategy for Phase Equilibrium and Critical Point Calculations by Thermodynamic Energy Analysis. Part I. Stability Analysis and Flash,'' MCS-P126-0190. The use of equations of state to compute phase behavior of petroleum reservoir fluids and their mixtures with injected fluids is well established in the oil industry. The first step prior to performing a two- or multiphase flash calculation for a reservoir fluid is to determine the stable phase condition of the mixture. The stability analysis presented here is based on work by Michelsen but uses a different set of primary variables, the component molar densities in place of mole numbers or mole fractions, to improve the numerical stability of the Newton iterative scheme. The new procedure is especially well suited to predict behavior in the critical region where other methods sometimes face convergence problems.
N. R. Nagarajan, A. S. Cullick, and A. Griewank, ``New Strategy for Phase Equilibrium and Critical Point Calculations by Thermodynamic Energy Analysis. Part II. Critical Point Calculations,'' MCS-P127-0190. In enhanced oil recovery processes, miscibility is developed through a multiple contact process via a critical point. Prediction of these critical points is important in modeling the phase behavior of the fluid mixtures for simulating these processes. Traditional equation-of-state prediction of the critical points involves complex computations and is often made difficult because of the extreme sensitivity of fluid properties to small changes in pressure or temperature in the critical region. The authors of this paper present a new strategy for critical point search based on the Gibbs criteria for criticality and thermodynamic energy analysis.
X. Yang, N. J. Zabusky, and I-L. Chern, ``Breakthrough via Dipolar-Vortex/Jet Formation in Shock-Accelerated Density-Stratified Layers,'' MCS-P128-0290. In this paper the authors observe and quantify convective ``breakthrough'' after an inclined planar light fluid layer is struck by a shock wave. The results of direct numerical simulations are interpreted, and a convective breakthrough time is quantified. Initially, vorticity of opposite sign is deposited on each interface of the layer. Breakthrough occurs when these vortex regions approach and bind into a dipolar vortex that moves away from the wall. Variations of the breakthrough time with the Mach number of the incoming shock wave are given.
J. V. Burke, ``A Robust Trust Region Method for Constrained Nonlinear Programming Problems,'' MCS-P131-0190. This paper presents a general framework for trust region algorithms for constrained problems that does not require strong global regularity and that allows very general constraints. The approach is modeled on the one given by Powell for convex composite optimization problems and is driven by linear subproblems that yield viable estimates for the value of an exact penalty parameter. These results are applied to the Wilson-Han-Powell SQP algorithm and Fletcher's algorithm. Local convergence results are also given.
A. Griewank, ``Direct Calculation of Newton Steps without Accumulating Jacobians,'' MCS-P132-0290. In computational practice, nonlinear vector functions whose roots must be calculated are often specified in the form of an evaluation program supplied by the user. This program usually generates a large number of intermediate quantities that are local variables and thus ordinarily not visible to the calling routine. By considering the intermediate quantities as additional independent variables, one obtains an extended nonlinear system, which should be attacked directly for several reasons. First, the extended Jacobian is extremely sparse and can be evaluated at little extra cost simultaneously with the original vector function. Second, any method for computing Newton steps on the original vector function is, in terms of complexity, equivalent to a scheme for solving a linear system in the extended Jacobian. The conditioning of this large sparse matrix reflects not only the stability of the original vector function but also the accuracy of the evaluation algorithm. This allows the design of realistic stopping criteria without user-supplied tolerances.
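A schematic way to see the extended system (notation introduced here for illustration): if the user's program evaluates F(x) through a vector of intermediate quantities v, say v = phi(x) followed by F(x) = psi(x, v), then one may instead apply Newton's method to

    \[
    G(x, v) \;=\; \begin{pmatrix} v - \varphi(x) \\ \psi(x, v) \end{pmatrix} \;=\; 0 .
    \]

The Jacobian of G is sparse because each intermediate depends on only a few predecessors, and when the intermediate equations are satisfied exactly at the current point, the x components of the Newton step for G coincide with the Newton step for F(x) = 0.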
I-Liang Chern, ``Multiple-Mode Diffusion Waves for Viscous Nonstrictly Hyperbolic Conservation Laws,'' MCS-P134-0290. This paper focuses on the large-time behavior of solutions of viscous conservation laws whose inviscid part is a nonstrictly hyperbolic system. The initial data considered here is a perturbation of a constant state. It is shown that the solutions converge to single-mode diffusion waves in directions of strictly hyperbolic fields, and to multiple-mode diffusion waves in directions of nonstrictly hyperbolic fields. The multiple-mode diffusion waves, which are the new elements here, are the self-similar solutions of the viscous conservation laws projected to the nonstrictly hyperbolic fields, with the nonlinear fluxes replaced by their quadratic parts.
D. Jacobs, A. Langen, and W. Winsborough, ``Multiple Specialization of Logic Programs with Run-Time Tests,'' MCS-P135-0290. This paper presents a framework for multiple specialization of logic programs that incorporates run-time testing. This framework supports specialization based on a compiler-generated ``wish list'' of requirements that enable useful optimizations. In addition, entry mode declarations are used to restrict the class of reachable activations. Our goal is to generate code containing tests at outer levels of the call tree that guard high-performance specialized procedures that are likely to be called.
W. McCune, ``Skolem Functions and Equality in Automated Deduction,'' MCS-P136-0290. This paper presents a strategy for restricting the application of the inference rule paramodulation. The strategy applies to problems in first-order logic with equality and is designed to prevent paramodulation into subterms of Skolem expressions. A weak completeness result is presented (the functional reflexive axioms are assumed). Experimental results on problems in set theory, combinatory logic, Tarski geometry, and algebra show that the strategy can be useful when searching for refutations and when applying Knuth-Bendix completion. The emphasis of the paper is on the effectiveness of the strategy rather than on its completeness.
A. Mulkers, W. Winsborough, and M. Bruynooghe, ``Analysis of Shared Data Structures for Compile-Time Garbage Collection in Logic Programs,'' MCS-P137-0290. One of the central problems in program analysis for compile-time garbage collection is detecting the sharing of term substructure that can occur during program execution. The authors present an abstract domain for representing possibly shared structures and an abstract unification operation based on this domain. When supplied to an abstract interpretation framework, this domain induces a powerful analysis of shared structures.
I-Liang Chern, ``Large-Time Behavior of Solutions of Lax-Friedrichs Finite Difference Equations for Hyperbolic Systems of Conservation Laws,'' MCS-P138-0290. This paper focuses on the large-time behavior of discrete solutions of the Lax-Friedrichs finite-difference equations for hyperbolic systems of conservation laws. The initial data considered here is small and tends to a constant state at x = +- infinity. The author shows that the solutions tend to discrete diffusion waves which can be constructed from the self-similar solutions of the heat equation and the Burgers equation through an averaging process.
J. J. More', ``On the Performance of Algorithms for Large-Scale Bound-Constrained Problems,'' MCS-P140-0290. The paper discusses issues that affect the performance of algorithms for the solution of large-scale bound-constrained problems on parallel computers. The discussion centers on the solution of the elastic-plastic torsion problem and the journal bearing problem. These two problems are model large-scale quadratic programming problems that arise as finite element approximations to elliptic variational inequalities. Performance issues are illustrated with the GPCG algorithm of More' and Toraldo. This algorithm uses the gradient projection method to select an active set and the conjugate gradient method to explore the active set defined by the current iterate. Significant improvements in performance are obtained with the GPCG algorithm by using partitioning techniques in a parallel environment. These partitioning techniques lead to almost linear speedups on function-gradient evaluations and Hessian-vector products for partially separable functions.
G.-Q. Chen, ``Limit Behavior of Approximate Solutions to Conservation Laws,'' MCS-P141-0390. This paper focuses on the limit behavior of approximate solutions to hyperbolic systems of conservation laws. Several mathematical compactness theories and their role are described. Some recent and ongoing developments are reviewed and analyzed.
H. G. Kaper, M. Knaap, and M. K. Kwong, ``Existence Theorems for Second-Order Boundary Value Problems,'' MCS-P142-0390. This article is concerned with the existence of positive solutions of boundary value problems for nonlinear second-order differential equations that arise, for example, in the study of reaction-diffusion equations in annular regions.
P. T. P. Tang, ``Table-driven Implementation of the Expm1 Function in IEEE Floating-Point Arithmetic,'' MCS-P144-0390. Algorithms and implementation details for the Expm1 function in both single and double precision of IEEE 754 arithmetic are presented. With a table of moderate size, the implementations need only working-precision arithmetic and are provably accurate to within 0.58 ulp.
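A brief Python illustration of why a dedicated expm1 routine is worth providing (math.expm1 plays the role of such a routine here; the 0.58-ulp bound above refers to Tang's implementations, not to this snippet):

    import math

    x = 1e-12
    naive = math.exp(x) - 1.0   # catastrophic cancellation: only a few digits survive
    accurate = math.expm1(x)    # computed directly, accurate to full working precision
    print(naive, accurate)      # the results differ in about the fifth significant digit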
L. Wos, ``The Problem of Finding a Semantic Strategy for Focusing Inference Rules,'' MCS-P145-0390. The problem posed for research asks one to find semantic criteria for selecting the clauses needed to complete the application of an inference rule. Suggestions for test problems are included.
H. Zhang, ``Large-Time Behavior of the Maximal Solution of [the Singular Diffusion] Equation,'' MCS-P146-0490. The singular diffusion equation arises in many areas of application. This paper considers the large-time behavior of the solution of a more general equation. It is shown that the solution of the Cauchy problem converges to its corresponding similarity solution as t -> infinity. Moreover, an iteration process is used to obtain the optimal rate of convergence.
W. D. Gropp and D. E. Keyes, ``Parallel Performance of Domain-decomposed Preconditioned Krylov Methods for PDEs with Adaptive Refinement,'' MCS-P147-0490. Numerical experiments are presented on both shared- and distributed-memory computers for convection-diffusion problems at modest Peclet (or Reynolds) numbers, without recirculation. Because of the development of boundary layers, these problems benefit from local mesh refinement, which is straightforward to accommodate within the domain decomposition framework in a locally uniform sense, but which introduces load balancing as a further consideration in choosing the granularity of the preconditioner. In spite of the tradeoffs, cumulative speedups are obtainable out to at least medium-scale granularity (up to 64 processors in the tests reported).
C. H. Bischof, ``A Block QR Factorization Algorithm for Rank-Deficient Matrices,'' MCS-P148-0490. This paper presents a new algorithm for computing the QR factorization of a rank-deficient matrix that is well suited for high-performance machines. The traditional QR factorization algorithm with column pivoting is not well suited for such environments, since it precludes the use of matrix-matrix operations. The author suggests a restricted pivoting strategy based on incremental condition estimation which allows one to formulate a block QR factorization algorithm where the bulk of the work is in matrix-matrix operations. Performance results on the CRAY-2, CRAY X-MP, and CRAY Y-MP show that the new algorithm performs significantly better than the traditional scheme and can more than halve the cost of computing the QR factorization.
H. G. Kaper and M. K. Kwong, ``A Free Boundary Problem Arising in Plasma Physics,'' MCS-P149-0490. This paper is concerned with the existence and uniqueness of solutions of free boundary problems for certain differential equations.
C. Bischof, ``Fundamental Linear Algebra Computations on High-Performance Computers,'' MCS-P150-0490. This paper presents an overview of the LAPACK library, a portable, public-domain library to solve the most common linear algebra problems. This library provides a uniformly designed set of subroutines for solving systems of simultaneous linear equations, least-squares problems, and eigenvalue problems for dense and banded matrices. We elaborate on the design methodologies incorporated to make the LAPACK codes efficient on today's high-performance architectures. In particular, we discuss the use of block algorithms and the reliance on the Basic Linear Algebra Subprograms (BLAS). We present performance results that show the suitability of the LAPACK approach for vector uniprocessors and shared-memory multiprocessors. We also discuss some issues that have to be addressed in implementing such a library on other types of parallel architectures. These issues involve efficient implementations of the BLAS on parallel machines, the proper choice of blocking parameters, and the use of parallelism outside of the BLAS.
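As a concrete illustration of the block-algorithm idea (a sketch only, not LAPACK code), the following Python routine performs a blocked Cholesky factorization: the diagonal block receives a small amount of unblocked work, while the panel solve and the trailing update, where the bulk of the arithmetic lies, are matrix-matrix operations of the kind handled by the Level-3 BLAS.

    import numpy as np

    def blocked_cholesky(A, nb=64):
        # returns lower triangular L with A = L @ L.T (A symmetric positive definite)
        A = A.astype(float).copy()
        n = A.shape[0]
        for k in range(0, n, nb):
            e = min(k + nb, n)
            # unblocked factorization of the diagonal block
            A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
            if e < n:
                # triangular solve for the panel (TRSM-like): L21 = A21 * L11^{-T}
                A[e:, k:e] = np.linalg.solve(A[k:e, k:e], A[e:, k:e].T).T
                # symmetric rank-nb update of the trailing matrix (SYRK/GEMM-like)
                A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
        return np.tril(A)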
C. H. Bischof and P. G. Lacroute, ``An Adaptive Blocking Strategy for Matrix Factorizations,'' MCS-P151-0490. This paper presents an adaptive blocking methodology for determining in a systematic manner an optimal blocking strategy for a uniprocessor machine. The technique is demonstrated on a block QR factorization routine. After generating timing models for the high-level kernels of the algorithm, the authors show how the optimal blocking strategy can be formulated as a recurrence relation that can be solved inexpensively with a dynamic programming technique. Experiments on one processor of a CRAY-2 show that this strategy is as good as any fixed-width blocking strategy. Thus, while the optimum fixed-width blocking strategy cannot be determined unless the problem is run several times, adaptive blocking provides optimum performance in the very first run.
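The recurrence the abstract mentions can be illustrated with a short dynamic-programming sketch in Python (the cost model below is a hypothetical stand-in for the timing models derived in the paper): with best[j] denoting the minimal modeled time for processing the first j columns, one minimizes over the width of the last block.

    def optimal_blocking(n, widths, cost):
        # cost(start, b): modeled time to process a block of b columns beginning at column `start`
        best = [0.0] + [float("inf")] * n        # best[j]: minimal modeled time for the first j columns
        choice = [0] * (n + 1)
        for j in range(1, n + 1):
            for b in (w for w in widths if w <= j):
                t = best[j - b] + cost(j - b, b)
                if t < best[j]:
                    best[j], choice[j] = t, b
        blocks, j = [], n                        # recover the optimal sequence of block widths
        while j > 0:
            blocks.append(choice[j])
            j -= choice[j]
        return best[n], blocks[::-1]

    # toy cost model: a fixed per-block overhead plus work proportional to the trailing matrix
    toy_cost = lambda start, b: 5.0 + b * (100 - start)
    print(optimal_blocking(100, widths=range(1, 33), cost=toy_cost))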
D. C. Sorensen and P. T. P. Tang, ``On the Orthogonality of Eigenvectors Computed by Divide-and-Conquer Techniques,'' MCS-P152-0490. A detailed analysis of the accuracy issues in calculating the eigensystems of rank-1 perturbed diagonal systems is presented. Such calculations are the core of the divide-and-conquer technique. Here the authors prove that the computed eigenvectors are guaranteed to be orthogonal provided the secular equation is evaluated in a precision that doubles the working one. An efficient algorithm that simulates such ``doubled precision'' in working precision is also provided. Numerical results that confirm the analysis and implementation are presented.
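A common way to obtain such doubled precision from working-precision operations is through error-free transformations; the Python sketch below shows the classical building blocks (Knuth's two-sum and Dekker/Veltkamp splitting for products) as a general illustration only, not the authors' specific algorithm. The secular equation can then be accumulated as an unevaluated sum of a leading part and a correction, retaining roughly twice the working precision.

    def two_sum(a, b):
        # returns (s, e) with s = fl(a + b) and a + b = s + e exactly
        s = a + b
        t = s - a
        e = (a - (s - t)) + (b - t)
        return s, e

    def split(a, factor=2**27 + 1):
        # Veltkamp splitting of an IEEE double into two non-overlapping halves
        c = factor * a
        hi = c - (c - a)
        return hi, a - hi

    def two_prod(a, b):
        # returns (p, e) with p = fl(a * b) and a * b = p + e exactly
        p = a * b
        a_hi, a_lo = split(a)
        b_hi, b_lo = split(b)
        e = ((a_hi * b_hi - p) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo
        return p, e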
G.-Q. Chen, ``The Theory of Compensated Compactness and the System of Isentropic Gas Dynamics,'' MCS-P154-0590. Properties of the system of isentropic gas dynamics, especially the behavior of entropy, are analyzed and documented. The Lax-Friedrichs approximations, the Godunov approximations, and the viscosity approximations to the Cauchy problem for the system are constructed; and a compactness framework for these approximations is established with the aid of the theory of compensated compactness. Convergence for the approximations is shown.
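For reference, the isentropic gas dynamics system in Eulerian coordinates is usually written as follows (a standard form; the normalization of the pressure law in the paper may differ):

    \rho_t + (\rho u)_x = 0,
    (\rho u)_t + \bigl(\rho u^2 + p(\rho)\bigr)_x = 0, \qquad p(\rho) = \kappa \rho^{\gamma}, \quad \gamma > 1,

where \rho is the density, u the velocity, and \gamma the adiabatic exponent.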
G.-Q. Chen, ``Global Solutions to the Compressible Navier-Stokes Equations for a Reacting Mixture,'' MCS-P155-0590. In this note the author establishes existence theorems for global generalized solutions to the compressible Navier-Stokes equations for a reacting mixture, which describe dynamic combustion. Equivalence of the Navier-Stokes equations in Euler coordinates and Lagrangian coordinates for the generalized solutions is verified. The asymptotic behavior of solutions with impermeable, thermally insulated boundaries and with prescribed boundary data is identified and proved.
M. K. Kwong and Yi Li, ``Uniqueness of Radial Solutions of Semilinear Elliptic Equations,'' MCS-P156-0590. E. Yanagida has proved that the classical Matukuma equation with a given exponent has only one finite mass solution. In this paper the authors show how similar ideas can be exploited to obtain uniqueness results for other classes of equations as well as for Matukuma equations with more general coefficients. The key ingredients of the method are energy functions and suitable transformations. Also studied are general boundary conditions, using an extension of a recent result by Bandle and Kwong. Yanagida's proof does not extend to solutions of Matukuma's equation satisfying other boundary conditions.
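For orientation, the classical Matukuma equation is usually written (in its standard normalization; the paper treats more general coefficients) as

    \Delta u + \frac{u^{p}}{1 + |x|^{2}} = 0, \qquad u > 0, \quad x \in \mathbb{R}^{3},

with a finite mass solution being one for which the density u^{p}/(1 + |x|^{2}) has finite integral over \mathbb{R}^{3}.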
Per Christian Hansen, ``Analysis of Discrete Ill-Posed Problems by Means of the L-Curve,'' MCS-P157-0690. When discrete ill-posed problems are analyzed and solved by various numerical regularization techniques, a convenient way to display information about the regularized solution is to plot the norm or seminorm of the solution vs. the norm of the residual vector. In particular, the graph associated with Tikhonov regularization plays a central role. This paper advocates the use of this graph in the numerical treatment of discrete ill-posed problems. The graph is characterized quantitatively, and important relations between regularized solutions and the graph are derived. The author demonstrates that several methods for choosing the regularization parameter are related to locating a characteristic L-shaped ``corner'' of the graph.
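Stated here for orientation (the paper covers general regularization methods as well), the Tikhonov-regularized solution is

    x_{\lambda} = \arg\min_{x} \Bigl\{ \|A x - b\|_{2}^{2} + \lambda^{2} \|L x\|_{2}^{2} \Bigr\},

and the graph in question is traced by plotting \|L x_{\lambda}\|_{2} against \|A x_{\lambda} - b\|_{2}, usually on a log-log scale, as the regularization parameter \lambda varies; the characteristic corner marks the transition between solutions dominated by regularization error and solutions dominated by propagated data error.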
M. T. Jones and M. L. Patrick, ``LANZ: Software for Solving the Large Sparse Symmetric Generalized Eigenproblem,'' MCS-P158-0690. This paper describes the package LANZ for solving the large sparse symmetric generalized eigenproblem. The package has been tested on four different architectures: Convex 200, CRAY Y-MP, Sun-3, and Sun-4. The package uses a version of Lanczos' method and is based on recent research into solving the generalized eigenproblem.
J. M. Boyle and T. J. Harmer, ``A Practical Functional Program for the CRAY X-MP,'' MCS-P159-0690. This paper demonstrates how one can have all the advantages of functional programming (correctness, clarity, simplicity, and flexibility) without any sacrifice in performance, even for a scientifically significant computation on a supercomputer. In the example chosen, pure Lisp is used to express the functional specification for a PDE solver. From this specification, an executable Fortran program is produced automatically by means of automated program transformations. The generated program vectorizes on the CRAY X-MP and runs about 13% faster than a hand-written Fortran program for the same problem.
R. Veroff and L. Wos, ``The Linked Inference Principle, I: The Formal Treatment,'' MCS-P160-0690. This paper presents a detailed, formal treatment of the linked inference principle. The principle is applied to obtain the abstract formulations of various linked inference rules. Included among such rules are linked UR-resolution, linked hyperresolution, and linked binary resolution, each of which generalizes the corresponding standard and well-known inference rule. In addition to the formalism, the paper focuses on the motivation and objectives for the formulation of linked inference rules. Included are experimental results and numerous examples.
G.-Q. Chen and A. Rustichini, ``Global Solutions to a System of Conservation Laws and Nonzero Sum Dynamic Games,'' MCS-P161-0790. This paper uses techniques for solving systems of conservation laws to solve a problem in game theory. At the same time, it provides an example of an interesting new application for the theory of systems of conservation laws. Specifically, the paper extends the idea of viscosity solutions to nonzero sum differential games. The extensions provide a sound way of defining the concept of value for a nonzero sum game. They also provide a criterion for selecting among the possible equilibria of the deterministic game.
S. J. Wright, ``An Interior-Point Algorithm for Linearly Constrained Optimization,'' MCS-P162-0790. The author describes an algorithm for optimization of a smooth function subject to general linear constraints. An algorithm of the gradient projection class is used, with the important feature that the projection at each iteration is performed by a primal-dual interior-point method for convex quadratic programming. Convergence properties can be maintained even if the projection is done inexactly in a well-defined way. Higher-order derivative information on the manifold defined by the apparently active constraints can be used to increase the rate of local convergence.
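The projection subproblem at the core of such a method can be stated generically (this is a standard formulation for orientation, not necessarily the exact subproblem of the paper): at the iterate x_k with step length \alpha_k one solves the convex quadratic program

    \min_{x}\ \tfrac{1}{2}\,\bigl\|x - \bigl(x_k - \alpha_k \nabla f(x_k)\bigr)\bigr\|_{2}^{2} \qquad \text{subject to} \qquad A x \le b,

by a primal-dual interior-point method, either exactly or to a prescribed inexactness.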
I. Foster and R. Overbeek, ``Bilingual Parallel Programming,'' MCS-P163-0790. This paper presents an approach to parallel programming based on bilingual programming. The key is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation--expressiveness, elegance, and conciseness--to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
I. Foster, C. Kesselman, and S. Taylor, ``Concurrency: Simple Concepts and Powerful Tools,'' MCS-P164-0790. This paper describes programming language concepts that have proved useful in applying stepwise refinement to parallel programs. The concepts allow decisions concerning program structure to be delayed until late in the design process. This capability permits rapid experimentation with alternative structures and leads to both portable and scalable code. The concepts form a sufficient basis for the construction of powerful programming tools. The concepts and the tools have been applied successfully in a wide variety of applications and are incorporated in the commercial concurrent programming system Strand.
D. G. Szyld and M. T. Jones, ``Two-Stage and Multi-Splitting Methods for the Parallel Solution of Linear Systems,'' MCS-P165-0790. A two-stage multi-splitting method for the parallel solution of linear systems is presented; it reduces to a two-stage method or to a multi-splitting method in specific cases. Conditions for convergence are given. In the case of a multi-splitting method related to block Jacobi, the method is equivalent to a two-stage method with only one inner iteration per outer iteration. A fixed number p of iterations of this method is compared with a two-stage method with p inner iterations. The asymptotic rate of convergence of the first method is faster, but--depending on the structure of the matrix and the parallel architecture--it can take more time to converge. Numerical experiments are presented for illustration.
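In standard terms (the paper's particular combination and weighting may differ), an outer splitting A = M - N is paired with an inner splitting M = F - G, and each outer step approximates the solution of M y = N x^{(k)} + b by s inner iterations

    y^{(j+1)} = F^{-1}\bigl(G\, y^{(j)} + N x^{(k)} + b\bigr), \qquad j = 0, \dots, s - 1,

while a multi-splitting uses several splittings A = M_{\ell} - N_{\ell} with nonnegative diagonal weights E_{\ell}, \sum_{\ell} E_{\ell} = I, and combines the partial results as

    x^{(k+1)} = \sum_{\ell} E_{\ell}\, M_{\ell}^{-1}\bigl(N_{\ell}\, x^{(k)} + b\bigr).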
G. M. Shroff and C. H. Bischof, ``Adaptive Condition Estimation for Rank-One Updates of QR Factorizations,'' MCS-P166-0790. In this paper the authors consider general (i.e., nonsymmetric) matrices undergoing rank-one changes and develop an adaptive condition estimation algorithm GRACE to monitor the condition number of the matrices during the update process. The algorithm requires only O(n) overhead beyond the cost of updating the QR factorization. Potential numerical difficulties of the algorithm are analyzed, and modifications are presented to overcome these difficulties. The paper includes experimental results that demonstrate that GRACE works well in practice.
C. H. Bischof, C.-T. Pan, and P. T. P. Tang, ``Stable Cholesky Up- and Downdating Algorithms for Systolic and SIMD Architectures,'' MCS-P167-0790. This paper presents a stable algorithm for maintaining Cholesky factors of symmetric positive definite matrices under arbitrary rank-one changes. The algorithm is well suited for implementation on systolic arrays and SIMD architectures. By exploiting the similarity between the downdating algorithm recently suggested by Pan and the updating algorithm suggested by Carlson, one can derive an algorithm where up- and downdates can be pipelined. Implementation results on the 1024-processor AMT DAP 510 emphasize the simplicity and practicality of the proposed scheme.
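As background, the classical plane-rotation update, sketched below in Python for illustration only (the paper's pipelined scheme and its mapping onto systolic and SIMD hardware are not reproduced here), modifies an upper triangular factor R with A = R'R so that the result R1 satisfies R1'R1 = A + z z'; downdating, the numerically delicate direction, replaces the circular rotations with hyperbolic ones.

    import numpy as np

    def cholesky_update(R, z):
        # given upper triangular R with A = R.T @ R, return R1 with R1.T @ R1 = A + np.outer(z, z)
        R, z = R.astype(float).copy(), z.astype(float).copy()
        n = R.shape[0]
        for k in range(n):
            # Givens rotation in the (row k, z) plane that annihilates z[k]
            r = np.hypot(R[k, k], z[k])
            c, s = R[k, k] / r, z[k] / r
            R[k, k] = r
            t = R[k, k+1:].copy()
            R[k, k+1:] = c * t + s * z[k+1:]   # rotate the remainder of row k
            z[k+1:] = -s * t + c * z[k+1:]     # and the corresponding part of z
        return R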
S. Winker, R. Overbeek, C. R. Woese, G. J. Olsen, and N. Pfluger, ``Structure Detection through Automated Covariance Search,'' MCS-P168-0890. This paper summarizes the authors' investigations into the computational detection of secondary and tertiary structure of ribosomal RNA. They have developed a new automated procedure that not only identifies potential secondary and tertiary structural interactions, but also provides the covariation evidence that supports the proposed bondings, and any counterevidence that can be detected in the known sequences. A small number of previously unknown higher-order structural features have been detected in individual RNA molecules (16S rRNA and 7S RNA) through the use of this automated procedure.
C. R. Woese, S. Winker, and R. R. Gutell, ``The Architecture of Ribosomal RNA: Constraints on the Composition of Tetra-loops,'' MCS-P169-0890. The four-base loops that cap many double helical structures in ribosomal RNA (the so-called ``tetra-loops'') exhibit compositions ranging from highly invariant to highly variable, depending upon their location in the molecule. However, in the vast majority of these cases the composition of a tetra-loop is independent of its location and conforms to one of three general motifs: GNRA, UNCG, and (more rarely) CUUG. For the most frequently varying of the 16S rRNA tetra-loops, that at position 83 (E. coli numbering), the three sequences CUUG, UUCG, and GCAA account for almost all the examples encountered; and each of them has independently arisen at least a dozen times. The closing base pair of tetra-loop hairpins reflects the loop composition, tending to be C:G for UUCG loops and G:C for CUUG loops.
M. Garbey, H. G. Kaper, and M. K. Kwong, ``Symbolic Manipulation Software and the Study of Differential Equations,'' MCS-P170-0890. In this note, the authors describe some of their experiences with symbolic manipulation software for the analysis of differential equations. They give several examples from their work on semilinear diffusion equations and from a bifurcation analysis of a nonlinear problem in combustion.
A. W. Bojanczyk, M. Ewerbring, F. T. Luk, and P. Van Dooren, ``An Accurate Product SVD Algorithm,'' MCS-P171-0890. The authors propose a new algorithm for computing a singular value decomposition of a product of three matrices. The algorithm is numerically desirable in that all relevant residual elements are numerically small.