TECHNICAL HIGHLIGHTS

The Mathematical and Computational Sciences Division (MCSD) provides technical leadership within NIST in modern analytical and computational methods for solving mathematical problems of interest to American industry. This mission is discharged through a program of advanced research in selected areas of applied and computational mathematics and collaboration with technical experts in other NIST divisions, industry, and academia. The scope includes the development and analysis of theoretical descriptions of phenomena (mathematical modeling); the design and analysis of requisite computational methods and experiments; the transformation of these methods into efficient numerical algorithms for high performance computers; the implementation of these methods in high-quality mathematical software; and the distribution of this software to potential clients, both within NIST and to the external community.

The work of the division can be grouped into three broad areas: testing and evaluation methodology, mathematical modeling, and tools for high performance computing. These activities are described in more detail below. First are some highlights from this year's work.

B. Alpert (MCSD) and colleagues M. Francis (EEEL) and R. Wittmann (EEEL) won a Department of Commerce Bronze Medal for their work in developing an algorithm for the processing of antenna measurements corrupted by probe position errors. The method exploits position information available during the measurement procedure to compute far fields as accurately as when no position errors are present, and at a computational cost which is acceptable even for electrically very large antennas. The interpretation of near-field antenna measurements, which requires transformation to the far field, is typically accomplished with the fast Fourier transform (FFT). When the measurement positions deviate from an ideal rectangular grid, however, the FFT is not applicable without modification. The new algorithm employs a combination of a recently-developed unequally spaced FFT, interpolation, and the conjugate gradient method to transform accurately to the far field at a cost proportional to N log N operations, where N is the number of measurements (typically between ten thousand and a million). The method will be used for measurements at higher frequencies and those taken on mobile platforms, where tight tolerances are difficult to maintain. The algorithm has been implemented and the software has been distributed to antenna calibration laboratories in government and industry.
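
To illustrate the structure of such a computation, the following sketch recovers regular-grid Fourier coefficients from irregularly positioned samples by solving the normal equations with a conjugate gradient iteration. A small dense nonuniform DFT matrix stands in for the unequally spaced FFT, and the problem sizes and random probe positions are assumptions made only for the example; this is not the antenna processing code itself.

    # Illustrative 1-D analogue of recovering regular-grid Fourier coefficients
    # from measurements taken at irregular positions.  The dense nonuniform DFT
    # matrix below stands in for the unequally spaced FFT; the sizes and the
    # random probe-position model are assumptions made for this example only.
    import numpy as np
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(0)
    K = 32                      # number of Fourier modes (far-field coefficients)
    M = 200                     # number of (irregular) measurement positions
    x = np.sort(rng.uniform(0.0, 1.0, M))       # perturbed sample positions
    k = np.arange(-K // 2, K // 2)

    A = np.exp(2j * np.pi * np.outer(x, k))     # nonuniform DFT matrix
    c_true = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    y = A @ c_true                              # simulated near-field samples

    # Solve the normal equations A^H A c = A^H y by conjugate gradients.
    AhA = A.conj().T @ A
    Ahy = A.conj().T @ y
    c_est, info = cg(AhA, Ahy)

    print("CG converged:", info == 0,
          " max coefficient error:", np.max(np.abs(c_est - c_true)))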

Roldan Pozo used funding from his 1996 Presidential Early Career Award for Scientists and Engineers to develop JazzNet, a dedicated cluster of PCs for use in scientific computing applications. The goal was to build an inexpensive "personal supercomputer" using off-the-shelf components, capable of achieving more than 1 Gflop (one billion floating-point operations per second) for under $30,000. Initial performance studies indicate that this milestone has been surpassed. The current system is built from five Intel Pentium (P6) computers, each of which can deliver up to 200 Mflops (200 million floating-point operations per second) and 600 MIPS (600 million instructions per second). The processors are connected by a dedicated 100BaseT Ethernet network capable of moving data at 5 megabytes per second (10 times the rate of conventional Ethernet). The system runs Linux 2.0 and has been outfitted with parallel programming libraries and packages, including PVM, XPVM, MPI, Posix Threads, and High Performance Fortran. A second phase of the project will integrate an experimental Gigabit Myrinet communication network to study the effects of increased network bandwidth on the performance of computational science applications. We expect such systems to become increasingly common at NIST and other scientific computing sites in the future. JazzNet was featured in a variety of external publications, including Government Computer News, the Montgomery Business Gazette, and New Technology Week, sparking wide interest and many queries from industries such as Gillette Corp., Dean Witter-Reynolds and Sunshine Medical Electronics.
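
The sketch below indicates the style of message-passing program such a cluster is designed to run: each process computes a partial sum of an array and the results are combined. It uses mpi4py, a Python binding to MPI, purely for illustration; this is not one of the packages installed on JazzNet, and the array size and process count are arbitrary choices for the example.

    # Minimal message-passing sketch of the style of program a PC cluster runs:
    # each process sums its share of the integers 0..n-1 and the partial sums
    # are combined with an all-reduce.  Uses mpi4py for illustration only.
    # Run with, e.g.:  mpiexec -n 4 python partial_sum.py   (file name is arbitrary)
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 1_000_000
    chunk = np.arange(rank, n, size, dtype=np.float64)   # this process's share
    local = chunk.sum()
    total = comm.allreduce(local, op=MPI.SUM)             # combine partial sums

    if rank == 0:
        print("sum of 0..n-1 =", total, " expected:", n * (n - 1) / 2.0)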

Overall, the staff of the division has continued its high level of productivity and professional activity. Among other indicators, we produced 56 refereed publications, gave 72 talks (37 of them invited), served on the editorial boards of five journals, served as Editor-in-Chief of one journal, participated on five review panels, refereed for numerous journals and funding agencies, and obtained one patent. Some of the details are provided elsewhere in this report.

Testing and Evaluation Methodology

We have made significant new efforts on the development of testing and evaluation methodology and infrastructural services for computational science. Included in this work is the design and analysis of testing methodology, evaluation criteria, reference data, and reference algorithms. Customers for this work include industrial developers of software products for scientific computing, as well as the computational science research communities in industry, government and academia.

One such service, the Matrix Market, provides online access to a large collection of test data for use in comparative studies of algorithms and software for numerical linear algebra. This work, done in collaboration with Boeing, features some 500 large sparse matrices from a variety of applications, as well as matrix generation tools and services. In its second year of operation, the Matrix Market has already seen visitors from more than 7,500 distinct Internet hosts and has distributed more than 4 Gbytes of matrix data. In a related effort, we are working with the BLAS Technical Forum on the development of community standards for basic linear algebra software components. The original BLAS were quite successful in promoting portability and high performance of core linear algebra kernels. The Forum brings together researchers in government labs and academia with computer hardware and software vendors such as Cray, HP/Convex, NEC, Intel, Tera and NAG to work on the extension of the BLAS concept in light of modern software, language and hardware developments. As leader of the Sparse BLAS Subcommittee, we have developed a working proposal for the Sparse BLAS, as well as reference implementations in C and Fortran.
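
As a simple illustration of the kind of kernel involved, the sketch below reads a matrix in Matrix Market format and forms a sparse matrix-vector product. The file name is a placeholder for any matrix obtained from the service, and SciPy is used only for illustration; it is unrelated to the reference C and Fortran implementations mentioned above.

    # Reading a Matrix Market file and forming a sparse matrix-vector product,
    # the kind of kernel the Sparse BLAS standardizes.  The file name below is
    # a placeholder for any matrix downloaded from the Matrix Market.
    import numpy as np
    from scipy.io import mmread
    from scipy.sparse import csr_matrix

    A = csr_matrix(mmread("example.mtx"))   # placeholder Matrix Market file
    x = np.ones(A.shape[1])
    y = A @ x                                # sparse matrix-vector multiply, y = A x
    print("nonzeros:", A.nnz, " ||A x||_2 =", np.linalg.norm(y))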

This year we inaugurated work on the NIST Digital Library of Mathematical Functions (DLMF), a joint project of ITL, the NIST Physics Lab and the NIST Standard Reference Data Program. The DLMF is envisioned as a modern replacement for the NBS Handbook of Mathematical Functions, which was first issued in 1964. More than 150,000 copies of the Handbook have been sold by the Government Printing Office and several commercial publishers (it remains available today from these sources). The Handbook contains technical information, such as formulas, graphs and tables, on a variety of mathematical functions of widespread use in the sciences and engineering. The new DLMF would revise and expand this core data and would make use of advanced communications and computational resources to disseminate the information in ways not possible using static print media. Examples include formulas downloadable into symbolic systems, dynamic graphics, reference algorithms and software, tables generated on demand, and application modules tailored to specific domains such as quantum physics. Feedback from the computational science community on this project has been overwhelmingly favorable. In July 1997 we held an invitational workshop for well-known experts in special functions and their applications to begin planning for the project. Work is underway on the development of reference algorithms for certain special functions, as well as on a Web-based testing service for mathematical functions to support the project. Substantial external funding is now being sought to permit the participation of outside technical experts in the many subfields of mathematical functions.
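
The following toy example suggests what "tables generated on demand" could look like: a special function is tabulated over a user-chosen grid rather than looked up in print. The choice of the Bessel function J0, the grid, and the use of SciPy are arbitrary assumptions for the example; this is not DLMF software.

    # A toy version of "tables generated on demand": tabulate a special function
    # over a user-chosen grid instead of consulting a printed table.  The Bessel
    # function J_0 and the grid are arbitrary choices for this illustration.
    import numpy as np
    from scipy.special import jv

    x = np.linspace(0.0, 10.0, 11)            # grid requested by the user
    table = np.column_stack((x, jv(0, x)))    # columns: x, J_0(x)
    for xi, ji in table:
        print(f"{xi:6.2f}  {ji: .10f}")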

There is a large demand for micromagnetic modeling results, both for industrial design processes and for the materials science and physics of magnetism. Currently there are weaknesses in the physical and computational models, and a public modeling code is unavailable. In collaboration with the NIST Materials Science and Engineering Laboratory (MSEL), we are developing standard test problems for micromagnetic modeling. We presented the first standard problem results this year. The problem is to simulate the magnetization structure of a 20 nm thick, 1 by 2 micrometer rectangle of permalloy as it is cycled through a major hysteresis loop. We collected solutions from several researchers (submitted anonymously), and much to everyone's surprise, the results showed little agreement, underscoring the importance of such exercises. We are also developing a public code for micromagnetics, whose initial release will be in FY98.
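
To convey the structure of such a field sweep, the sketch below cycles a single-spin (Stoner-Wohlfarth) toy magnet through a hysteresis loop with the field applied at 45 degrees to its easy axis. It illustrates only the shape of the computation; the standard problem itself requires a full micromagnetic code resolving the 1 by 2 micrometer permalloy element, and all quantities below are in reduced units chosen for the example.

    # Toy hysteresis sweep for a single uniformly magnetized particle
    # (Stoner-Wohlfarth model), field at 45 degrees to the easy axis.
    # Reduced units: h = H / H_K.  Not the micromagnetic standard problem.
    import numpy as np

    psi = np.pi / 4                    # angle between applied field and easy axis

    def relax(theta, h, steps=2000, lr=1e-2):
        """Follow the local energy minimum of
        e(theta) = sin^2(theta) - 2 h cos(theta - psi) by gradient descent."""
        for _ in range(steps):
            grad = np.sin(2 * theta) + 2 * h * np.sin(theta - psi)
            theta -= lr * grad
        return theta

    h_sweep = np.concatenate([np.linspace(1.5, -1.5, 61),
                              np.linspace(-1.5, 1.5, 61)])   # down, then back up
    theta = psi                         # start aligned with the strong field
    for h in h_sweep:
        theta = relax(theta, h)
        m_parallel = np.cos(theta - psi)   # magnetization component along the field
        print(f"h = {h:+5.2f}   m_H = {m_parallel:+.3f}")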

Work has also been initiated on the development of metrics for use in the evaluation of image transformations such as compression. Initial studies model images as parameterized surfaces plus error components. This modeling seeks to combine earlier results on metrics with recent findings by MCSD staff and others on the contrast sensitivity function associated with human vision. Initial results show that a broad class of such models can be realized through linear filtering, and that improved metrics can be obtained by using the norm of the residual vector.
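
A minimal sketch of the metric idea follows: the difference between an original and a degraded image is passed through a linear filter (a difference of Gaussians standing in for a contrast-sensitivity weighting) and the norm of the filtered residual is reported. The images, the filter, and its parameters are synthetic and purely illustrative.

    # Compare a plain residual norm with the norm of a linearly filtered residual.
    # The difference-of-Gaussians filter is a placeholder for a contrast-
    # sensitivity weighting; images and parameters are synthetic.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    original = rng.uniform(size=(128, 128))
    degraded = original + 0.05 * rng.standard_normal(original.shape)  # toy distortion

    residual = degraded - original
    weighted = gaussian_filter(residual, sigma=1.0) - gaussian_filter(residual, sigma=4.0)

    print("plain RMS error   :", np.sqrt(np.mean(residual**2)))
    print("weighted RMS error:", np.sqrt(np.mean(weighted**2)))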

Finally, we are cooperating with the Statistical Engineering Division on the development of reference data for the evaluation of statistical software. The Statistical Reference Datasets service, which was formally unveiled this year, contains a collection of test problems for nonlinear least squares developed by MCSD staff.
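
The sketch below conveys the flavor of such a test problem: an exponential model is fit to data by nonlinear least squares and the recovered parameters are compared with the generating values. The data here are synthetic; the actual Statistical Reference Datasets supply certified data and certified parameter values.

    # Synthetic nonlinear least squares fit of an exponential model.
    # Illustrative only; not one of the certified reference datasets.
    import numpy as np
    from scipy.optimize import least_squares

    b_true = np.array([2.5, 0.8])                 # generating parameters
    x = np.linspace(0.0, 5.0, 40)
    rng = np.random.default_rng(2)
    y = b_true[0] * np.exp(-b_true[1] * x) + 0.01 * rng.standard_normal(x.size)

    def residuals(b):
        return b[0] * np.exp(-b[1] * x) - y

    fit = least_squares(residuals, x0=[1.0, 1.0])
    print("estimated parameters:", fit.x, " (true:", b_true, ")")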

Mathematical Modeling

Mathematical modeling is an interdisciplinary effort requiring close collaboration between scientists inside and outside the division. Our researchers cooperate with outside scientists to develop specific mathematical models that capture the essence of the phenomena under study. They analyze the model, propose and develop numerical algorithms, and produce a computer program. The resulting program is run to provide simulations that are compared with experimental results to validate the entire process and to provide the basis for further refinements and enhancements. This process provides more cost-effective, quicker, and better information than experimentation alone. Indeed, such modeling and simulation activities are augmenting, and in many cases replacing, the need to do experiments, or are being used to guide the experimental process into more fertile areas. The information allows the outside scientists to gain understanding of, or to predict the behavior of, a complex system, and thus forms the basis for techniques to improve the performance of the system under study.

The customers for our work include our collaborating ITL and NIST scientists and engineers, and through these collaborators, industrial scientists and engineers; other customers are the larger community of researchers in computational science and engineering. Our aim is to work on a spectrum of tasks, including engineering and advanced development, to conduct both short- and long-term research, and to accelerate the adoption of advanced modeling and simulation techniques.

Following are a few of our projects. (Our work in antenna modeling is discussed above. We are also active collaborators in the opto-electronics project with ITL's Information Access and User Interfaces Division.)

We are active in an inter-laboratory NIST competence project, Measurement Science for Optical Reflectance and Scattering. This project seeks to identify and measure the physical and optical characteristics that give surfaces a desirable appearance. The results of this work could lead to a computer-based foundation for the virtual design of a surface and a tool for predicting its appearance. This past year we formed a consultative and collaborative relationship with researchers in the rapidly expanding computer graphics industry. We organized a two-day meeting attended by researchers in industry and academia to discuss and critique the proposed ITL contribution. Scientists from Silicon Graphics Inc. and IBM strongly supported the proposed NIST program, as did participants from Cornell University and the Massachusetts Institute of Technology, stating that this is precisely the kind of research needed to move computer graphics rendering from a craft to an engineering discipline, so that rendered surfaces have more of the visual properties of real surfaces. Our proposed construction of a database incorporating measurements of the optical and textural characteristics of key materials would go a long way towards improving the accuracy and fidelity of the models currently used in computer graphics.

Composite materials with complicated microstructures, such as ceramics, are important in many industrial applications. The microstructure is determined by the processing of the material, but it is the macroscopic properties that are relevant in applications. Previous analyses and computer simulations of the macroscopic properties have been based on idealized simplifications of the microstructure. In collaboration with MSEL, we have constructed a finite-element computer model that takes as input a digitized image of a real microstructure. This program, while still two-dimensional, models elasticity and fracture much more realistically than previous programs, and will help materials scientists determine the properties of actual composite materials.
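
The sketch below illustrates only the "digitized image as input" idea: each pixel of a two-phase image is assigned the elastic modulus of its phase, and elementary Voigt and Reuss bounds on the effective modulus are computed from the phase fractions. It is not the finite-element program described above, and the image and moduli are invented for the example.

    # Map a digitized two-phase image to a per-pixel modulus field and compute
    # simple Voigt (uniform strain) and Reuss (uniform stress) bounds.
    # The image and the moduli are made-up values for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    image = (rng.uniform(size=(64, 64)) < 0.3).astype(int)  # 0 = matrix, 1 = inclusion
    E = np.array([70.0, 400.0])                              # phase moduli (GPa)

    modulus_map = E[image]          # per-pixel modulus field: what an FE code would use
    f = image.mean()                # volume fraction of phase 1

    voigt = (1 - f) * E[0] + f * E[1]              # upper bound
    reuss = 1.0 / ((1 - f) / E[0] + f / E[1])      # lower bound
    print(f"phase-1 fraction {f:.3f}:  Reuss {reuss:.1f} <= E_eff <= Voigt {voigt:.1f} GPa")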

High-speed machining processes are becoming increasingly important in modern manufacturing, but such processes can lead to discontinuous chip formation that is strongly correlated with increased tool wear, degradation of the workpiece surface finish, and reduced accuracy in the machined part. In an ongoing collaboration with the Automated Production Technology Division in MEL, a new approach to modeling some high-speed machining processes is being developed that has the potential to predict the onset of discontinuities. We have treated some basic metal cutting operations as nonlinear dynamical systems that include a mechanism for thermomechanical feedback in the region where the tooltip and workpiece material are in contact. We have shown that, as the cutting speed is increased, a bifurcation from steady-state to oscillatory behavior occurs in computer simulations of the model, which is consistent with the change from continuous to segmented chip formation. To obtain an analytical criterion for the material and cutting conditions at which this bifurcation occurs, we have also developed a related but simpler lumped-parameter model, in which a Hopf bifurcation provides a dimensionless group of parameters, directly proportional to the cutting speed, that predicts the onset of discontinuous chip formation. Improvements in the models are in progress. One of the main objectives of this effort is to provide improved mathematical models for computer simulations of manufacturing processes that involve high-speed cutting of materials. This information can then be used to control and improve the machining processes.
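
The character of this transition can be illustrated with the Hopf normal form rather than the machining model itself: below the bifurcation, trajectories decay to a steady state, while above it they settle onto a limit cycle (sustained oscillation). The parameter values in the sketch below are arbitrary.

    # Hopf normal form, used only to illustrate the steady-state-to-oscillation
    # transition described above; it is not the metal cutting model.
    import numpy as np
    from scipy.integrate import solve_ivp

    def hopf(t, z, mu, omega=1.0):
        x, y = z
        r2 = x * x + y * y
        return [mu * x - omega * y - x * r2,
                omega * x + mu * y - y * r2]

    for mu in (-0.2, 0.2):                       # below and above the bifurcation
        sol = solve_ivp(hopf, (0.0, 200.0), [0.1, 0.0], args=(mu,), max_step=0.05)
        tail = np.hypot(sol.y[0], sol.y[1])[-500:]   # amplitude late in the run
        print(f"mu = {mu:+.1f}:  late-time amplitude ~ {tail.mean():.3f}"
              f"  (limit cycle radius sqrt(mu) = {np.sqrt(max(mu, 0)):.3f})")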

The use of Monte Carlo simulations to explore physical phenomena continues to be an important topic of research. It has long been known that the number of distinct dimer coverings of a cubic lattice grows exponentially with the size of the lattice. We have calculated the exponent, a physical constant whose determination has resisted theoretical and computational efforts for over 40 years, by conducting a very large-scale Monte Carlo calculation. The method extends to the monomer-dimer case and should enable the first-ever computation of the partition function for monomer-dimer systems. This result is of fundamental interest in chemistry and materials science because it explains how energy states are distributed. The core computation has been parallelized in a way that makes it practical even for very primitive bit-serial SIMD architectures, as well as for more advanced machines. Thus the contribution is both an advance in the science and an advance in the computational procedure used to carry it out.
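
As a small-scale illustration of the underlying combinatorics, the sketch below exactly enumerates dimer coverings (perfect matchings) of small two-dimensional grids and prints the growth of log(count) per lattice site. The actual result concerns the three-dimensional lattice, where exact enumeration is infeasible and the large-scale Monte Carlo computation described above is required.

    # Exact enumeration of dimer coverings of small 2-D grids, illustrating the
    # exponential growth with lattice size.  A brute-force toy; the division's
    # result is for the 3-D lattice and uses Monte Carlo methods instead.
    import math

    def count_dimer_coverings(rows, cols):
        """Count ways to tile a rows x cols grid with dominoes (dimers)."""
        covered = [[False] * cols for _ in range(rows)]

        def recurse():
            # Find the first uncovered cell in row-major order.
            for r in range(rows):
                for c in range(cols):
                    if not covered[r][c]:
                        total = 0
                        covered[r][c] = True
                        if c + 1 < cols and not covered[r][c + 1]:   # dimer to the right
                            covered[r][c + 1] = True
                            total += recurse()
                            covered[r][c + 1] = False
                        if r + 1 < rows and not covered[r + 1][c]:   # dimer downward
                            covered[r + 1][c] = True
                            total += recurse()
                            covered[r + 1][c] = False
                        covered[r][c] = False
                        return total
            return 1            # every cell covered: one valid configuration

        return recurse()

    for n in (2, 4, 6):
        count = count_dimer_coverings(n, n)
        sites = n * n
        print(f"{n}x{n} grid: {count} coverings,  log(count)/sites = "
              f"{math.log(count) / sites:.4f}")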

Tools for High Performance Computing

MCSD consulting and collaboration activities often lead to the development of general purpose tools that can be reused in other NIST and external applications.

A primary vehicle for the dissemination of information about general-purpose mathematical software of use in computational science research is the NIST Guide to Available Mathematical Software (GAMS). GAMS indexes more than 10,000 software components from 110 libraries and packages. These are components which have been either developed at NIST, selected for use at NIST, or archived at netlib, the premier external repository of the numerical analysis community. As such, GAMS serves the needs of local users for information about software available on local systems; however, the information is of wide interest and has been made available to the public. GAMS regularly sees more than 10,000 external users each month, and the Web server which hosts GAMS, the Matrix Market, and other MCSD project pages is now averaging more than 400,000 "hits" per month, having exceeded 7 million hits since it started operation in 1994.

Our work in support of distributed-memory parallel computing has led to the development of PHAML, a parallel hierarchical adaptive multilevel software package for elliptic boundary-value problems. Work on PHAML has resulted in fundamental advances in multigrid methods and in automated load balancing for adaptive computations. A public release of PHAML is expected this year. Work on PHAML also revealed the need for portable interactive graphics accessible from Fortran. To meet this need, we developed f90gl, a Fortran binding and reference implementation for the OpenGL graphics interface, and submitted it to the OpenGL Architecture Review Board (ARB). The ARB is composed of representatives of a variety of manufacturers with OpenGL products, including Digital, Evans & Sutherland, Hewlett-Packard, IBM, Intergraph, Intel, Microsoft and Silicon Graphics. Favorable initial reviews were obtained from the ARB and the X3J3 Fortran standards committee, and f90gl is now under consideration as the official Fortran binding for OpenGL.

Our work in image analysis and metrics has also led to a variety of new methods and tools. For example, we have studied progressive transmission techniques, with applications to downsampling/upsampling schemes, and are in the process of developing prototype software. We have developed software to generate optimal biorthogonal wavelets, as well as prototype software to apply wavelet transforms to color pictures. We have also generalized the lifting schemes used for ordinary biorthogonal wavelets to multiwavelets. (Lifting is the process of starting with a wavelet pair and generating a new one that satisfies some desired property, such as vanishing moments.)
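
To indicate what a lifting step looks like, the sketch below performs one level of the simplest possible wavelet transform, the Haar transform, written as a split, predict, and update sequence, together with its exact inverse. The division's work concerns optimal biorthogonal wavelets and multiwavelets; this example only shows the structure of the scheme.

    # One level of the Haar wavelet transform written as lifting steps
    # (split, predict, update) and its exact inverse.  Illustrative only.
    import numpy as np

    def haar_lifting_forward(signal):
        even, odd = signal[0::2].astype(float), signal[1::2].astype(float)  # split
        detail = odd - even                    # predict odd samples from even ones
        approx = even + 0.5 * detail           # update to preserve the running average
        return approx, detail

    def haar_lifting_inverse(approx, detail):
        even = approx - 0.5 * detail           # undo the update
        odd = detail + even                    # undo the prediction
        out = np.empty(even.size + odd.size)
        out[0::2], out[1::2] = even, odd       # merge
        return out

    x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    a, d = haar_lifting_forward(x)
    print("approximation:", a)
    print("detail       :", d)
    print("reconstruction matches:", np.allclose(haar_lifting_inverse(a, d), x))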


