Archive of earlier Uncertainty Quantification/Verification & Validation Seminar Series: Seminars 7–11

Previous Seminars

1–6, 12–17


Seminar 11

Title: Predictive Model Validation for System Risk Management

Failures of engineering systems (e.g., vehicles, aircraft) lead to significant reliability and maintenance costs ($200 billion each year in US industry) and to human fatalities, as in the Ford Explorer rollovers (1998–2000) and the explosion of the space shuttle Challenger (1986). One of the greatest challenges in engineering systems design is eliminating the risk of a product system before the design goes into production. This talk therefore presents a predictive model validation approach to system risk management. Hierarchical model validation (HMV) is developed for validating predictive system computer models. To make validation systematic and affordable, HMV is composed of two steps: (1) top-down validation planning and (2) bottom-up validation execution. HMV has been demonstrated on two practical engineering examples: a cellular phone and a tire. Provided a computer model for an engineering system is validated, the remaining challenge in system risk management is system reliability analysis. A complementary interaction method (CIM) is proposed to formulate system reliability explicitly. For its numerical solvers, two sensitivity-free methods are proposed for various engineering applications: (1) eigenvector dimension reduction (EDR) and (2) dimension reduction with polynomial chaos expansion (DR-PCE). The DR-PCE method is found to be more desirable for highly nonlinear problems, while EDR is preferable otherwise. Several engineering applications will be employed to demonstrate the feasibility of the proposed approaches to system risk management.
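
The abstract contrasts sensitivity-free reliability solvers (EDR, DR-PCE) with brute-force alternatives. As a point of reference only, here is a minimal sketch of the baseline such methods aim to beat: crude Monte Carlo estimation of a series-system probability of failure. The limit-state functions `g1` and `g2` are hypothetical placeholders, not taken from the talk.

```python
import random

random.seed(0)

def g1(x1, x2):
    # Hypothetical limit-state function 1: failure when g1 < 0
    return 3.0 - x1 - x2

def g2(x1, x2):
    # Hypothetical limit-state function 2: failure when g2 < 0
    return 2.5 - x1 + x2

def series_system_pf(n=100_000):
    """Monte Carlo estimate of a series-system failure probability:
    the system fails if ANY component limit state is violated."""
    failures = 0
    for _ in range(n):
        x1 = random.gauss(0.0, 1.0)  # standard normal random inputs
        x2 = random.gauss(0.0, 1.0)
        if g1(x1, x2) < 0 or g2(x1, x2) < 0:
            failures += 1
    return failures / n

pf = series_system_pf()
print(f"Estimated system probability of failure: {pf:.4f}")
```

The cost of this baseline (many full model evaluations for a rare-event probability) is exactly what motivates cheaper formulations such as CIM with EDR or DR-PCE solvers.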

Speaker: Byeng Dong Youn, University of Maryland

Date/Time: Thursday, June 26, 2008, 10:00-11:00 (NM), 9:00-10:00 (CA)

Location: CSRI Room 90 (Sandia NM), Building 915, Room N153 (CA)

Predictive Model Validation for System Risk Management [pdf]

Seminar 10

Title: A Fresh Look at Mesh Refinement

A pressing issue for verification of computational physics and engineering codes is how to estimate discretization error effects when our calculations are not in the asymptotic region of convergence, which is often the case. This talk will present some new possibilities for addressing this issue. The presenter, François Hemez of the Verification Methods Group at LANL, will also briefly mention other areas of current research and tool development in his group. An open group discussion with François will be held from 1:30 to 2:30 in Room 1811. The abstract of the talk is given below.

This presentation takes a fresh look at the concept of mesh refinement for spatial or time-varying curves simulated by a computational physics or engineering code. Mesh refinement is used to study the rate at which truncation error converges, where truncation error is the difference between discrete solutions obtained by a code at a given level of mesh or grid refinement, Δx, and the (unknown) solution of the continuous partial differential equations.
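
For readers unfamiliar with the quantity under study, the classical pointwise estimate of the observed order of convergence from three systematically refined grids can be sketched as follows. This is the standard textbook formula, not anything specific to this talk:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence p from solutions on three grids
    with a constant refinement ratio r (e.g. r = 2):
        p = log(|f_coarse - f_medium| / |f_medium - f_fine|) / log(r)
    """
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Synthetic example: f(h) = f_exact + C * h**2, so p should come out ~2
f_exact, C = 1.0, 0.5
h = [0.4, 0.2, 0.1]
f = [f_exact + C * hi**2 for hi in h]
p = observed_order(f[0], f[1], f[2], r=2)
print(f"Observed order of convergence: {p:.3f}")
```

This formula assumes all three solutions lie in the asymptotic range; the point of the talk is precisely that this assumption often fails for whole solution curves.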

The idea from which our contribution originates is that discrete solution curves computed by a code can be decomposed on a basis of independent empirical functions. Our contention is that these functions define specific “resolution scales” and that these scales converge at different rates as Δx → 0. It may, therefore, make more sense to study the asymptotic convergence of these individual resolution scales. A technique based on principal component decomposition is developed to observe the resolution scales that contribute to discrete solutions. A theorem demonstrates that the asymptotic convergence of an entire curve is equivalent to the convergence of its decomposition. The theorem yields a bounded estimate of the rate-of-convergence for entire spatial or time-varying curves. These ideas are applied to simulations performed with a finite element code in the case of a Hertz contact problem where the exact solution is unknown.
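
A minimal illustration of the decomposition idea, with all details assumed rather than taken from the talk: a small ensemble of solution curves sampled on a common grid is mean-centered, its leading empirical mode is extracted by power iteration, and the projection coefficients of the curves onto that mode then converge at the rate of the underlying perturbation (second order here, by construction):

```python
import math

def leading_mode(rows, iters=100):
    """Mean-center a set of curves (rows on a common grid) and extract the
    leading empirical mode via power iteration on X^T X, plus the
    projection coefficient of each curve onto that mode."""
    m, n = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / m for j in range(n)]
    X = [[r[j] - mean[j] for j in range(n)] for r in rows]
    v = [1.0] * n
    for _ in range(iters):
        Xv = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(X[i][j] * Xv[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(wj * wj for wj in w))
        v = [wj / norm for wj in w]
    coeffs = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
    return v, coeffs

# Synthetic curves: u_h(x) = sin(pi*x) + h^2 * x^2, i.e. a perturbation
# that shrinks at second order as the mesh size h is refined
grid = [j / 49 for j in range(50)]
hs = [0.4, 0.2, 0.1]
curves = [[math.sin(math.pi * x) + h * h * x * x for x in grid] for h in hs]
_, c = leading_mode(curves)

# Successive differences of the mode coefficients recover the rate
p = math.log(abs(c[0] - c[1]) / abs(c[1] - c[2]), 2)
print(f"Convergence rate of the leading mode: {p:.2f}")
```

The actual method in the talk works with several modes converging at different rates and comes with a theorem bounding the rate for the whole curve; this sketch only shows the mechanics of projecting curves onto an empirical basis.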

Speaker: François Hemez, Los Alamos National Laboratory

Date/Time: Tuesday, May 20, 2008, 11:00-12:00 (NM), 10:00-11:00 (CA)

Location: JCEL, Building 899, Room 1811 (Sandia NM), Building 915, Room S145 (CA)

A Quick Look at Verification Capabilities in WT-2 [pdf]

A Fresh Look at Mesh Refinement in Computational Physics and Engineering, LA-UR-08-1731 [pdf]

Seminar 9

Title: Verification of the Calore Thermal Analysis Code

Calore is the ASC code developed to model steady and transient thermal diffusion with chemistry and dynamic enclosure radiation. An integral part of the software development process is code verification, which addresses the question “Are we correctly solving the model equations?” This process aids the developers in that it identifies potential software bugs, and it gives the thermal analyst confidence that a properly prepared input will produce satisfactory output. Grid refinement studies have been performed on problems for which analytical solutions are available. This talk gives an overview of the code verification process and presents recent results. Recent verification studies have focused on transient nonlinear heat conduction and on verifying algorithms associated with (tied) contact and adaptive mesh refinement. In addition, an approach to measure the coverage of the verification test suite relative to intended code applications is discussed.
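
As a generic illustration of the kind of grid refinement study described above (a self-contained sketch, not Calore itself), one can verify a simple finite-difference conduction solver against a manufactured analytical solution and check that the observed order of accuracy matches the expected second order:

```python
import math

def solve_1d_conduction(n):
    """Finite-difference solve of -u'' = f on (0,1) with u(0) = u(1) = 0,
    where f is manufactured so the exact solution is u(x) = sin(pi*x)."""
    h = 1.0 / n
    xs = [i * h for i in range(1, n)]
    d = [math.pi**2 * math.sin(math.pi * x) * h * h for x in xs]
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = d
    m = n - 1
    a, b, c = -1.0, 2.0, -1.0
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (d[i] - a * dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return xs, u

def max_error(n):
    """Maximum pointwise error against the known analytical solution."""
    xs, u = solve_1d_conduction(n)
    return max(abs(ui - math.sin(math.pi * x)) for x, ui in zip(xs, u))

e_coarse, e_fine = max_error(20), max_error(40)
order = math.log(e_coarse / e_fine, 2)
print(f"Observed order of accuracy: {order:.2f}")
```

Because the exact solution is known, the error on each grid is computable directly, and halving the mesh size should reduce it by roughly a factor of four for this second-order scheme; a deviation from that would flag a potential bug.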

Speaker: Kevin Dowding, Dept. 1544

Date/Time: Thursday, April 24, 2008, 2:00-3:00 (NM), 1:00-2:00 (CA)

Location: CSRI Room 90 (Sandia NM), Building 915, Room S107 (CA)

Note: The Computer Science Research Institute (CSRI) is located in the Sandia Research and Technology Park, at 1450 Innovation Pkwy. For more information visit: http://www.cs.sandia.gov/CSRI

Verification of the Sierra-Thermal Analysis Capability, SAND2008-2720P [pdf]

Seminar 8

Title: Kriging: The Cadillac of Nonlinear Response-Surface Methodologies

Sandia National Laboratories has recently licensed from General Motors a proprietary software package, called the “Kriging Wizard,” that can create experimental designs for computer experiments, fit Kriging response surfaces to the resulting data, and facilitate design exploration with these response surfaces. Given that Kriging is only one of many methods for fitting an approximation to input-output data, it is quite natural to ask what makes Kriging special and why anyone should bother to learn about this new method and the associated software. Answering this question will be the focus of my talk.

As we will see, the most distinctive feature of Kriging is that it has a statistical foundation that allows one to derive meaningful confidence intervals around predicted values. As I will explain, one can not only put confidence intervals around predictions at a single point, but one can also put confidence intervals around quantities computed over many points, such as a “probability of failure” estimated using a Monte Carlo study on the surface.
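
The Kriging Wizard itself is proprietary, but the statistical idea it rests on is standard. Here is a minimal pure-Python sketch, under assumed details (Gaussian correlation, zero prior mean, illustrative data), of a Kriging predictor that returns both a prediction and the standard error from which a confidence interval follows:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def kriging(xs, ys, x_star, length=1.0, var=1.0, nugget=1e-10):
    """Simple (zero-mean) Kriging prediction with its standard error."""
    k = lambda a, b: var * math.exp(-((a - b) ** 2) / (2 * length ** 2))
    n = len(xs)
    K = [[k(xs[i], xs[j]) + (nugget if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    ks = [k(x_star, xi) for xi in xs]
    alpha = gauss_solve(K, ys)   # K^{-1} y
    beta = gauss_solve(K, ks)    # K^{-1} k*
    mean = sum(ki * ai for ki, ai in zip(ks, alpha))
    s2 = max(k(x_star, x_star) - sum(ki * bi for ki, bi in zip(ks, beta)), 0.0)
    return mean, math.sqrt(s2)

# Illustrative data: noise-free samples of sin(x)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
m, s = kriging(xs, ys, 2.5)
print(f"prediction: {m:.3f} +/- {1.96 * s:.3f} (95% CI)")
```

The predictive variance shrinks to zero at the training points and grows between them, which is exactly what makes the confidence intervals meaningful; production tools additionally estimate the correlation length and process variance from the data rather than fixing them as done here.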

Another advantage of Kriging is that it has properties that facilitate performing functional analysis of variance (“functional ANOVA”). Like Monte Carlo analysis, functional ANOVA can assume distributions for inputs that are “noise factors” and determine how this variation propagates to variation in the output. Unlike Monte Carlo, however, functional ANOVA can also determine how much of this output variation is due to different noise factors, thereby allowing one to determine which noise factor is most important. If the inputs are control parameters that are assumed to be uniform between a lower and upper design limit, then functional ANOVA can identify which control variable is most important to focus on in order to improve the design.
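
To make the main-effect idea concrete, here is a minimal brute-force sketch of a first-order sensitivity index via nested Monte Carlo, using a hypothetical two-input response; in practice the fitted Kriging surface, being cheap to evaluate, would stand in for the model:

```python
import random

random.seed(1)

def f(x1, x2):
    # Hypothetical response in which x1 matters much more than x2
    return 4.0 * x1 + 1.0 * x2

def main_effect_share(which, n_outer=500, n_inner=200):
    """First-order sensitivity index S_i = Var(E[f | x_i]) / Var(f),
    estimated by nested Monte Carlo with uniform(0,1) inputs."""
    cond_means, all_vals = [], []
    for _ in range(n_outer):
        xi = random.random()                 # fix the input of interest
        vals = []
        for _ in range(n_inner):
            xo = random.random()             # vary the other input
            vals.append(f(xi, xo) if which == 1 else f(xo, xi))
        all_vals.extend(vals)
        cond_means.append(sum(vals) / len(vals))
    mu = sum(all_vals) / len(all_vals)
    var_total = sum((v - mu) ** 2 for v in all_vals) / len(all_vals)
    mu_c = sum(cond_means) / len(cond_means)
    var_cond = sum((c - mu_c) ** 2 for c in cond_means) / len(cond_means)
    return var_cond / var_total

s1 = main_effect_share(1)
s2 = main_effect_share(2)
print(f"S1 ~ {s1:.2f}, S2 ~ {s2:.2f}")
```

The indices apportion the output variance among inputs, so the larger index identifies the noise factor (or control variable) most worth attention; the analytic properties of Kriging mentioned above let these quantities be computed far more efficiently than this nested loop.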

Speaker: Don Jones, General Motors Technical Fellow

Date/Time: Monday, March 17, 2008, 9:30-10:30 (NM), 8:30-9:30 (CA)

Location: 823 Breezeway (Sandia NM), Building 915, Room S107 (CA)

Seminar 7

Title: Bootstrap Methods for Sensitivity Analysis of Computer Models

The understanding of many physical and engineering phenomena of interest involves running complex computational models (computer codes). With any problem of this type it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model.

In this presentation we suggest an improvement on existing methods for SA of complex computer models when the model is too computationally expensive for a standard Monte Carlo analysis. In these situations a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the input. The existing approaches to this problem either do not work well with a large number of input variables or do not satisfactorily deal with estimation error. Here we propose a new approach to sensitivity index estimation which appears to incorporate satisfactory solutions to these drawbacks. The approach uses stepwise regression as well as bootstrap methods to generate confidence intervals on the sensitivity indices. Several nonparametric regression procedures such as locally weighted polynomial regression (LOESS), additive models (GAMs), projection pursuit, and recursive partitioning are considered, as well as metamodels such as multivariate adaptive regression splines (MARS), random forests, and the gradient boosting method. An approach for calculating statistical properties of the bootstrap estimator will also be discussed. Several examples will illustrate the utility of this approach in practice.
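
The talk's method combines stepwise regression with the surrogate models listed above; as a toy illustration of the bootstrap step alone (synthetic data and a plain variance-explained index, not the authors' procedure), here is a percentile bootstrap confidence interval on a sensitivity-type index:

```python
import random

random.seed(2)

def corr2(xs, ys):
    """Squared Pearson correlation: the share of Var(y) that a linear
    fit in x explains -- a crude stand-in for a sensitivity index."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Synthetic data: y depends linearly on x plus noise
xs = [random.random() for _ in range(100)]
ys = [2.0 * x + random.gauss(0.0, 0.5) for x in xs]
point = corr2(xs, ys)

# Percentile bootstrap: resample (x, y) pairs with replacement,
# recompute the index, and read the CI off the sorted replicates
boots = []
for _ in range(1000):
    idx = [random.randrange(100) for _ in range(100)]
    boots.append(corr2([xs[i] for i in idx], [ys[i] for i in idx]))
boots.sort()
lo, hi = boots[24], boots[974]  # ~95% percentile interval
print(f"index = {point:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

The appeal of the bootstrap here is that it quantifies estimation error for the sensitivity index without any distributional assumptions, which is what separates a defensible variable ranking from a point estimate of unknown quality.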

Speaker: Curtis Storlie, UNM Dept. of Mathematics and Statistics

Date/Time: Thursday, March 6, 2008, 2:00-3:00 (NM), 1:00-2:00 (CA)

Location: CSRI Room 90 (Sandia NM), Building 915, Room 107 (CA)

