

2007 Newsnotes



Sandia/EPA TEVA-SPOT Water Security Team Chosen as Finalist for 2008 Edelman Award Competition

A team of researchers from Sandia National Laboratories and the Environmental Protection Agency (EPA), along with researchers from additional organizations, was recently named one of five finalists for the 2008 Franz Edelman Award for Achievement in Operations Research and the Management Sciences. This award, given annually by the Institute for Operations Research and the Management Sciences (INFORMS), is one of the most prestigious awards in the field of operations research; recent winners include General Motors, IBM, and AT&T. The winner will be announced at the INFORMS 2008 Meeting on Practice in Baltimore this spring.

This team came together as part of the EPA's Threat Ensemble Vulnerability Assessment (TEVA) program to work on the problem of developing contamination warning systems that counter threats against drinking water distribution systems. The outcome of this effort is the Sensor Placement Optimization Tool (TEVA-SPOT), which uses sensor placement algorithms to design contaminant warning systems for water distribution systems. This tool has already revolutionized the field of sensor placement for these systems, enabling the analysis of very large water networks that were previously not amenable to sensor placement analysis. TEVA-SPOT has been used by the EPA's Water Security Initiative to design sensor placements for networks with over 40,000 junctions, and these designs are being used to deploy contaminant warning systems in several large U.S. water utilities.
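For readers unfamiliar with the underlying optimization, the sensor placement problem can be posed schematically as an impact-minimizing mixed-integer program of the following form (a generic sketch for illustration; not necessarily the exact model implemented in TEVA-SPOT):

\[
\begin{aligned}
\min \quad & \sum_{a \in A} \alpha_a \sum_{i \in L_a} d_{ai}\, x_{ai} \\
\text{s.t.} \quad & \sum_{i \in L_a} x_{ai} = 1 \quad \forall a \in A, \\
& x_{ai} \le s_i \quad \forall a \in A,\; i \in L_a, \\
& \sum_{i} s_i \le p, \qquad s_i,\, x_{ai} \in \{0,1\},
\end{aligned}
\]

where A is the set of contamination scenarios with likelihood weights α_a, L_a is the set of candidate locations that could detect scenario a (augmented with a dummy "no detection" location), d_{ai} is the impact incurred if scenario a is first detected at location i, s_i indicates placing a sensor at location i, and p is the sensor budget. Problems of this type are closely related to classical facility-location (p-median) formulations, for which efficient exact and heuristic solvers exist.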

TEVA-SPOT Water Security Team
Environmental Protection Agency:
Regan Murray, Robert Janke
Sandia National Laboratories:
William E. Hart, Jonathan Berry,
Erik Boman, Robert D. Carr,
Lee Ann Riesen, Cynthia A. Phillips, Jean-Paul Watson
Argonne National Laboratory:
Thomas N. Taxon
University of Cincinnati: James Uber
American Water Works Association:
Kevin Morley

 

Figure 1: Sensor locations (dots) recommended by TEVA-SPOT to be added to a water utility distribution system (lines).

 

(Contact: William Hart)
December 2007


Selection of Journal Article for Special Section of SIAM Review

A paper co-authored by Louis Romero (1414) and titled "Tippe Top Inversion as a Dissipation-Induced Instability" has been chosen as the next "SIGEST" selection from the SIAM Journal on Applied Dynamical Systems (SIADS). This paper previously appeared in SIADS 3(3) in 2004 and will now appear as a special selection in issue 50-2 of SIAM Review in June 2008. The purpose of SIGEST is to make the 10,000+ readers of SIAM Review aware of exceptional papers published in SIAM's specialized journals. In each issue of SIAM Review, the SIGEST section contains an outstanding paper of general interest that has previously appeared in one of SIAM's specialized research journals. Romero's paper was selected by the SIAM Review editors for the importance of its contributions and topic, its clear writing style, and its broad interest for the SIAM community.

The paper examines the complex dynamics of a tippe top, which is a seemingly simple top that inverts while spinning from resting on its base to resting on the end of its stem, in the process moving its center of mass upward. The analysis presented has implications for spin-stabilized systems subject to dissipative mechanisms. In the attached sequence of figures, the dissipation-driven dynamic behavior of a tippe top is simulated, with the projected paths showing the motions of the stem (blue) end and center of mass of the top (green). The path of the stem end clearly shows the top inversion as a transient phenomenon leading to a new spin-stabilized configuration.

(Contact: Kenneth Alvin )
December 2007


CUBIT 11.0 introduces dramatic new paradigm for improving efficiency of model preparation for simulation

The Immersive Topology Environment for Meshing (ITEM), introduced as part of CUBIT 11.0, is a new software tool that simplifies and guides the user through the process of building and preparing a CAD model for computational simulation. In many cases, the geometry preparation phase of modeling and simulation consumes a tremendous amount of human time to diagnose problems and build an appropriate analysis model. Expert analysts spend years developing techniques and accumulating tribal knowledge to build models that will perform adequately in simulation. ITEM is aimed at the intermittent user of modeling and simulation: it integrates model-building expertise within a smart system that diagnoses potential problems and visually previews specific solutions that can be easily executed. This technology has the potential to dramatically increase the productivity of engineering analysts by decreasing the time needed to develop analysis models and by improving the quality of the models they produce. This can ultimately lead to a more effective design process by increasing the number and quality of design models that can be generated.

Built on Sandia's CUBIT Geometry and Meshing Toolkit, ITEM is a new wizard-like environment that users enter once they have a CAD model representation of their design. The user steps through the process as the application diagnoses potential problems with the CAD model and previews specific solutions to resolve them. Many of the solutions developed in ITEM are exclusive to this tool and represent a significant advance in pre-processing technology for analysis. For example, the ability to automatically rebuild the local topology of a solid model to simplify it for analysis, the ability to adjust local topology to correct for misalignments in assemblies, and the ability to present alternatives for geometry decomposition that admit a hexahedral mesh topology are all significant advances in this field. In addition, the ITEM environment, which is the delivery mechanism for this new technology, provides a common methodology for diagnosing problems and presenting specific solutions to the user that is unique in the industry and key to the competitiveness of this new tool.

The focus on improving the efficiency of hexahedral meshing technology is also a major differentiator from existing tools. While most pre-processing tools focus on the tetrahedral meshing problem, the majority of analysts would prefer hexahedral meshes because of their desirable properties in simulation; in many cases, however, they cannot afford the time and overhead required to generate a hex mesh using existing technology. ITEM makes the hexahedral meshing problem far more accessible to many analysts, providing a tool that can effectively improve the quality of their analysis models without the cost of traditional hexahedral meshing.

ITEM was released in November 2007 as part of CUBIT 11.0. It has already undergone extensive usability testing by analysts at Sandia National Laboratories, with a tremendously positive response. Improvements to the system are already planned and underway based on feedback from the engineering analysts who have used it. The process introduced with ITEM is a promising new paradigm on which new improvements in geometry and meshing technology will be based for the foreseeable future.

Graphical User Interface of the Immersive Topology Environment for Meshing showing an example model being prepared for simulation.

(Contact: Steve Owen)
November 2007


Multiphysics Coupling for the Burner Reactor Integrated Safety Code

Sandia increasingly faces complex modeling problems that require several application codes to be coupled together. For example, a current multi-center LDRD (including principal investigator Rod Schmidt, along with Bill Spotz, Russell Hooper, and Rich Pryor from 1400, Alfred Lorber from 1500, and Larry Humphries from 6700) is laying the groundwork for efficiently modeling the many coupled physical processes important to the normal and off-normal operation of a "burner" or "recycling" nuclear reactor. Such a reactor is central to the Global Nuclear Energy Partnership program because it can accept as fuel reprocessed nuclear waste from a commercial light water reactor and produce waste that can be further reprocessed into fuel that once again can be used in a commercial light water reactor. If licensed and built, such reactors would enable a nuclear fuel cycle that produces only 2% of the nuclear waste currently generated for long-term storage. To accurately simulate the unique characteristics of this reactor type, a range of diverse but intimately coupled physical processes needs to be modeled. No single code presently exists that adequately models all of these processes in a way that leverages advanced computational technologies. However, the set of all scientific application codes at Sandia more than spans the requirements for such a code. The challenge is to effectively couple the needed codes together, even though they were not designed to be used this way.


Conceptual Design of a Burner Reactor

Traditional linear and nonlinear solver packages are used by most scientific codes as low-level tools, called when needed. To accommodate the tight coupling of multiple independent codes, the solver must be moved to a higher conceptual level, one where "physics modules" are plugged into it as lower-level components. The Jacobian-free Newton-Krylov algorithm permits this paradigm shift, because the information it needs from the physics codes is largely reduced to computing a residual vector given a current guess for the state vector. If the physics codes to be coupled can support this operation (which may require the addition of an interface), then the top-level solver can iterate, ideally, to a solution. For the Burner Reactor Integrated Safety Code, we are implementing this top-level solver in Python, a very high-level scripting language. We made this design decision for several reasons: (1) Python is excellent for the bookkeeping necessary to manage several independent physics modules; (2) Python is also excellent at "gluing" together Fortran, C, and C++ codes; and (3) for the numerical heavy lifting, we can leverage PyTrilinos, the Python interface to Sandia's Trilinos Project, a collection of advanced solver packages. To date, we have demonstrated this approach for simple physics modules and have also produced a Python interface to the Sierra project's Aria code. We anticipate a working prototype by the end of calendar year 2007. This approach is also being considered for other multiphysics coupling applications, such as the Next Generation Performance Assessment tool for Yucca Mountain.
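A minimal sketch of this top-level coupling pattern is shown below, assuming two hypothetical physics modules that each expose only a residual evaluation; SciPy's Jacobian-free Newton-Krylov solver stands in here for the PyTrilinos/NOX solvers named above, and the module interfaces are illustrative rather than the actual project codes.

```python
# Sketch of a Python top-level Jacobian-free Newton-Krylov driver that couples
# two hypothetical "physics modules" solely through residual evaluations.
# Assumptions: the module residuals below are made up for illustration, and
# scipy.optimize.newton_krylov is used in place of PyTrilinos/NOX.
import numpy as np
from scipy.optimize import newton_krylov

def thermal_residual(T, c):
    # Hypothetical module 1: a steady heat balance that depends on the species field c.
    return T - (1.0 + 0.1 * c)

def species_residual(c, T):
    # Hypothetical module 2: a species balance that depends on the temperature field T.
    return c - 0.5 * np.tanh(T)

def coupled_residual(x):
    # The only coupling interface: stack each module's residual into one global vector.
    n = x.size // 2
    T, c = x[:n], x[n:]
    return np.concatenate([thermal_residual(T, c), species_residual(c, T)])

x0 = np.zeros(2)                              # initial guess for [T, c]
solution = newton_krylov(coupled_residual, x0, f_tol=1e-10)
print("coupled solution:", solution)
```

Because the solver needs only residual evaluations, each physics code can remain a black box behind a thin Python wrapper, which is precisely the property that makes this approach attractive for coupling legacy codes.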

(Contact: Rod Schmidt)
November 2007

New Algorithms for Lagrangian Shock Hydrodynamics

Several key issues with current Lagrangian shock hydrodynamics discretizations have been diagnosed, including numerical oscillations near strong expansions due to lack of kinetic energy convexity, poor shock-capturing on radial meshes, and insufficient hourglass control.  In each instance, a team led by Guglielmo Scovazzi (1431), Ed Love (1431), and John Shadid (1437) has developed and begun preliminary testing of new algorithms that overcome these challenges.  A new time integrator algorithm has been developed that exactly conserves key physical quantities. With this new scheme, kinetic energy remains a convex function of velocity, yielding oscillation-free solutions near strong expansions.  A new shock-capturing operator has been developed that shows significant improvement on radial meshes.  Finally, a new multi-scale approach to hourglass control has been designed and implemented that shows promise for improving robustness and fundamental understanding over current methods.  A SAND report and journal article are in preparation to document these findings and these new algorithms are being investigated for implementation in ALEGRA. 

Figure 1: A new tensor artificial viscosity formulation prevents spurious jets from forming in the two-dimensional Noh implosion test.

(Contact: Guglielmo Scovazzi)
October 2007


Sandia CSRI Workshop on Mathematical Methods for Verification and Validation

Figure 1: Left to right: Pavel Bochev (Sandia National Laboratories), Max Gunzburger (Florida State University), Qiang Du (Penn State University), Clayton Webster (Sandia National Laboratories), John Burkardt (Virginia Tech University), Yanzhao Cao (Florida A&M University)


The Computer Science Research Institute (CSRI) of Sandia National Laboratories held a very successful workshop on Mathematical Methods for Verification and Validation (V&V) from August 14-16, 2007 at the Hyatt Regency Tamaya Resort. The workshop was chaired by Clayton Webster, the FY08 John von Neumann Fellow, and co-organized by fellow Sandians Scott Collis, Tim Trucano, and David Womble, as well as Prof. Max Gunzburger from the School of Computational Science at Florida State University. The conference website can be found at http://www.cs.sandia.gov/CSRI/Workshops/2007/MMVV/.

It is well recognized in both the academic and laboratory communities that Verification and Validation (V&V) must be an essential component of our research efforts if we expect to amplify our future predictive capabilities; therefore, the focus of this meeting was the technical content required for successful and consequential V&V. To accomplish this goal we gathered a diverse group of 25 university and 30 NNSA laboratory researchers to consider foundational mathematical, statistical, and computational methods for V&V in complex application areas for predictive computational science. Talks surveyed important themes, including Stochastic Sampling, Stochastic Differential Equations (SDEs), Uncertainty Characterization, Error Estimation, Design of Computer Experiments, and Reliability of Computational Science, as well as open research problems in these topics. In addition to talks discussing methods for V&V, we also organized talks aimed at broad application areas where V&V methods have had, and will have, a significant impact; these topics included Astrophysics, Hydrology, Chemical Reactions, and the Global Nuclear Energy Partnership (GNEP). In total there were 10 invited one-hour lectures and 12 short "rapid fire" talks dispersed throughout the program. A copy of the program and a downloadable version of each presentation can be found at http://www.cs.sandia.gov/CSRI/Workshops/2007/MMVV/program.html.

A key feature of the workshop was the structured twice-daily discussion sessions. These sessions facilitated interactions among methods experts and applications scientists and proved very beneficial to all those attending. Future directions that methods development should take to be effective for V&V were debated. A consensus emerged that several developments in Uncertainty Quantification have proved to be very promising and can be specifically beneficial to V&V. The workshop also held a discussion session on the training of scientists in V&V. Serious training is an important issue, both for those who want to specialize in the development of specific V&V methods and for the general computational science community that needs to understand and incorporate V&V methodologies in its work. This session addressed issues such as what courses students need to take to be V&V savvy, the creation of new courses to train students in V&V, and practical mechanisms to give students experience in V&V in realistic settings.

An overview of the presentations and a summary of the discussion sessions will be written in a forthcoming workshop summary White Paper by Max Gunzburger, Tim Trucano and Clayton Webster.

(Contact: Clayton Webster)
September 2007


Force Flux and the Peridynamic Stress Tensor

Sandia National Laboratories technical report Force Flux and the Peridynamic Stress Tensor by Sandians Rich Lehoucq and Stewart Silling has been accepted for publication in the Journal of the Mechanics and Physics of Solids. The peridynamic model is a framework for continuum mechanics based on the idea that pairs of particles exert forces on each other across a finite distance. Peridynamics has been shown to be an invaluable method for simulating structural materials subjected to large strains up to and including failure.  For example, the graphic below depicts damage in a reinforced concrete panel due to impact by a rigid cylinder as computed by EMU.

The equation of motion in the peridynamic model is the integro-differential equation

\[ \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) \;=\; \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,\mathbf{x}'-\mathbf{x}\big)\,dV_{\mathbf{x}'} \;+\; \mathbf{b}(\mathbf{x},t), \]

that was introduced by Silling in 2000, where \(\mathcal{H}_{\mathbf{x}}\) is the neighborhood of material points interacting with \(\mathbf{x}\), \(\mathbf{f}\) is the pairwise bond force density, and \(\mathbf{b}\) is the body force density. A peridynamic stress tensor \(\boldsymbol{\nu}\) is introduced so that the divergence of this stress tensor is the nonlocal integral representing internal force interactions, or

\[ \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,\mathbf{x}'-\mathbf{x}\big)\,dV_{\mathbf{x}'} \;=\; \nabla\cdot\boldsymbol{\nu}(\mathbf{x},t). \]

At any point in the body, this stress tensor is obtained from the forces within peridynamic bonds that geometrically pass through the point, as shown below.


The significance is that the peridynamic equation of motion can be expressed in terms of this stress tensor, and the result is formally identical to the Cauchy equation of motion. This equivalence is valuable in establishing a relationship with the classical model of continuum mechanics. We also establish a variational characterization of this stress tensor field and thereby show uniqueness in the function space compatible with finite element approximations. A force flux, or peridynamic traction vector, can also be defined so that peridynamics can be coupled to classical continuum mechanics discretized by the finite element method. Coupling peridynamics to the finite element method will allow for dramatically more efficient simulations in which peridynamics is used only in regions near failure while the less costly finite element method is used elsewhere.

(Contact: Rich Lehoucq and Stewart Silling )
September 2007


Sandia National Laboratories Developing Collaboration with Micron Technology To Address The Memory Bottleneck

The Scalable Computer Architectures Department (1422) has taken the first steps to establish the X-Mem consortium with Micron Technology, Inc. (the last U.S. memory manufacturer), Louisiana State University Professor Thomas Sterling, and Bob Lucas, Mary Hall, and Jeff Draper from the University of Southern California's Information Sciences Institute. Memory is the dominant performance problem facing computer architects; this has been known as the von Neumann bottleneck since the mid-1940s. Work by Arun Rodrigues and Richard Murphy (1422) has demonstrated that the problem is even worse for supercomputer-scale applications. The transition to multi-core microprocessors only exacerbates the problem, as there are more processing devices making memory requests over an increasingly taxed interface. As shown in Figure 1, many of Sandia's scientific high performance computing (HPC) applications are limited by the performance of current memory sub-systems.

The consortium is focused on the creation of technologies that can address this problem for the high performance computing community and other high-end systems.  Any large sparse problem can benefit from these advances.  Thus databases, high-end technical computing, data mining, and other applications with larger market segments can benefit.  The strategy is to focus on creating a commercially viable high-end memory system that supports HPC and these other data-intensive applications.  The current focus is on two memory architectures: an enhanced Fully Buffered Dual In-line Memory Module (FB-DIMM)-based Buffer-on-Board (BOB), the X-BOB, that integrates scatter/gather and other data movement operations into the memory system; and a 3D stacked Dynamic Random Access Memory (DRAM) architecture that moves memory significantly closer to processing resources to enable Processing-In-Memory (PIM) based architectures, known as X-Caliber. These architectures represent a fundamental shift from processor-centric architectures to systems built “from the memory out.”

The X-BOB is enabled by industry's transition from bus-based memory architectures to the FB-DIMM high-speed signaling approach. This transition is dictated by economic and performance requirements. As illustrated in Figure 2, it introduces an Application-Specific Integrated Circuit (ASIC), the X-BOB, to perform high-speed signaling between the processors and the memory system. That signaling device is the first logic chip placed in the memory system with significant spare area available to host other operations. Consequently, the proposed changes are nearly free.

The X-Caliber architecture represents the beginning of a research program to enable the commercially viable 3D stacking of memory on logic parts.  Currently, vendors are stacking memory on itself for density, or logic on logic for performance.  Other memory-on-logic experiments stack existing memory parts.  However, by creating a new memory part explicitly for stacking the consortium believes that higher performance and higher density can be achieved.  The end-goal is a memory interface that any processor, Field Programmable Gate Array (FPGA), or custom logic manufacturer can use.  This is an implementation path discovered by working with Micron as part of Sandia’s PIM LDRD project (105809). Both of these capabilities distinguish themselves from other efforts because they are being developed in collaboration with a memory manufacturer with a focus towards commercial implementation.

Figure 1:  Many of Sandia’s Applications spend most of their instructions accessing memory or doing integer computations, not floating point. Additionally, most integer computations are computing memory addresses. The X-MEM work focuses on accelerating memory performance.

Figure 2: The X-BOB provides data marshaling close to the memory, reducing the number of round-trip communications, and improving latency.

 (Contact: Richard Murphy and Arun Rodrigues )
August 2007


Improving data mining with tensor decompositions

For many years, data mining has benefited from linear algebraic techniques and matrix decompositions, such as the singular value decomposition (SVD).  These techniques have proven successful in challenging problems, from web search to social network analysis.  Recently, research at Sandia with funding from the LDRD program has taken data mining to another dimension, quite literally. Sandia researchers Brett Bader (1416) and Tamara Kolda (8962) have moved beyond matrices and have formulated data mining problems using multidimensional arrays, which are analyzed subsequently with tensor decompositions. By incorporating meta-data in the problem as an extra dimension, these novel techniques extract latent information and subtle relationships in the data that are missed by matrix techniques.  They have been applied to a wide range of problems related to national security, including social network analysis, web link analysis, cross-language information retrieval, and email surveillance. 

For instance, in joint research with Prof. Michael Berry of the University of Tennessee, a new approach was developed for automatic conversation detection in email over time where a term-by-author-by-day array encodes the email collection (Figure 1).  Also, in collaboration with Peter Chew (6341), the concept of a massively parallel Rosetta stone for cross-language information retrieval and multilingual clustering was implemented with better effectiveness using tensor decompositions on an array of term-by-sentence-by-language data of a multilingual parallel corpus.  To facilitate research with tensors, the Tensor Toolbox for MATLAB (http://csmr.ca.sandia.gov/~tgkolda/TensorToolbox/) was created and now has more than 700 registered users from all over the world.  An ongoing effort is to achieve scalable solutions for larger data mining problems using Sandia’s expertise in high performance computing.
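To make the idea concrete, the sketch below implements a minimal CP (PARAFAC) decomposition of a three-way array by alternating least squares in NumPy; it is illustrative only (the work described above used the Tensor Toolbox for MATLAB), and the array shapes stand in for, e.g., a term-by-author-by-day count tensor.

```python
# Minimal CP (PARAFAC) decomposition of a 3-way array via alternating least squares.
# Illustrative sketch only; not the Tensor Toolbox implementation referenced above.
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (m x R) and V (n x R), giving (m*n x R)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(X, rank, n_iter=50, seed=0):
    """Factor X (I x J x K) into factor matrices A (I x R), B (J x R), C (K x R)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings; Fortran-order reshapes give the standard column ordering.
    X1 = X.reshape(I, J * K, order='F')
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K, order='F')
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J, order='F')
    for _ in range(n_iter):
        A = X1 @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
        B = X2 @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
        C = X3 @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
    return A, B, C

# Example: factor a small synthetic "term-by-author-by-day" array at rank 5.
X = np.random.default_rng(1).random((30, 20, 10))
A, B, C = cp_als(X, rank=5)
print(A.shape, B.shape, C.shape)
```

Each column triple (A[:, r], B[:, r], C[:, r]) is a rank-one component; in the email example above such a component corresponds to a topic, the authors who discuss it, and its activity over time.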

Figure 1: Tensor analysis of 53,733 emails released from the federal investigation of Enron extracts conversations (topic, participants, and temporal profile), ranging from routine business topics to a weekly betting pool involving two dozen employees.

 (Contact: Brett Bader)
July 2007


LAMMPS Molecular Dynamics Simulations Used to Explain How Shock Waves Cause Structural Phase Transformation in Crystalline Solids

Aidan Thompson (1435) has used large scale molecular dynamics (MD) simulation in a preliminary study of shock waves propagating through single crystal cadmium selenide. The high pressure generated by the shock wave causes the crystal to transform from a hexagonal wurtzite structure to a cubic rocksalt structure.  In a previous experimental study by Marcus Knudson (1646) the transformation kinetics of cadmium sulfide were observed to be strongly dependent on the direction of the shock wave relative to the crystal orientation and also on the shock strength.  Cadmium sulfide and cadmium selenide are closely related materials with identical crystal structures.  The MD simulations have shown in detail how the atoms in these materials rearrange in different ways depending on the orientation and strength of the shock wave.

The simulations were performed by impacting a long narrow block of crystal into a perfectly rigid wall at speeds ranging from 0.1 to 1 km/s.  The collision causes a supersonic shock wave to propagate back through the crystal from the impact surface. The longitudinal pressure that develops in the shocked crystal increases quadratically with impact velocity.  The minimum pressure required to transform the wurtzite into rocksalt was found to be higher along the a-axis of the wurtzite crystal than the c-axis. Below this transformation pressure, the wurtzite remains in an elastically compressed state for the duration of the simulation.

Above the transformation pressure, the pathway to the rocksalt phase was different for the two different shock directions.  For shocks along the c-direction, the rocksalt phase formed directly from the elastically compressed wurtzite, with rocksalt planes forming parallel to the original wurtzite planes. This behavior is similar to what has been observed experimentally under static loading conditions. By contrast, in the a-direction, the crystal transformed first to a face-centered tetragonal (FCT) structure, which then collapsed to form regions of rocksalt that were not aligned with the wurtzite planes.  This latter situation is illustrated below, which shows a closeup snapshot of a 16 GPa shock moving from left to right along the a-axis of the wurtzite phase.  The atoms are colored by their local crystal structure: elastically compressed wurtzite (gray), tetragonal (red) and rocksalt (blue).  The tetragonal phase remains interspersed with rocksalt in the transformed material.

These simulations provide a unique insight into the detailed atomic mechanisms by which phase transformations occur in materials under high strain rate uniaxial loading conditions.  This study, supported by Campaign 2, was undertaken as part of efforts led by Clint Hall (1646) to establish at SNL a capability to treat the full science of dynamic material response: experiments, theory, and modeling and simulation.  The preliminary results from Aidan’s work, performed in collaboration with Marcus Knudson, were recently presented at JOWOG32 Materials, held at LANL June 18-22, and at the APS topical meeting on Shockwave Compression in Condensed Matter, held on the Kohala Coast of Hawaii June 25-29.

 (Contact: Aidan Thompson)
July 2007


Visualization Sandbox Project is recognized as outstanding New Mexico small business technical assistance initiative activity

The Visualization Sandbox Project is one of 293 projects sponsored by Sandia National Laboratories and Los Alamos National Laboratory under their New Mexico Small Business Assistance (NMSBA) Initiative. On March 8 the NMSBA hosted an Outstanding Innovation event with demonstrations in the New Mexico Roundhouse and presentations at La Fonda Hotel in Santa Fe. The Visualization Sandbox Project was one of eight projects selected for recognition as outstanding at this event.

The Visualization Sandbox group comprises five New Mexico small businesses that combine domains of subject matter expertise ranging from simulation, visualization, and modeling to applied complexity, business management, and new product development. The group's common goal is to develop tools and capabilities that enable decision makers to reduce risk and increase agility in the face of complex business and environmental situations. This NMSBA project brought together Sandia and the small businesses to brainstorm. The result was a plan to develop computer simulations, visualizations, and human-computer interfaces not seen before in an integrated experience. Sandia contributed deep experience in designing visualization and machine vision applications to this open and creative collaboration. As part of this NMSBA assistance project, Sandia supplied an Augmented Reality Table instrument, equipment designed and constructed at Sandia. Sandia moved the table to the University of New Mexico ARTS Lab, where the Visualization Sandbox group experimented with group interaction using digital imaging on the table surface.

The brainstorming and experimenting have focused the NMSBA project on an idea that builds on the dynamic-but-flat table display: the group is now experimenting with using sandtables for running scenarios, using the surface of the sand as a tangible interface to the computer modeling technology that runs the scenarios. In an application by Redfish Group, people around the sandtable use their hands to sculpt the sand surface to approximate the topography surrounding Los Alamos. During the sculpting, graphics generated from a computer's digital representation of Los Alamos are projected onto the white sand to provide colorful, three-dimensional guidance. With sculpting complete, placing a physical marker on the sand surface defines a point of initiation for a simulated wildfire. Once initiation and other parameters define a scenario, a computer uses images of the sandtable from an inexpensive camera to quantify the scenario parameters. The computer then executes a Redfish numerical simulation to predict how the wildfire will advance. Projecting computer graphics onto the white sand surface provides a fully three-dimensional display of the advancing fire.

Stephen Guerin of Redfish Group says, "We see many potential applications in fire planning, emergency response, education, and military strategy and tactics. As small companies, we see this as an invaluable opportunity to gain national attention in these spaces. And we think this project, released back into the public domain as an open-source effort, will further bolster the reputation of New Mexico as being on the frontier of simulation science and computer visualization." Carl Diegert, Sandia's principal investigator, says, "A human tactile-visual relationship with active engagement of the sand surface, together with the startling fidelity and power of imaging projected onto the changing sand surface, enables an unprecedented means for a group to explore, for example, how a wildfire responds to topography and wind. Imagine a fire marshal sharing his experience by working with a student to shape the sand surface, setting up a scenario on the fly in response to a question about what would happen if the valley topography was more like this. Predictions by existing agent-based models projected onto the sand surface become better understood, and therefore more valuable. Deficiencies of extant models become clear, and can form a basis for new models with expanded predictive capabilities." This fiscal year is the first in a planned multi-year assistance activity.


Sandbox Visualization Project receives recognition award at 2007 Outstanding Innovation event.
Left-to-right: Luciano Oviedo, ParaDigmension; Rich Marquez, LANL; Carl Diegert, SNL PI; Joshua Thorp,
Redfish Group; Jerome Mattingly, NM Angels; Leonard Martinez, SNL; Stephen Guerin, Redfish Group;
Victor Chavez, SNL.

(Contact: Carl Diegert)
July 2007


A Simulation and Design Capability for Singlet-Oxygen Generators for Chemical Oxygen Iodine Lasers

The singlet oxygen generator (SOG) is a low-pressure, multiphase-flow chemical reactor that is used to produce molecular oxygen in an electronically excited state, i.e., singlet delta oxygen. The SOG is the initial stage in the chemical oxygen iodine laser (COIL), which has important applications both for military purposes (it was initially developed by the US Air Force in the 1970s) and, because the infrared beam is readily absorbed by metals, for industrial cutting and drilling.

The SOG is a complex multiphase reacting-flow system, with a gas mixture of chlorine and helium reacting with dispersed droplets of an aqueous solution of hydrogen peroxide and potassium hydroxide (Figure 1). The primary product of the reactor, the energetic oxygen, is subsequently used in the COIL to dissociate and energize iodine, which in turn is accelerated to a supersonic speed and lased. Researchers at the Air Force Research Laboratory approached Sandia researchers to develop a computational simulation and design capability for the SOG based on our expertise in developing advanced algorithms for reacting flow simulation on HPC platforms.

A multidisciplinary team of Sandians from organizations 1437, 1416, 1411, and 1514 is working under a WFO-MIPR contract with the AFRL to create a multiphase reacting flow model within the PDE-solution research tool Charon to predict flow, conversion, and the attendant efficiencies, utilizations, and yields in various configurations of the SOG reactor. The multiphase model that was chosen is the so-called Eulerian-Eulerian form of the Navier-Stokes equations, wherein one set of equations represents the gas phase and another "m" sets of equations represent the liquid phase, one for each of "m" representative droplet sizes. This formulation can lead to over 50 coupled PDEs solved over complex geometries, and thus we are developing algorithms to harness the power of large parallel computing architectures to solve the steady-state and transient forms of these equations.
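Schematically, the Eulerian-Eulerian (two-fluid) approach treats each phase as an interpenetrating continuum with its own volume fraction and velocity field. A generic set of phase balances, shown here only to indicate the structure (it is not necessarily the precise form implemented in Charon), reads for each phase \(k\) (the gas, or one of the "m" droplet classes):

\[
\begin{aligned}
&\frac{\partial}{\partial t}(\alpha_k \rho_k) + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k) = \Gamma_k, \\
&\frac{\partial}{\partial t}(\alpha_k \rho_k \mathbf{u}_k) + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k \mathbf{u}_k)
 = -\alpha_k \nabla p + \nabla \cdot (\alpha_k \boldsymbol{\tau}_k) + \alpha_k \rho_k \mathbf{g} + \mathbf{M}_k,
\qquad \sum_k \alpha_k = 1,
\end{aligned}
\]

where \(\alpha_k\) is the phase volume fraction, \(\Gamma_k\) the interphase mass exchange (for example, chlorine absorption into and oxygen release from the droplets), and \(\mathbf{M}_k\) the interphase momentum exchange such as drag. Each representative droplet size contributes its own set of these balances, along with species and energy equations, which is how the system grows to more than 50 coupled PDEs.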

(Contact: Larry Musson )
June 2007


Sandia's ThreatView™ Advances State-of-the-Art in Scalable Information Analysis

Figure 1: Screenshot of the ThreatView™ application, showing landscape analysis of a large dataset consisting of over 41,000 documents. The large landscape view shows clustering of these documents based on links between authors and years that they published (unstructured data). The graph view at right shows a force directed layout of all the documents where vertex color corresponds to the number of connections to other documents. The landscape view shows a closeup of the center right portion of the graph. ThreatView is built on Sandia National Laboratories’ scalable Titan informatics toolkit, which promotes flexible design of targeted applications – all of which are architected to handle large data.

Current intelligence analysis faces significant problems arising from the ever-increasing scale and complexity of data. In addition, rapidly changing threats require nimble response from both analysts and their tools. Current analysis tools cannot easily adapt to changing threats and cannot scale to large data. A fundamentally new solution is required. Sandia National Laboratories presents ThreatView™, an innovative application designed for the changing nature of analysis.

In order to construct a coherent picture of events of interest, intelligence analysts “must master and exploit a raging flood of data from an ever-wider range of sources of information.” (source: George Tenet 2000) Email communications, publications, commercial transactions, and other ‘abstract’ information form the basis of many analysis tasks. These complex, heterogeneous data sets are often extremely large – tens of millions of items and hundreds of gigabytes of storage space – and are difficult to visualize in ways that may work well for smaller data. For example, how does one visualize the entire set of a corporation’s patents to determine areas of expertise? How does one study email communications to discover useful patterns, paths, or connections relevant to terrorist threats? How does knowing the ‘signature’ of one terrorist event help us discover and disrupt the next one? These types of questions are answered by investigating abstract data through queries and interactive exploration. Complex relationships in the data are often hidden or not obvious. To identify these relationships, analysts follow an iterative process of discovery and refinement, moving quickly and often between detailed inspection of just a few items and broader overviews of the whole of the data at hand. Analysis tools must scale in the same way, from thousands or millions of items to a handful and back again.

Sandia's ThreatView is a scalable information visualization application that leverages Sandia's expertise in scalable scientific visualization to promote exploration of extremely large datasets. ThreatView supports multiple linked views (see Fig. 1), which allow the user to explore information in several different ways at once. Linked selection between the views promotes investigation of the data by highlighting selected items in all views simultaneously, so the user can quickly understand all aspects of the data. ThreatView's architecture is flexible, promoting quick integration of a variety of external technologies, either through plugins or by importing/exporting important file formats.

ThreatView is the next generation of capability that was first delivered in Sandia’s LDRDView application. Built on a scalable software architecture, ThreatView is designed to process, analyze and display datasets that cannot be viewed in currently available tools. This tool, funded in part by the PATTON program, will soon be delivered to external customers.

(Contact: David H. Rogers)
June 2007


CUBIT Advances Automation of Geometry Preparation and Meshing

As part of an ASC level 2 milestone, the CUBIT team has developed a new environment for guiding analysts through the process of generating a hexahedral or tetrahedral mesh for simulation. This new environment is called the Immersive Topology Environment for Meshing (ITEM) and is built on the existing CUBIT Geometry and Meshing Toolkit. New geometric reasoning algorithms have been developed that can detect potential problems in a CAD model and provide a list of suggested solutions. These are offered in a wizard-like environment where the user may systematically step through the geometry preparation and meshing process and run a series of diagnostic tests. The user is then presented with a list of solutions to specific geometric problems that can be easily previewed and performed at will. With phase 1 of ITEM complete, the tool can now generate diagnostics and solutions for a range of geometric problems, including resolving small features, detecting and fixing imprint/merge problems, and detecting potential decomposition options for hexahedral sweeping. Scheduled to be completed in August 2007, this new tool promises to dramatically improve the productivity of analysts who currently must spend considerable time developing meshes for simulation.

An example of ITEM used to mesh a CAD model representing a weapon component.  Left: ITEM detects and previews an option for decomposing the model for hexahedral mesh.  Right: The final hexahedral mesh after using ITEM to help prepare the CAD model.

(Contact: Steve Owen)
May 2007


Progress on SciDAC Climate Modeling

The DOE SciDAC Climate Consortium project is focused on developing a highly scalable climate model with a full carbon cycle. Scalability has motivated us to evaluate the use of cubed-sphere dycores in the atmospheric component (CAM), while the emphasis on carbon cycle modeling and the associated chemistry requires a dycore with a locally conservative, positive-definite advection algorithm. As research on positive-definite dycores progresses, we have been evaluating CAM using the HOMME-SE (spectral element) dycore. Initial runs of CAM with HOMME-SE showed that the dissipation mechanisms in HOMME-SE were far too viscous in the presence of the strong enstrophy cascade typical of realistic climate simulations. To remedy this problem, we implemented a hyper-viscosity term that replaces the conventional viscosity and the element-based filter used in HOMME (motivated by the fact that hyper-viscosity has been used in CAM since its inception as CCM0). This work required the development of a weak-form divergence operator, which we also used in the continuity equation to achieve exact, local conservation of mass and exact conservation of total energy. Both of these improvements have been tested in the shallow water equations, and we are currently evaluating their performance in CAM-HOMME using the aqua planet model problem. The shallow water results are shown in the following figures (vortex breaking problem from Polvani et al., JAS 1996). We show the reference solution (the shallow water version of the CAM Eulerian dycore) and two spectral element solutions. All the simulations have the same number of points around the equator. With hyper-viscosity, the spectral element solution is equivalent to the CAM reference solution, while the conventional viscosity, at the level necessary for a stable solution, is far too viscous.
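Schematically (with illustrative notation and coefficients), the change replaces a conventional Laplacian dissipation term on a prognostic field \(q\) with a higher-order operator:

\[
\frac{\partial q}{\partial t} + \cdots = \nu\, \nabla^2 q
\qquad \longrightarrow \qquad
\frac{\partial q}{\partial t} + \cdots = -\nu_4\, \nabla^4 q .
\]

Because the biharmonic operator scales as the fourth power of the wavenumber, it damps the smallest resolved scales strongly while leaving well-resolved scales nearly untouched; this scale selectivity is why the hyper-viscosity run below matches the reference solution while the conventional-viscosity run, at the coefficient required for stability, is overly diffusive.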

Reference solution. Potential vorticity on a lat-lon projection after 30 days of model time, computed with spherical harmonics at T85.

Spectral element solution with our new hyper-viscosity

 

Spectral element solution with conventional viscosity

(Contact: Mark Taylor)
May 2007


Computational Models of Biofilm Growth in Water Distribution Networks

The economic and social consequences of the accidental or hostile contamination of a water distribution network are enormous.  We need quick, effective ways to remove pathogens.  But biofilms naturally occur on pipe walls in water distribution systems and make decontamination more difficult.  Biofilms are an aggregation of microorganisms that excrete a protective extracellular polysaccharide (EPS) matrix.  This matrix adsorbs dissolved cationic species and microorganisms, thus protecting the microorganisms from disinfection.  Biofilms also slough off, releasing adsorbed contaminants at unexpected times.  To develop effective strategies we need to understand the growth and adsorptive behavior of biofilms through computational models.

Figure 1: Computational simulation of biofilm growth.
On the left, Day 0, the biofilm is composed entirely of active material, red. On the right, Day 10, the composition has changed: less active material, red, and increased inert biofilm, light blue.  The aqueous solution is dark blue.  The faint contour lines indicate the relative amount of nutrients available in the domain, the top (red) contour line has the highest nutrient concentration and the bottom (blue) contour line has the lowest concentration.  As we expect, the biofilm becomes inert without sufficient nutrients.

Sandians in Department 1411 are developing computational models of biofilm growth, transport, and nutrient consumption in aqueous environments. The models are based on Nihilo, a system for rapid development of high-performance parallel finite-element solutions of partial differential equations. Partners in Center 6300 are providing experimental measurements for model validation. Sandia's biofilm models are capable of modeling multi-species biofilms with differing nutrient consumption and growth mechanisms. The models predict (i) the nutrient concentrations in the aqueous solution and biofilm through a diffusion-reaction mechanism, (ii) the growth of the biofilm based on the nutrient consumption of each species, and (iii) the change in biofilm composition over time. Future simulations will include microorganism deposition from the aqueous solution. Eventually we will couple these models with optimization capabilities in Nihilo to provide a computational capability for computing optimal decontamination strategies over a broad range of scenarios.
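A representative, purely illustrative form of such a model couples a diffusion-reaction equation for a limiting nutrient \(S\) with Monod-type growth of active biomass \(X_a\) and its conversion to inert material \(X_i\) (the Nihilo-based models may use different species and rate laws):

\[
\begin{aligned}
\frac{\partial S}{\partial t} &= \nabla \cdot (D_S \nabla S) - \frac{1}{Y}\,\frac{\mu_{\max} S}{K_S + S}\, X_a, \\
\frac{\partial X_a}{\partial t} &= \frac{\mu_{\max} S}{K_S + S}\, X_a - k_d X_a,
\qquad
\frac{\partial X_i}{\partial t} = k_d X_a,
\end{aligned}
\]

where \(D_S\) is the nutrient diffusivity (typically smaller inside the biofilm than in the bulk water), \(Y\) is the biomass yield, \(\mu_{\max}\) and \(K_S\) are Monod parameters, and \(k_d\) is the inactivation rate. Equations of this type reproduce the qualitative behavior in Figure 1: where nutrients are depleted, growth stops and the active fraction converts to inert biofilm.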

(Contact: Judy Hill )
April 2007


ALEGRA's In-Line Meshing Library Enables Billion Element Simulations

Serial generation of large finite-element meshes is a serious bottleneck for large parallel simulations. To surmount this barrier, the ALEGRA shock and multi-physics project has developed a parallel mesh generation library that allows on-the-fly, scalable generation of simple finite element meshes. It has been used to generate more than 1.1 billion elements on 17,576 processors. The library operates on the assumption that the mesh generation process is deterministic. While each processor is provided with a complete specification of the mesh, it creates a full representation of only those elements that will be local to that processor. Since each processor has a complete specification of the mesh, no inter-processor communication is performed. The mesh generation proceeds through steps of decomposition, local element creation, and communication information generation. The final product of the library is a data structure that is passed off to ALEGRA in place of a mesh input file. Currently the library is limited to generating meshes of domains with cylindrical, tubular, and block shapes. Substantial control is allowed over the element density, and boundary condition regions can be specified on the surfaces and interior of the mesh.
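The essential idea, that every processor holds the full (small) mesh specification and deterministically derives only its own elements with no communication, can be sketched in a few lines; the structured-brick example and helper names below are illustrative and are not the ALEGRA library's actual interface.

```python
# Sketch of deterministic, communication-free parallel mesh generation:
# every rank computes the same decomposition from the global specification
# and then instantiates only its own elements. Illustrative only.
def local_element_range(n_global_elements, n_ranks, rank):
    """Deterministic block decomposition; every rank computes the same answer."""
    base, extra = divmod(n_global_elements, n_ranks)
    start = rank * base + min(rank, extra)
    count = base + (1 if rank < extra else 0)
    return start, start + count

def build_local_brick_mesh(nx, ny, nz, n_ranks, rank):
    """Create only the hexahedral cells owned by this rank from the global spec."""
    first, last = local_element_range(nx * ny * nz, n_ranks, rank)
    local_elements = []
    for e in range(first, last):
        i, rem = divmod(e, ny * nz)
        j, k = divmod(rem, nz)
        # Node coordinates, connectivity, and boundary-condition flags would be
        # generated here in the same deterministic fashion.
        local_elements.append((i, j, k))
    return local_elements

# Example: rank 3 of 8 building its share of a 40 x 40 x 40 brick (64,000 cells).
print(len(build_local_brick_mesh(40, 40, 40, n_ranks=8, rank=3)))
```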


Cylindrical and brick shaped meshes generated with ALEGRA's in-line mesh capability.

(Contact: David Hensinger)
March 2007


A Software Lifecycle Model for Research-to-Production Software Engineering

Development of production-quality software that begins as leading-edge scientific algorithms research is the charge of Sandia National Laboratories staff involved in enabling technologies for advanced computer modeling and simulations. In particular, development of solvers—a required element for many high-fidelity simulations—is an area where algorithmic advances are still critically important for future success, and yet any software that encodes these algorithms must eventually be production quality.

In this context, the Trilinos Lifecycle Model (TLM) recognizes the changing requirements for solver software engineering as work goes from proof-of-concept to production quality. The TLM is really a meta-model with three phases: (i) research, (ii) production growth and (iii) production maintenance. Each phase contains a set of requirements that allows developers to use the most appropriate tools and metrics for that phase. Furthermore, the TLM specifies a promotional event that must be completed in order to transition from one phase to the next.

All Trilinos packages start out in the research phase. Presently most packages are in this phase, since they are primarily focused on algorithm exploration and development. More mature packages such as Epetra, AztecOO, and IFPACK are in the production growth phase. No packages are in the production maintenance phase at this point, since all are still evolving in design and capabilities, but the TLM lays out the requirements for production quality as packages mature into the production maintenance phase.

The Trilinos Lifecycle Model by J. Willenbring, M. Heroux and R. Heaphy was accepted for publication in the proceedings of the 29th International Conference on Software Engineering to be held in Minneapolis, MN in May 2007.

(Contact: Mike Heroux)
March 2007


ERDC has Chosen to Purchase a Second Machine Based on the Red Storm Architecture

The U.S. Army Engineer Research and Development Center (ERDC) in Vicksburg, Mississippi announced that it will upgrade its Cray XT3 to dual core processors and install an XT4 in late 2007. The XT3 and XT4 product lines are based directly on the Red Storm architecture. This purchase is a testimony to the successful investment made by the DOE/NNSA ASC program in the joint Sandia/Cray development. The architecture has proven viable and upgradeable for a wide range of high performance scientific studies. Sandia provided expertise in the areas of system architecture, hardware and software design, and software development.

(Contact: Douglas Doerfler)
February 2007


Sandia Funded to Develop Risk Mitigation Software for ORNL's XT4 Supercomputer

Sandia received the first $100K of a $750K, 12-month effort to provide risk mitigation software for the DOE-SC Cray XT4 supercomputer at Oak Ridge National Laboratory. In the fall, that system will be upgraded to AMD's yet-to-be-released quad core Opteron CPUs. Sandia will be upgrading its Catamount Light Weight Kernel Operating System to support these new processors. The changes will also accommodate future technology releases of eight (or more) CPU cores per processor.

Cray is on contract with ORNL to provide the XT4 hardware and software. Cray has chosen to introduce a radical software upgrade at the same time as the hardware upgrade. Thus, DOE-SC has wisely chosen to pursue this risk mitigation path with Sandia for the software.

(Contact: Douglas Doerfler)
February 2007


Algorithms that Enable Newton Coupling of Multi-Physics Simulations

Sandia National Laboratories has successfully developed many physics simulation codes, whose potential for impact is greatly increased when coupled together to enable high fidelity, complex multi-physics simulations (e.g., fluid-structure interaction problems).  Within this context, the codes have traditionally been coupled using "weak" solution methodologies based on successive substitution.  This involves computing a solution from one code while keeping variables from all other codes fixed and sequentially stepping through each code in this manner – a numerical procedure with unreliable convergence properties.

As part of our LDRD, “Multi-Physics Coupling for Robust Simulation,” we have recently developed more advanced coupling solution methodologies that improve robustness and rates of convergence, yet do so without placing additional burdens on the codes being coupled.  Our methodologies approximate Newton’s method, the gold-standard for nonlinear solution methods, using the same information from each code needed for weak coupling.  We have demonstrated improvements in both robustness and rates of solution convergence in real-world code-coupling environments for problems of interest to Sandia. The algorithms have been incorporated in the Trilinos nonlinear solver package NOX, taking advantage of Trilinos’ software quality infrastructure. 

One step of the verification process for our Newton-based algorithms is shown in Table 1. This table shows results for a prototype problem involving two linear problems whose interdependence is controlled by the coupling parameter β. The Newton-based approach achieves its theoretical performance: one-step convergence for all values of β. Already evident in this test problem is a marked improvement in the robustness and time-to-solution of the Newton-based algorithm compared to weak coupling.

β       Newton-based    Weak
0.40    1               33
0.45    1               66
0.49    1               253
0.60    1               FAIL

Table 1: Number of coupling iterations required for convergence of a coupling verification problem involving two coupled linear problems whose interdependence is controlled by the parameter β.
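The comparison in Table 1 can be sketched on a toy pair of coupled linear equations; the problem below is chosen purely for illustration (it is not the LDRD verification problem, so iteration counts will not match the table exactly), but it shows the same qualitative behavior: one-step Newton convergence regardless of the coupling strength, versus weak coupling that slows down and eventually diverges as the coupling grows.

```python
# Toy comparison of weak (successive substitution) versus Newton coupling on
#     (1 - beta)*x + beta*y = 1
#     beta*x + (1 - beta)*y = 1
# Illustrative only; not the LDRD verification problem summarized in Table 1.
import numpy as np

def weak_coupling(beta, tol=1e-10, max_sweeps=1000):
    """Each 'code' solves its own equation with the other field frozen, alternating."""
    x = y = 0.0
    for sweep in range(1, max_sweeps + 1):
        x_new = (1.0 - beta * y) / (1.0 - beta)       # "code 1" solve, y frozen
        y_new = (1.0 - beta * x_new) / (1.0 - beta)   # "code 2" solve, x frozen
        if abs(x_new) > 1e12:                         # diverging: weak coupling fails
            return "FAIL"
        if abs(x_new - x) + abs(y_new - y) < tol:
            return sweep
        x, y = x_new, y_new
    return "FAIL"

def newton_coupling(beta):
    """Newton on the monolithic residual; exact in one iteration for a linear problem."""
    J = np.array([[1.0 - beta, beta], [beta, 1.0 - beta]])   # Jacobian of the residual
    r0 = -np.array([1.0, 1.0])                               # residual at the zero guess
    x1 = np.zeros(2) - np.linalg.solve(J, r0)                # one full Newton step
    assert np.allclose(J @ x1, [1.0, 1.0])                   # converged in one step
    return 1

for beta in (0.40, 0.45, 0.49, 0.60):
    print(f"beta={beta:4.2f}  Newton-based: {newton_coupling(beta)}  weak: {weak_coupling(beta)}")
```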

The application of this technology has continued, under ASC Algorithms funding, to impact state-of-the-art application codes where we have deployed our strong coupling algorithms within the Sierra framework.  Figure 1 shows results for a MEMS thermal actuator simulation requiring solution for coupled electric potential, temperature, and material displacement fields.  For a mild voltage load, our Newton algorithm achieves rapid quadratic convergence rates, as compared to only linear convergence associated with traditional weak coupling.  At higher voltage loads, the weak coupling fails while our strong coupling algorithm continues to obtain converged solutions (not shown).

Figure 1: Convergence behavior of Newton (strong) coupling and weak coupling algorithms for simulation of a MEMS Actuator device. The model involves Electro-Thermal-Mechanical coupling solved using the Aria code within the Sierra framework.

A more complicated example is shown in Figure 2, where now the coupling is between two codes that use different discretization methods to solve fundamentally different physics, and that are coupled at a shared interface.  The compressible flow code Premo uses a finite volume discretization to simulate flow and heat transfer over an airfoil, while the thermal code Calore utilizes a finite element discretization to solve for heat transfer within the airfoil. With this multi-physics model, the temperature and heat flux fields along the surface of the airfoil are solved for naturally as part of the simulation, not artificially specified by a boundary condition.

The above work highlights the success of our strategy of seeing our fundamental algorithm research through to deployment in application codes, not only in impacting the applications but also in generating new and relevant algorithmic research ideas. This problem motivated research into a variant of the strong coupling algorithm tailored to systems where the problem coupling involves considerably fewer unknowns than the whole system, e.g. at an interface.

Figure 2: Mach 3 Shock over a thermally interacting airfoil.  Simulations involve coupled compressible flow and heat transfer using the Premo and Calore codes in Sierra.

In addition to improving robustness and computational efficiency of running a multi-physics simulation, the Newton algorithm supplied by the strong coupling algorithm is the kernel of most sophisticated analysis and design capabilities: sensitivity analysis, parameter continuation, stability analysis, global error control, and PDE-constrained optimization.  Our work therefore opens the door to additional impact as an enabling technology of these capabilities for coupled applications.

(Contact: Russell Hooper)
February 2007


Sandia National Labs and Ohio State University Receive Best Algorithms Paper Award at IPDPS07

Researchers from Sandia National Laboratories and the Ohio State University were awarded the Best Paper award in the Algorithms track of the 2007 International Parallel and Distributed Processing Symposium (IPDPS).

Their paper, "Hypergraph-based Dynamic Load Balancing for Adaptive Scientific Computations," presents a novel algorithm for redistributing data in adaptive parallel simulations.  As an adaptive simulation's computational requirements change, the algorithm rebalances processor workloads while keeping interprocessor communication costs and data redistribution costs low.  The new method exploits the robust and accurate hypergraph partitioning model to reduce average total communication costs by roughly 20% compared to traditional graph repartitioning methods.  The algorithm will be released this winter in the Zoltan Parallel Data Management Toolkit, open-source software available at http://www.cs.sandia.gov/Zoltan.  Authors of the paper are Umit Catalyurek and Doruk Bozdag of the Ohio State University, and Erik Boman, Karen Devine, Robert Heaphy, and Lee Ann Fisk Riesen of Sandia National Laboratories.

IPDPS is a highly competitive international conference (sponsored by IEEE) covering all aspects of parallel computation, including algorithms, applications, architectures, and system software.  Only 109 out of 419 total submissions were accepted and the award-winning paper was judged best of the submissions to the Algorithms track.  The award will be presented March 28, 2007, in Long Beach, CA.

(Contact: Karen Devine)
February 2007


Peridynamics Predictions of Delamination in Fiber Reinforced Composites Used by Boeing to Design Better Aircraft Fuselage

Staff from Department 1435, led by Stewart Silling, and Boeing Company’s Phantom Works applied Silling’s Peridynamics theory to predict the damage to laminated composite panels due to impact by hail. In this work, supported under Sandia's Umbrella CRADA with the Boeing Company, many different possible combinations of lamina properties were modeled with Sandia's EMU code, which is an implementation of the Peridynamics theory. These computational results were subjected to a statistical analysis that helped reveal which composite layups would provide maximum damage tolerance at reduced weight for fuselage materials on future aircraft such as the Boeing 787, now under development. A typical damage prediction is illustrated in Figure 1, which shows the delaminations within a laminated composite due to impact by a spherical hail particle. The coloring indicates separation (increasing from blue to red) between adjacent plies in the laminate.

Continuum scale simulation of damage and failure has been dramatically advanced by Silling’s fundamentally novel, mesh-free approach to computational solid mechanics. In this approach, the conservation relations are cast as integral equations that make the theory inherently capable of simulating defect and crack development. The technique treats crack growth consistently with other forms of material deformation and failure. Peridynamics permits cracks to nucleate and grow spontaneously and unguided, providing a breakthrough in continuum simulation methods that allows EMU to model complex patterns of damage and fracture. The technique is finding increasing application to investigating, characterizing, and understanding fracture, penetration, blast, and fragmentation phenomena. It has demonstrated an amazing level of verisimilitude, reproducing well known dynamic fracture phenomena in a predictive manner.

In an active collaboration, Boeing Phantom Works is supporting on-going development of the EMU code and Peridynamics theory, along with applications of EMU simulations to investigations of damage of composite structures.

Side View

Plan View

Figure 1. Delamination pattern predicted by EMU.

(Contact: John Aidun )
February 2007


National Institutes of Health’s National Heart, Lung, and Blood Institute Award to Elebeoba May

Elebeoba May (1412) was awarded a Mentored Quantitative Research Career Development Award by the National Institutes of Health's National Heart, Lung, and Blood Institute. The prestigious five-year grant, the first of its kind at Sandia, will support Elebeoba in receiving training in the molecular aspects of infectious diseases and in participating in a mentored research program focused on understanding the genetic basis of Mycobacterium tuberculosis (Mtb) latency and reactivation. The research objective is to quantitatively identify, model, and analyze the genetic networks, metabolic networks, and immune response pathways that are involved in the Mtb latency and reactivation process.

Specific project aims include: 1) determination of host-Mtb gene expression during a murine model of Mtb infection and subsequent development of latency; 2) development of a quantitative model of Mtb and host during infection, relating gene-level expression to metabolic pathways and metabolic pathways to cellular immune response; and 3) simulation and quantification, for a given microarray expression profile, of host-pathogen interactions in the Mtb-mouse immune response during latency and reactivation. The quantitative models, which will be constructed using the BioXyce platform developed by Rich Schiek (1437) and Elebeoba May (1412), will be used to perform virtual Mtb knockout experiments to identify genes that influence latency and reactivation. In vivo anti-sense knockout experiments will be performed to validate whether the genes identified by the quantitative model impact latency and reactivation. This work will augment current understanding of latency and reactivation in Mtb, a pathogen that causes significant morbidity worldwide. Research will be conducted collaboratively with Rick Lyons of the University of New Mexico Health Science Center/School of Medicine and Alan Perelson of Los Alamos National Laboratory.

(Contact: Mark D. Rintoul )
January 2007


Red Storm computer exercise by Mark Boslough (1433) is one of Discover Magazine's "Top Science Stories of 2006"

#81 Tut Jewel Formed By Asteroid Impact

The central jewel in King Tutankhamen's pectoral gear may have been literally out of this world—the result of an asteroid that exploded above the Sahara—according to Mark Boslough, a researcher at Sandia National Laboratories. In 1998 researchers elsewhere determined that Tut's jewel wasn't chalcedony but an unusual type of desert glass. Boslough, who was part of a team that created a computer simulation to predict what would happen when comet Shoemaker-Levy hit Jupiter, was later enlisted to figure out if the gem was meteoric in origin.


Boslough used Sandia's Red Storm supercomputer to ask what conditions would be required to melt Saharan sand into glass. The winning scenario: a 400-foot-wide stony asteroid that slammed into the air at 12 miles a second and exploded. For 20 seconds the resulting fireball would have been hot enough to melt quartz on the ground, creating glass that can still be found in the desert. Ancient Egyptians might have rightly recognized such ornaments as more precious than gold.    

(Contact: Mark Boslough )
January 2007

