
ASC @ Sandia


Contacts

ASC Program Director
James Peery (jspeery@sandia.gov)
(505) 845-9490

ASC Communications
Reeta Garber (ragarbe@sandia.gov)


ASC News


Call for Posters

* Submission Deadline: July 31, 2006 *

The SC06 conference solicits submissions for posters displaying cutting-edge research in high-performance computing and networking. Posters at SC06 will occupy a prominent viewing location throughout the conference. An evening Posters Reception will feature the posters and allow conference attendees to discuss the displays with poster presenters. Posters are an excellent way to convey ideas and results not developed fully enough for technical publication. Posters offer a means of presenting timely research in a more casual setting, with the chance for greater in-depth, one-on-one dialog that can proceed at its own pace.

The Posters Committee will select posters of exceptional technical quality, originality, relevance, and clarity. Abstracts of 150 words for accepted posters will be included in the SC06 CD-ROM Conference Proceedings. A prize will be awarded for the "Best Poster"; this award will be announced during the SC06 Awards Session and will be based on both poster content and presentation. All poster presenters must register for the SC06 technical program. If your poster submission is accepted for inclusion in the conference, you will be sent a set of guidelines and instructions for poster preparation and presentation, to which you must adhere.
The Submissions Website, www.sc-submissions.org, is now open. Poster submissions are due July 31, 2006.


Alexander Applauds House Action on Supercomputing Bill
Legislation now goes to President for his signature

WASHINGTON – U.S. Sen. Lamar Alexander (R-TN) today commended the House of Representatives for passing the High-End Computing Revitalization Act of 2004.

The bill authorizes $165 million over the next three years to the Department of Energy’s Office of Science to support high-performance computing. It also authorizes the Secretary of Energy to establish and operate a leadership facility for high-performance computing and establishes a software development center to support the software needed for high-end computing platforms. Earlier this year the Energy Secretary announced Oak Ridge National Laboratory would be the site for the leadership class facility.

“The passage of this bill is good news for Oak Ridge and the entire nation,” said Alexander. “While the U.S. has recently recaptured the lead in high-performance computing from Japan, our latest high-performance supercomputers are not generally available to the broader scientific community. This legislation will authorize DOE to pursue a leadership class facility at Oak Ridge that will be made available to all U.S. scientists, including our nation’s industries, based on a rigorous peer review process.”

Recently, the BlueGene/L supercomputer at Lawrence Livermore National Laboratory, which is devoted to nuclear weapons simulations, took over the top spot on the list of most powerful supercomputers from Japan’s Earth Simulator. NASA’s Columbia supercomputer, which is devoted to NASA missions, became the second most powerful supercomputer in the world. While these supercomputers represent significant advances for the U.S., these systems are not generally available to the broader scientific community. The BlueGene/L supercomputer is nearly 2 times more powerful than Japan’s Earth Simulator, and NASA’s Columbia supercomputer is nearly 1.5 times more powerful than the Earth Simulator.

The National Research Council of the National Academy of Sciences released a report last week urging the federal government to accelerate advances in supercomputing and recommended a need for greater focus on software development for supercomputers.

Reps. Judy Biggert (R-IL), Lincoln Davis (D-TN) and Bart Gordon (D-TN) are House sponsors of the bill, H.R. 4516. The Senate bill was sponsored by Sens. Alexander, Jeff Bingaman (D-NM), Norm Coleman (R-MN), Mark Dayton (D-MN), Mike DeWine (R-OH), Maria Cantwell (D-WA), Patty Murray (D-WA) and Tom Harkin (D-IA). It was amended in the Senate to include the establishment of a software development center.

The legislation now goes before President Bush for his signature to become law.


Winners Announced For HPCwire Choice Awards

HPCwire, the publication of record for high-performance computing, has announced the winners of the 2004 Reader’s Choice Awards. The second annual polling of HPCwire’s global readership produced the winners of the Reader’s Choice Awards, and the publication’s world-class collection of contributing editors selected the winners of the Editor’s Choice Awards.

The awards will be presented to the executives leading these prestigious firms at the SuperComputing 04 (SC04) Conference and Exhibition being held in Pittsburgh, PA. The award ceremony will be at the HPCwire booth (921) on the trade show floor during the Gala Evening on November 8, 2004 beginning at 7pm and running until the show closes at 9pm.

Following is the list of winners of the HPCwire 2004 Reader's Choice Award:

HPCwire 2004 Reader’s Choice Awards

Most Important Emerging Technology: Cray—Red Storm
Most Important Software Innovation: NPACI—Rocks
Most Innovative HPC Technology: Cray—Red Storm
Most Innovative Implementation: PSC—Red Storm
Most Innovative—Storage Technology: Panasas and SGI (tie)
Greatest Price Performance—Storage: HP
Most Innovative—Hardware Technology: Cray
Greatest Price Performance—Hardware: SGI
Most Innovative—Software: NPACI—Rocks
Greatest Price Performance—Software: Pathscale
Most Innovative—Visualization: SGI
Greatest Price Performance—Visualization: AMD
Most Innovative—Networking: Myricom and Mellanox (tie)
Greatest Price Performance—Networking: Intel
Most Innovative—Cluster Technology: Linux Networx
Greatest Price Performance—Cluster: Dell
Best Collaboration Government And Industry: ORNL & Cray

Following is the list of winners of the HPCwire 2004 Editor's Choice Award:

HPCwire 2004 Editor’s Choice Awards

Most Important Emerging Technology: Cray—Red Storm
Most Important Software Innovation: NPACI—Rocks
Most Innovative HPC Technology: Cray—Red Storm
Most Innovative Implementation: NASA/Intel/SGI
Most Innovative—Storage Technology: EMC
Greatest Price Performance—Storage: Sun and Data Direct (tie)
Most Innovative—Hardware Technology: SGI
Greatest Price Performance—Hardware: IBM and NEC (tie)
Most Innovative—Software: MSC Software
Greatest Price Performance—Software: The Portland Group
Most Innovative—Visualization: SGI
Greatest Price Performance—Visualization: SGI and Sun (tie)
Most Innovative—Networking: Quadrics
Greatest Price Performance—Networking: Juniper
Most Innovative—Cluster Technology: Linux Networx
Greatest Price Performance—Cluster: Linux Networx
Best Collaboration Government And Industry: NASA/Intel/SGI

All winners will be formally presented with their Reader’s and Editor’s Choice awards on the SC04 show floor, and details of their technologies will subsequently be covered either in LIVEwire, HPCwire’s annual 3-edition breaking-news special distributed electronically during the event itself, or in regular editions of HPCwire.

(return to News page)


Sandia Labs’ Supercomputer Based on Plan Hatched in 1922

By John Fleck

Albuquerque Journal Staff Writer

Early 20th century British scientist Lewis F. Richardson would almost certainly not recognize “Red Storm,” the massive supercomputer being assembled at Sandia National Laboratories.

But Red Storm is a remarkable execution of an idea proposed nearly a century ago by Richardson—World War I ambulance driver, mathematician and Renaissance man.

When it is fully operational late this year or early next, Red Storm will likely be the fastest computer the world has ever seen, returning to the United States a title that the Japanese “Earth Simulator” took in June 2002.

Richardson just wanted a way to forecast the weather, and he could not possibly have imagined much of the computer technology at Red Storm’s heart.

But the basic concepts he envisioned—a way for many calculations to swiftly be shared and integrated into a useful whole—lie at the heart of the machine to be built at Sandia.

Cray Inc. is building the $90 million Red Storm for Sandia's nuclear weapons work. It will use 11,648 computer chips identical to the ones that power desktop computers.

The chips will be tied together in an architecture that bears a striking resemblance to Richardson's idea.

“The guy was so far ahead of his time,” said Dave Gutzler, a University of New Mexico climate researcher and aficionado of Richardson's work. “Almost nobody understood the significance of what he was doing.”

To forecast the weather, Richardson carved up Earth's atmosphere into a three-dimensional grid and developed equations needed to describe the movement of heat, air and water.

For three years, using his spare time while driving his World War I ambulance, he did the laborious calculations necessary to forecast a single day's weather.

Even now, with nearly a century’s advances in meteorology, it remains clear that Richardson got the basic physics right, Gutzler said.

But taking years to calculate a single day's forecast is obviously not a practical approach to the problem, noted Barney Maccabe, director of the University of New Mexico’s Center for High Performance Computing.

Richardson did not imagine the electronic computer. Instead, the solution he imagined was a vast auditorium with 64,000 people, each doing calculations, passing results to their neighbors, then calculating anew.

“He compared it to a symphony orchestra, with a director down in the middle coordinating everything,” Gutzler said.

Fast forward to the 21st century, when supercomputer builders tie together vast numbers of individual computer chips into massive machines.

But solving most problems takes more than simply having a lot of computer chips. Eventually, each chip needs to share data with other chips elsewhere in the machine—like Richardson’s people passing bits of paper around.

And that can become an enormous bottleneck in supercomputer performance. Each millisecond a chip spends waiting for data is a millisecond of wasted performance.

“All those things slow down computing,” he said.

For 21st-century supercomputers, the problem is compounded by the fact that increases in communication speed have lagged behind increases in the speed of the computer chips that do the calculating.

“Communication does not and has not kept up,” Maccabe said.

To solve the communication problem, Red Storm returns to the simple idea pioneered by Richardson—passing data to your neighbors.

Red Storm is the latest in a line of supercomputers the U.S. Department of Energy has been building at its three nuclear weapons laboratories since the 1970s to help design and maintain U.S. nuclear weapons.

That program has been in high gear since the mid-1990s, as the U.S. nuclear weapons laboratories attempt to substitute computer simulations for underground nuclear tests, banned since 1992.

At any given time, it is not uncommon for one of the lab’s computers to hold the title of “world’s fastest computer.”

Most recently, it was Lawrence Livermore National Laboratory in California, which held the title for 18 months, beginning in November 2000. Before that, a machine built for Sandia by Intel held the title from June 1997 to November 2000.

The title moved across the Pacific when the Earth Simulator came on line in 2002, but Sandia and Cray appear poised to bring it back.

The Earth Simulator’s solution to what supercomputer engineers call “the message-passing problem” was to build a remarkable spider web of cables that funnel each bit of data through a massively expensive central switch.

It is as if a central post office had been set up, with each message routed through its central office and then back out to the person who needs to receive it.

Red Storm’s solution, as Sandia supercomputer program manager Bill Camp explains it, looks a lot more like Richardson’s passing papers among neighbors.

For the kind of simulations Sandia wants to do, that approach works nicely, Camp said. Some messages take a while to reach their destination as they are passed from neighbor to neighbor, but most do not need to travel far.

“It turns out for most of our problems, the communications are between near neighbors,” he said.
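
The pattern Camp describes can be illustrated with a few lines of standard MPI. The sketch below is a generic nearest-neighbor exchange on a one-dimensional grid, not the actual Red Storm or SeaStar software; the grid size, array names, and update rule are illustrative assumptions.

```c
/* Illustrative nearest-neighbor exchange on a 1-D grid using standard MPI.
 * A generic sketch of the pattern described in the article, not the actual
 * Red Storm or SeaStar code; sizes, names, and the update rule are made up. */
#include <mpi.h>
#include <stdio.h>

#define NLOCAL 4   /* cells owned by each rank */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* local cells plus one "ghost" cell on each side for neighbor data */
    double u[NLOCAL + 2], unew[NLOCAL + 2];
    u[0] = u[NLOCAL + 1] = 0.0;
    for (int i = 1; i <= NLOCAL; i++)
        u[i] = rank * NLOCAL + i;      /* dummy initial values */

    int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* hand boundary values to the neighbors, like Richardson's calculators
       passing their results to the people sitting next to them */
    MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 0,
                 &u[0],      1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  1,
                 &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* an update that needs only each cell's immediate neighbors */
    for (int i = 1; i <= NLOCAL; i++)
        unew[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;

    printf("rank %d of %d updated its cells using only neighbor data\n",
           rank, nprocs);
    MPI_Finalize();
    return 0;
}
```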

A specialized communications chip called “SeaStar” was built to handle the neighbor-to-neighbor chatting.

It is that distribution of calculating labor and neighborly sharing that makes Red Storm, and other supercomputers being built on the same model, look remarkably like what Lewis Richardson proposed in 1922.

“It’s a beautiful metaphor for how a digital computer works,” Gutzler said.

(return to News page)


Sandia’s ‘Computing district’ thrives in Area 1

Welcome to the “computing district.” JCEL’s location, south of Bldg. 880 and near three relatively new buildings, is the most recent example of how Sandia planners are grouping buildings by district. “This is a concept that goes back to the Greeks,” says Sandia Site Architect Roy Hertweck of Planning and Project Development Dept. 10853. “It’s a loose planning concept we’ve been using at Sandia for at least the past 10 years. We try to fit buildings with similar hazards, infrastructure needs, security requirements, and work adjacencies where programs can work together.”

The district includes the Supercomputing Annex (SCA), the Integrated Network Security and Reliability Center (INSRC), a new Central Utilities Building (CUB), JCEL, and Bldg. 880, home of Sandia’s resident supercomputer, “ASCI Red TeraFLOPS.”

SCA will house the new-generation “Red Storm” supercomputer, says George Connor, Manager of Computer Operations Dept. 9335. With 10,000 computer processors configured in end-to-end cabinets, the coming Red Storm will operate at speeds of 40 trillion operations per second and higher. Red Storm needed a very large “free span” building (without internal columns) so users could configure the computer in unrestricted space. Project Manager Bill Hendrick (10824) worked with users on the electrical, cooling, and free-span requirements for the new building, rather than trying to install the machine in Bldg. 880. MV Industries worked with Sandia to design and build the new building for just under $5 million.

The increasing load and complexity of Sandia’s three computing networks led to INSRC, built to integrate Sandia computer and network specialists, the Corporate Computer Help Desk, network security, computer technicians, and computer security specialists. “INSRC puts all the right people in the right place with the right tools,” says Jim Smith, project manager (10824). Summit Construction completed the facility for Sandia.
(See also Lab News, April 2, 2004, page 6 story on Cyber Enterprise)

(return to News page)


Reports will urge supercomputer funding increase

EE Times story by Rick Merritt

August 11, 2003

San Jose, Calif. - Two government reports that could be published as early as this week are expected to lobby budget makers for more money for the design of custom supercomputer technologies. The reports mark a milestone, as researchers turn away from clusters of systems based on off-the-shelf microprocessors and instead move toward significant investment in custom processor and interconnect designs that characterized an earlier era of high-performance computers.

The National Academy of Sciences (NAS) is expected to publish an interim report tomorrow on supercomputing, although details of the study are being kept under wraps and a full report is not expected until late next year, according to industry sources. A separate report, by the so-called Jasons group of top researchers that periodically consults with the U.S. government on technology issues, is undergoing final editing.

"Both reports are coming from a congressional mandate from the energy appropriations committee, and in many ways are a reaction to the Japan Earth Simulator, which became the world's fastest computer at a time when [the United States] was spending hundreds
of millions on clustered systems," said William Dally, a professor of computer science at Stanford University who is taking part in both studies.

Industry insiders said the Jasons report is more likely to produce concrete results than the NAS report, in part because it is more narrowly focused than the NAS study. The Jasons group was asked to determine if the Department of Energy's program for maintaining the U.S.'s nuclear stockpile was on target, particularly with regard to the Accelerated Strategic Computing Initiative for high-performance computers that handle nuclear simulations.

The Jasons group doesn't conduct program reviews as such, but "this steps about as close to that as we can," Dally said.

Industry observers said the NAS report is expected to provide less detailed recommendations than the Jasons study, for the simple reason that its scope is much broader than the Jasons' project. The NAS report will offer recommendations on the direction of supercomputing in the United States, but those recommendations won't be finalized for at least another year. The interim report due Tuesday represents an effort to influence this year's government budget cycle.

The two studies resulted, in part, from NEC Corp.'s May 2002 announcement of the Earth Simulator, a custom-built supercomputer that delivers 35.8 teraflops. That system packed five times the performance of the fastest U.S. supercomputer at that time, a Lawrence Livermore National Laboratory machine. Today, a Los Alamos National Laboratory system built by Hewlett-Packard Co. hits about 13.8 Tflops, still well behind NEC's benchmark.

"The Earth Simulator created a tremendous amount of interest in high-performance computing and was a sign the U.S. may have been slipping behind what others were doing," said Jack Dongarra, a professor of computer science at the University of Tennessee who is working on the NAS report.

The Earth Simulator was unique at the time of its launch. It used custom vector processors with custom memory and processor interconnects, providing high memory bandwidth for its 5,120 processors, which were organized into 640 nodes of eight CPUs each. By contrast, today's second-most-powerful system, the Los Alamos computer, uses 8,000 off-the-shelf processors to achieve only about a third of the performance of the NEC system, in part because it uses more commercial interconnects from Quadrics Ltd. (Bristol, England).

The reports come on the heels of recent congressional testimony warning that the United States is falling behind in supercomputing.

(return to News page)


Annual ASCI Alliance Center site visits by the ASAP-CRT representatives

August 19, 10 AM: University of Utah (contact: Dav de St. Germain)
August 20-22 (date and time not yet scheduled): University of Illinois (contact: Robert Fiedler) and University of Chicago (contacts: Carrie Eder, Anshu Dubey)
August 26, afternoon (tentative): Stanford University (contact: Massimiliano Fatica)
August 27, 3 PM: Caltech (contact: Sharon Brunett)

The ASCI Center staff who would be visiting the sites are:

LANL - Rob Cunningham is a computer scientist and technical consultant for the HPC computers. He is also the representative for the Alliance Centers on the LANL Open Production Computing Advisory Board (OPCAB). He schedules time for the Alliance Center runs on "QSC".

LLNL- Jean Shuler, Barbara Herron, Blaise Barney

Jean Shuler is the Deputy Division Leader for the Services and Development Division (and is an LC hotline consultant).

Barbara Herron is the Team Leader for the LC HPC consultants (and is an LC hotline consultant). Barbara schedules dedicated weekend jobs for the Alliances.

Blaise Barney is the ASCI trainer and teaches classes on MPI, TotalView, and other tools.

SDSC - Amit Majumdar works in the Scientific Applications Group at SDSC. He has a Ph.D. in Nuclear Engineering and Scientific Computing and has extensive knowledge of neutral particle transport algorithms using Monte Carlo methods. He is also interested in performance tuning of codes for current MPP architecture machines and in parallel linear algebra software. He teaches parallel computing classes at SDSC and at San Diego State University.

SNL - Barbara Jennings is an HPC technical lead and instructor for the Sandia High Performance Computing machines.

(return to News page)


Be There Now: Interactive Remote Visualization Hardware Team

This team has developed a prototype hardware system that allows engineers/scientists interactive access to supercomputing visualizations generated a continent away.

Karl Gass, Lyndon Pierson, Ronald Olsberg, John Eldridge, Perry Robertson, Thomas Pratt, Thomas Tarman, Authurine Breckenridge, Jason Hamlet, Larry Lee Pucket

(return to News page)


SC2002 ASCI Tri-Lab Networking Team

This team was in charge of designing and implementing the networking infrastructure for the ASCI Tri-Lab Research Booth at SC2002. Frank Bielecki, Wayne Butman, Dennis Bateman, Diane Eichert, Vicki Williams, Parks Fields

(return to News page)


Salinas Development and 2002 Gordon Bell Award Team

Sandia’s massively parallel structural dynamics simulation code, SALINAS, was one of five winners of the prestigious 2002 Gordon Bell Award, presented at the 2002 SuperComputing Conference. Kendall Pierson, David Day, Garth Reese, Manoj Bhardwaj, Kenneth Alvin, Timothy Walsh

(return to News page)


Calore and Presto Weapon Safety Codes Development and Demonstration Team

Sandia’s Calore and Presto Weapon Safety Codes Development and Demonstration Team developed, debugged, demonstrated, and delivered the new state-of-the-art Calore and Presto computer codes for enhancing weapon safety. To demonstrate the codes’ capabilities, the team used them to perform the highest-fidelity calculations ever done of the accidental drop of the W80 mod 0/1 and of the thermal behavior of the W80 mod 0/1 in a fuel fire. The codes are now in production use on the W80 Mod 3 and W76 SLEPs. The thermal analysis, the highest-fidelity weapon-in-a-fire simulation ever performed, investigated the behavior of the W80 in a pool fire: one simulation analyzed the weak link/strong link thermal race, and another analyzed the formation of a hot spot on the explosive surface. The calculations used over a million elements to accurately represent the geometry of the W80 and modeled the melting of the weapon cover, melting and recession of foam, radiation transport, and thermal conduction, consuming hundreds of thousands of CPU hours on Lawrence Livermore’s ASCI White supercomputer. This effort uncovered and corrected several bugs, resulting in a reliable code that is now being used to analyze the W80 Mod 3 and W76.

Team leader: Steven Kempka (9113)

Sandia team members: Bruce Bainbridge (9116), Barry Boughton (9116), Steven Bova (9141), Kevin Brown (9231), Kevin Copps (9143), Henry Duong (9127), Harold Edwards (9143), Micheal Glass (9141), Robert Gross (12333), Arne Gullerud (9142), Kenneth Gwinn (9126), Eugene Hertel, Jr. (9116), Roy Hogan, Jr. (9116), Joseph Jung (9127), James Koteras (9142), Randall Lober (9141), Rodney May (9126), James Stewart (9143), J. Michael McGlaun (9140), Harold Morgan (9120), Arthur Ratzel (9110)

(return to News page)


Computer Code Evaluation Group

The Technical Evaluation Panel (TEP) is a team of senior weapons personnel chartered to provide advice to the DOE Office of Security Policy on classification policy. Recognizing the complexity of classification issues related to computer codes, the TEP established a Computer Code Evaluation Group (CCEG) to provide advice in this area. This team includes experts in codes, nuclear weapons design, proliferation issues, and classification policy from Lawrence Livermore, Los Alamos, and Sandia national laboratories. Two years ago the CCEG was tasked to review classification policy related to simulation codes that have legitimate unclassified applications but which have some level of utility for simulating nuclear weapon performance. Their intensive effort focused on identification of classes of code capability to be protected, and recommendations for actions to develop new protection levels. In April 2002, their recommendations were endorsed by the TEP. The CCEG did exceptional work in evaluating the codes and their relevance to proliferation. The depth of understanding they brought to this problem and the clarity of their analysis and recommendations were outstanding, and worthy of an award for “notable performance, dedication, or contribution.”

Team Leader: Randy Christensen (LLNL)

Team members: David Brown (LLNL), Jay Brown (LANL), Bruce Green (12225), Richard Krajcik (LANL), Douglas Post (LANL), William Quirk (LLNL), Robert Thomas (9904)

(return to News page)


SIMBA Software Development Team

The SIMBA Software Development Team is receiving this award for an innovative and customer-focused approach to developing SIMBA (Software Manager and Builder for Analysts). SIMBA facilitates the building and management of complicated finite element models of weapon systems. Although only two years old and still under active development, SIMBA has been deployed for use in unclassified and classified environments at Sandia. Analysts in 8700 are using it to construct all of the abnormal structural environment simulations for the W80-3 being run on ASCI White at LLNL. It has also recently been used with models of the B83 and W76. Innovative SIMBA features, including multiple model and simulation management, complete input file generation, model archiving and sharing, rapid mesh visualization and joining, and model quality assurance checks, are saving analysts large amounts of problem setup time and reducing simulation errors due to incorrect setup.

Sandia team members: Ernest Friedman-Hill, Robert Mariano, Robert Whiteside, Andrew Rothfuss (all 8964)

(return to News page)


W76-1/MK4A JT4A-2B Normal Environment and Model Validation Test Team

The W76-1/MK4A JT4A-2B Normal Environment and Model Validation Test Team is receiving its award for exemplary effort in the successful completion of the W76-1/MK4A JT4A-2B test series under extremely tight schedule constraints. The JT4A-2B Normal Environments Test was the first high-fidelity, system-level test supporting qualification of the MK4A reentry body (RB) design under the W76-1/MK4A Life Extension Project. Vibration and shock environments specified in the W76-1 Stockpile-to-Target Sequence (STS) document were applied and controlled at the aft end of the JT4A-2B body, and response measurements were made at critical locations within the test body. The team met the test objectives, which included collecting data for developing component environment specifications, defining follow-on dynamic response test environments, confirming pre-flight ground qualification for the DASO-18 flight test, validating structural dynamics models using the ASCI code SALINAS, and evaluating differences in the dynamic response of the W76-1 system relative to the W76-0 system.

Team leader: Scott Klenke (9125)

Sandia team members: Luis Abeyta (9134), Jimmy Aldaz (2132), Thomas J. Baca (9125), Vesta Bateman (9126), Brad Boswell (2132), Frederick Brown (9126), Reyes Chavez (2132), David Clauss (9127), Ronald Coleman (9122), Neil Davie (9134), Larry Dorrell (9125), David Fordham (9813), Anthony Gomez (9125), Danny Gregory (9122), Randy Harrison (2132), Dennis Helmich (2132), Thomas Hendrickson (2132), Ronald N. Hopkins (9125), David Kelton (9125), Paul Larkin (9127), Jose Montoya (2132), Michael Nusser (9122), Christian O'Gorman (9125), Charles Olguin (9122), Harold Radloff (2132), Nathaniel Roberts (9125), Dan Scott (2132), Dale Shamblin (9134), Todd W. Simmermacher (9124), D. Gregory Tipton (9125)

External team members: James E. Freymiller (9125 contractor), John Laing (9126 contractor)

(return to News page)


Sandia’s homegrown Xyce software gains notice in world of modeling electrical circuits

Xyce ran the largest full analog circuit simulation ever in a May experiment

Sandia’s homegrown four-year-old Xyce™ software is gaining notice in the world of modeling electrical circuits.

Late last month the electric circuit simulation code ran a 14,336,000-analog-device problem, using 1,024 processors of Lawrence Livermore’s ASCI (Accelerated Strategic Computing Initiative) White IBM computer. It is believed to be the largest analog circuit simulation ever done, and it was conducted on the largest number of concurrent processors ever used for circuit simulation.

The accomplishment was part of a scaling study in support of an ASCI milestone, and the developers are convinced they can build a simulation code that can model even faster.

An interdisciplinary team from Depts. 9233, 8205, and 1734 began work on Xyce in July 1999 to develop an electrical modeling code that better meets Sandia’s needs. Currently, Sandia’s circuit simulation community relies mainly on a commercial code, PSpice (note the rhyme with Xyce), which operates sequentially and hence runs more slowly on large-scale circuit problems. Furthermore, Xyce gives Sandia the ability to simulate circuit problems of unprecedented size.

“We had our specific needs — like some of our device models have to support environmental effects [e.g., radiation], which no commercial circuit simulator supports,” says Scott Hutchinson (9233), technical lead. “Also we needed a simulation code that ran faster.”

The team determined that SPICE-based codes with enhancements could not meet Sandia’s needs for parallel computing.

So they created Xyce, a parallel code in the most general sense of the phrase: a message-passing parallel implementation that allows it to run efficiently on the widest possible range of computing platforms, including serial, shared-memory, and distributed-memory parallel machines as well as heterogeneous platforms. The team also paid careful attention to the specific nature of circuit simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows.
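
As an illustration only, the sketch below shows the general message-passing division of labor described above: each processor evaluates its own share of the devices, and the pieces are combined with a single collective operation. It is written against standard MPI and does not reflect Xyce’s actual data structures, device models, or solver interfaces; the device count and model function are made up.

```c
/* Generic sketch of spreading circuit-device evaluations across MPI ranks
 * and combining the results with a reduction.  It illustrates the
 * message-passing style of parallelism described in the article; it is not
 * Xyce's actual design.  The device count and model function are made up. */
#include <mpi.h>
#include <stdio.h>

#define NDEVICES 1000000L   /* total number of (made-up) analog devices */

/* made-up device model: current contributed by device i at some node */
static double device_current(long i) { return 1.0e-6 * (double)(i % 7); }

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* each rank owns a contiguous block of devices */
    long per_rank = NDEVICES / nprocs;
    long first = rank * per_rank;
    long last  = (rank == nprocs - 1) ? NDEVICES : first + per_rank;

    double local_sum = 0.0;
    for (long i = first; i < last; i++)
        local_sum += device_current(i);

    /* combine the per-processor contributions into a single result */
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("summed current from %ld devices: %g A\n", NDEVICES, total);
    MPI_Finalize();
    return 0;
}
```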

“Xyce is still maturing and is in the early development stage,” says Sudip Dosanjh, Manager of Computational Science Dept. 9233. His department consists of experts in algorithms, numerical methods, code development, and electrical engineering.

Version 1.0 was first released in October 2002. A second version, 1.1, was released last week. Xyce 2.0 will be released in October 2003. Since version 1.0, Scott and his team have made 68 “bug-fixes” and enhancements.

Xyce provides a modern, in-house simulation tool on which to build future enhancements targeted at the design and analysis needs of Sandia’s electrical design community.

The fast modeling of electrical circuits is useful in two ways. First, it gives engineers a leg up in designing electrical devices. They can start with an initial design created through the simulation that they can build and improve on — saving a lot of time in the design phase. It will provide a circuit-modeling tool capable of running efficiently on high-performance parallel computers using state-of-the-art algorithms. Second, modeling can be used to analyze existing circuits to determine if they are functioning correctly. Currently there are 10 Sandia users of Xyce.

Xyce is the main code of a larger project, High Performance Electrical Modeling Simulation (HPEMS), funded primarily by DOE’s ASCI Application program. Among HPEMS’ goals are to support Sandia-specific circuit models, include a consistent designer interface, produce an efficient parallel implementation on a variety of architectures, and implement improved, scalable algorithms that address SPICE convergence problems.

“Xyce has been a great collaboration between Centers 1700, 9200, and 8200,” says Steve Wix, project leader for HPEMS. “Centers 1700 and 8200 provide expertise in device models and applications while 9200 has provided expertise in numerical methods and computational science.”

To illustrate how far electrical circuit computing has come in four years, Scott compared what could be done then and what can be done now.

“In 1999 it took 233 hours to simulate 79 circuits on a Pentium multiplier,” Scott says. “Today on the faster hardware and running parallel it would take one hour.”

(return to News page)


HPEMS Project statement

Xyce™ is part of the larger High Performance Electrical Modeling and Simulation (HPEMS) project at Sandia. The project’s statement follows:

With the elimination of underground nuclear testing and declining defense budgets, science-based stockpile stewardship requires increased reliance on high performance modeling and simulation of weapon systems. Electrical systems and components are major elements in today’s weapon systems. The present electrical modeling and simulation capabilities are very limited and will be significantly expanded by using massively parallel computational resources. Our vision is to accurately characterize nuclear weapon electrical systems from first principles in all environments over a 50-year lifetime. The goal of this project is to provide the tools that will allow the use of massively parallel modeling and simulation techniques on high-performance computers in existing and future nuclear weapon electrical systems models.

Steve Wix (1734), the project leader for HPEMS, has taken a temporary assignment at DOE. Carolyn Bogdan (1734) is taking over his responsibilities for HPEMS.

Xyce team members

The following team members have all contributed to the development of Xyce™:

Becky Arnold (6536), Carolyn Bogdan (1734), Steve Brandon (8205), Todd Coffey (9214), David Day (9214), Ray Heath (1734), Mike Heroux (9214), Scott Hutchinson (9233), Rob Hoekstra (9233), Eric Keiter (9233), Ken Marx (8205), Tamara Kolda (8962), Roger Pawlowski (9233), Eric Rankin (9233), Thomas Russo (1734), Dave Shirley (9328), Smitha Sam (6536), Regina Schells (1734), Michael Williamson (6536), Lon Waters (1734), Steve Wix (1734) and Edna Wong (9233).

(return to News page)


Sandia’s ‘being there’ visual hardware enhances long-distance collaborations

Huge image data sets examined interactively yet remotely

If a surgeon in New York wants the opinion quickly of a specialist in Cairo, she probably would send medical X-ray or MRI files as e-mail attachments or make them accessible in Internet drop zones.

But jointly viewing and interacting with the images — a more effective way to discuss problems — currently takes minutes for each turn of a visualization. This could be too time-consuming to help a patient on the operating table. In less extreme cases, with medical specialists being paid by the clock, the time delays during extensive consultations could soon lead, as the late Senator Everett Dirksen put it, to real money.

Now a team of Sandia engineers has applied for a patent on interactive remote visualization hardware that will allow doctors (or engineers, or oil exploration teams, or anyone else with a need to interact with computer-generated images from remote locations) to view and manipulate images as though standing in the same room. The lag time between action and visible result is under 0.1 second even though the remote computer is thousands of miles away.

“The niche for this product is when the data set you’re trying to visualize is so large you can’t move it, and yet you want to be collaborative, to share it without sending copies to separate locations,” says Sandia team leader Lyndon Pierson (9336). “We expect our method will interest oil companies, universities, the military — anywhere people have huge quantities of visualization data to transmit and be jointly studied.”

He adds, “Significant commercial interest [in the new device] has been demonstrated by multiple companies.”

The Sandia hardware leverages without shame the advances in 3-D commercial rendering technology “in order not to re-invent the wheel,” says Perry Robertson (1751).

Graphics cards for video games have extraordinary 2-D and even 3-D rendering capabilities but exercise them only inside the cards. These images are then fed to the nearby monitor — a cozy arrangement that does not solve the problem of how to plug visuals formatted for 60 images a second into a network, says Perry.

Fortunately, the Sandia extension hardware looks electronically just like a monitor to the graphics card, says Perry. “So, to move an image across the Internet, as a first step our device grabs the image.”

The patent-pending Sandia hardware squeezes the video data flooding in at nearly 2.5 gigabits per second into a network pipe that carries less than 0.5 gigabits per second.
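
A back-of-the-envelope check of those two numbers is sketched below. The resolution, color depth, and frame rate are assumptions chosen to land near the quoted 2.5 gigabits per second, not the actual parameters of the Sandia hardware; only the 0.5 gigabit-per-second pipe comes from the article.

```c
/* Back-of-the-envelope check of the rates quoted in the article.  The
 * resolution, color depth, and frame rate below are assumptions chosen to
 * land near the quoted 2.5 Gbit/s; they are not the actual parameters of
 * the Sandia hardware. */
#include <stdio.h>

int main(void)
{
    const double width  = 1280.0;   /* assumed pixels per line */
    const double height = 1024.0;   /* assumed lines per frame */
    const double bpp    = 32.0;     /* assumed bits per pixel  */
    const double fps    = 60.0;     /* assumed frames per second */
    const double link   = 0.5e9;    /* network pipe, bits/s (from article) */

    double raw = width * height * bpp * fps;   /* raw video rate, bits/s */

    printf("raw video stream        : %.2f Gbit/s\n", raw / 1e9);
    printf("network capacity        : %.2f Gbit/s\n", link / 1e9);
    printf("compression needed      : %.1f : 1\n", raw / link);
    printf("per-frame time budget   : %.1f ms\n", 1000.0 / fps);
    return 0;
}
```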

“While compression is not hard, it’s hard to do fast. And it has to be interactive, which streaming video typically is not,” says Lyndon.

The Sandia compression minimizes data loss to ensure image fidelity. “Users need to be sure that the things they see on the screen are real, and not some artifact of image compression,” says Lyndon.

The group knew that a hardware solution was necessary to keep up with the incoming video stream. “Without it, the receiver’s frame rate would be unacceptably slow,” says Perry. “We wanted the user to experience sitting right at the supercomputer from thousands of miles away.”

“In an attempt to reduce the need for additional hardware,” says John Eldridge (9336), who wrote the software applications, “we also created software versions of the encoder and decoder units for testing purposes. However, there is only so much you can do in software at these high resolutions and frame rates.”

The custom-built apparatus has two boards — one for compression, the other for expansion. The boards use standard low-cost SDRAM memory, like that found in most PCs, for video buffers. Four reprogrammable logic chips do the main body of work. A single-board PC running Linux is used for supervisory operations. “We turned to Linux because of its networking support and ease of use,” says Ron Olsberg (9336), project engineer.

“We built this apparatus for very complex ASCI visualizations. If we could have bought it off the shelf, we would have,” says Perry.

Funded by ASCI's [Accelerated Strategic Computing Initiative] Problem-Solving Environment, a pair of boards costs about $25,000 but is expected to cost much less when commercially available. A successful demonstration took place in late October between Chicago and the Amsterdam Technology Center in the Netherlands.

A second demonstration occurred between Sandia locations in Albuquerque and Livermore and the show floor of the Supercomputing 2002 convention in Baltimore in November. “Now that this technology is out there, we expect other applications will begin to take advantage of it,” says Lyndon. “Their experiences and improvements will eventually feed back into US military capability.”

In addition to Perry, John, Lyndon, and Ron, the design team also included Karl Gass (1751) and Tom Pratt and Edward Witzke (both 9336).

(return to News page)


Sandia’s Salinas code shares Gordon Bell computing award

Making it to the finals of this year’s Gordon Bell Awards was a great achievement for the creators of Sandia’s supercomputer program Salinas. Only 38 entries worldwide had been deemed acceptable for judging by a committee led by Caltech’s Thomas Sterling, and not only had Salinas been accepted but by Nov. 20 only five competitors to the Sandia-led program remained. Naturally, the Sandia team wanted to go all the way. Final judgment would be announced that day in the Baltimore Convention Center as the grand finale to Supercomputing 2002.

The contest, as Mike McGlaun (9140) characterized it, was “the Superbowl of supercomputing.”

The Salinas program had clocked in at 1.16 teraflops. The practical, widely used program simulates the stresses on aircraft carriers and buildings, as well as on reentry vehicles and certain aspects of the nuclear stockpile. It was the first fully ASCI program to be a Gordon Bell Award contender. And, as team member Manoj Bhardwaj (9100) pointed out, the program ran at 22 to 25 percent of the peak speed of the machine it ran on, which had been ASCI White. Run Salinas on a faster computer, he said, and it was possible the workhorse program would run still faster.

Still, its four Japanese competitors on the Earth Simulator — far and away the world’s fastest computer — had run simulation programs at 29.5, 26.58, 16.4, and 14.9 teraflops, and these were actual simulations, respectively, of planetesimals in the region between Uranus and Neptune, global atmospheric conditions, turbulence, and 3D fluid modeling with direct relevance to nuclear fusion.

The sixth competitor, the University of Illinois at Urbana-Champaign, made no speed claims but had achieved biomolecular simulations using thousands of processors — the “in” thing for today’s times.

So things looked pretty bleak for the Salinas nine that day. They were, in addition to Manoj: Kendall Pierson, Garth Reese (team leader, who began the code), Tim Walsh, David Day, Ken Alvin, and James Peery (who managed development of the code before migrating to Los Alamos National Laboratory), all in Center 9100, collaborating with Charbel Farhat and Michel Lesoinne, both of the University of Colorado.

Light beams from a purple and a violet floodlight heightened the tension by casting lines of color on the folds of the black curtain behind Sterling as, on the stage of the convention’s huge ballroom, he began his final approach to naming the winners of the award.

There was no visible nervousness among the Sandians in the audience as Sterling characterized the contest as having had “a watershed year” and said, “In any other year, any of the finalists would have won the first-place award.”

But when the announcements had been all made and the tension lifted, three Japanese contestants (minus the planetesimal entry) and the University of Illinois were on first and second base, with Sandia hugging third. All had won in different categories. Sandia was listed as “special award.”

A first for ‘true engineering code’

Said Engineering Sciences Center Director Tom Bickel (9100), “This is the first time a true engineering code has won the Gordon Bell award.”

“Salinas is the first full-featured production application with an ASCI pedigree,” said Mike Vahle (9900). “Its performance and efficiency on the ASCI platforms is impressive.”

Tom praised the other winning entries but described the gap between Sandia’s Salinas entry and the Japanese programs as “the difference between doing one step in the solution very fast versus solving the entire problem” — that is, the Sandia program was far more extensive. “Salinas is already having impact on Sandia’s nuclear weapon mission.”

Kendall estimated the number of lines of code in Salinas at 140,000, while each Japanese code was perhaps 15,000.

Salinas — a massively parallel structural dynamics code — simulates the response of a structure under various loads and also predicts the natural frequencies of a structure under varying stress.

Said Manoj, “This tool can help aid the designer in creating real-world structures that don’t fail under the environments in which they must live.” Most commercial codes can’t do that; if they do, they take days, he said. The Salinas program is capable of solving 100 million equations simultaneously. Compare this, he suggests, with solving for three unknowns in high school math class.
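
For the curious, the sketch below is the high-school end of that comparison: naive Gaussian elimination on a three-unknown system with made-up coefficients. It is only an illustration of scale; Salinas solves enormously larger sparse systems with far more sophisticated parallel algorithms.

```c
/* The article contrasts Salinas' 100 million simultaneous equations with
 * "solving for three unknowns in high school math class."  This is that
 * high-school end of the scale: naive Gaussian elimination on a 3x3 system
 * with arbitrary coefficients. */
#include <stdio.h>

#define N 3

int main(void)
{
    double a[N][N] = { {  2.0,  1.0, -1.0 },
                       { -3.0, -1.0,  2.0 },
                       { -2.0,  1.0,  2.0 } };
    double b[N] = { 8.0, -11.0, -3.0 };
    double x[N];

    /* forward elimination (no pivoting; fine for this well-behaved example) */
    for (int k = 0; k < N - 1; k++)
        for (int i = k + 1; i < N; i++) {
            double f = a[i][k] / a[k][k];
            for (int j = k; j < N; j++)
                a[i][j] -= f * a[k][j];
            b[i] -= f * b[k];
        }

    /* back substitution */
    for (int i = N - 1; i >= 0; i--) {
        double s = b[i];
        for (int j = i + 1; j < N; j++)
            s -= a[i][j] * x[j];
        x[i] = s / a[i][i];
    }

    for (int i = 0; i < N; i++)
        printf("x[%d] = %g\n", i, x[i]);   /* expected: 2, 3, -1 */
    return 0;
}
```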

The $5,000 prize money provided by Gordon Bell was split evenly among the five winners. Bell is a still-active scientist, currently at Microsoft, who, as his web site puts it, “has long evangelized for scalable processors.”

Tom says he intends to put the money in the ASCI program account and use it to make plaques and provide small awards for each member of the winning team.

This is the third time Sandia has won the Gordon Bell award. Sandia first won the award in 1988. Winners were Sandians Robert Benner, John Gustafson, and Gary Montry for achieving unprecedented speedups in parallel processing. The amount Gordon provided at that time, he said in a communication to the Lab News, was $2,500. A team led by David Womble won the award in 1994 for an algorithm that aided oil exploration in pockets deep underground. Other members of that team were David Greenberg, Stephen Wheat, Robert Benner (all Sandians), Marc Ingber of the University of New Mexico, and Greg Henry and Satya Gupta of Intel.

In 1998, Sandian Mark Sears took second prize with Ken Stanley of UC Berkeley and Greg Henry of Intel.

(return to News page)

 
