Drake Hotel, Chicago, IL USA
July 28-31, 1998
The IEEE International Symposium on High Performance Distributed Computing (HPDC) provides a forum for presenting the latest research findings that unify parallel and distributed computing. In HPDC environments, parallel or distributed computing techniques are applied to the solution of computationally intensive applications across networks of computers.
Sponsors:
In Cooperation with:
Full-Day Tutorial (9:00 a.m. - 4:30 p.m.)
Tutorial 1: How to Build a Beowulf: Assembling, Programming, and Using a Clustered PC - Do-it-yourself Supercomputer
Thomas Sterling, California Institute of Technology
Morning Half-Day Tutorials (8:30 a.m. - 12:00 p.m.)
Tutorial 2: Collaborative Visualization in Distributed Virtual Environments
Jason Leigh, NCSA and Electronic Visualization Laboratory
Andrew E. Johnson, University of Illinois at Chicago
Tutorial 3: High Performance Computing with Legion
Andrew Grimshaw, University of Virginia
Afternoon Half-Day Tutorials
Tutorial 4: Introduction to Performance Issues in Using MPI for Communication and I/O
William Gropp, Rusty Lusk, Rajeev Thakur, Argonne National Laboratory
Tutorial 5: The Globus Grid Programming Toolkit
Gregor von Laszewski, Argonne National Laboratory
Steven Fitzgerald, Information Sciences Institute of the University of Southern California
8:00 a.m. - 10:00 a.m. | Registration
8:30 a.m. - 10:00 a.m. | KEYNOTE SPEECH: How Distributed Computing Changes Science
Larry Smarr, National Center for Supercomputing Applications
SESSION 1: COMMUNICATIONS, Chair: Doug Schmidt, WUStL
SESSION 2: APPLICATIONS, Chair: Adam Ferrari, University of Virginia
1:30 p.m. - 2:30 p.m. | PANEL: BUILDING THE GRID
Moderator/Organizer: Ian Foster, Argonne National Laboratory
Panelists: To be Announced.
This panel provides an opportunity for discussion of issues relating to the construction of a national-scale "grid" providing efficient and uniform access to high-end resources. The panelists are all participants in the July 1998 "Grids 98" workshop at which these issues were discussed.
3:00 p.m. - 5:00 p.m. | CONCURRENT SESSIONS (3, 4)
9:00 a.m. - 10:00 a.m. | KEYNOTE SPEECH: Peering into the Future
Rick Rashid, Microsoft
SESSION 5: DISTRIBUTED COMPUTING, Chair: Fran Berman, University of California, San Diego
SESSION 6: I/O, Chair: Bill Gropp, Argonne National Laboratory
1:30 p.m. - 2:30 p.m. | PANEL: HPDC EXPERIENCES
Moderator/Organizer: David Culler, University of California, Berkeley
Panelists: To be Announced.
This panel will bring together people with practical experience in building large-scale high-performance distributed computing systems and applications. Discussion will focus on the lessons learned.
3:00 p.m. - 4:30 p.m. | CONCURRENT SESSIONS (7, 8)
8:30 a.m. - 10:00 a.m. | CONCURRENT SESSIONS (9, 10)
SESSION 11: APPLICATIONS AND SYSTEMS, Chair: To Be Announced
Wednesday Keynote Speech: How Distributed Computing Changes Science
Dr. Smarr has been one of the pioneers in creating a national information
infrastructure to support academic research, governmental functions, and
industrial competitiveness. In 1985, Dr. Smarr became the Director of the
National Center for Supercomputing Applications (NCSA) at the University of
Illinois at Urbana-Champaign (UIUC). In 1997, he became Director of the
National Computational Science Alliance. Dr. Smarr has conducted
observational, theoretical, and computational astrophysics research for
fifteen years. In 1995 he was elected to the National Academy of
Engineering.
Thursday Keynote Speech: Peering into the Future
TUTORIAL 1: How to Build a Beowulf: Assembling, Programming, and Using a Clustered PC - Do-it-yourself Supercomputer
Thomas Sterling, California Institute of Technology
It has recently become possible to assemble a collection of commodity,
mass-market hardware components and freely available software packages in
a day, and to be executing real-world applications by dinner time,
achieving sustained performance greater than 1 Gflops at a total cost of
around $50,000. Furthermore, these numbers improve on an almost daily
basis. This full-day tutorial will cover all aspects of system assembly,
integration, software installation, programming, application development,
system management, and benchmarking.
Demonstrations with actual hardware and software components will be
conducted throughout the tutorial. Participants will be encouraged to
closely examine and manipulate elements of a Beowulf at various stages of
integration with strong Q&A interaction between presenters and attendees.
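The price/performance figure quoted above reduces to simple arithmetic (a back-of-the-envelope sketch assuming exactly the quoted numbers of 1 Gflops sustained and $50,000 total cost):

```python
# Back-of-the-envelope price/performance for the Beowulf configuration
# described above (assumed figures: 1 Gflops sustained, $50,000 total).
total_cost_usd = 50_000
sustained_flops = 1e9  # 1 Gflops sustained

dollars_per_mflops = total_cost_usd / (sustained_flops / 1e6)
print(dollars_per_mflops)  # 50.0 dollars per sustained Mflops
```

At $50 per sustained Mflops, the do-it-yourself cluster undercut contemporary supercomputers by a wide margin, which is the economic argument the tutorial builds on.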
Dr. Thomas Sterling has been engaged in research related to parallel
computer architecture, system software, and evaluation for more than
a decade. He was a key contributor to the design, implementation, and testing
of several experimental parallel architectures.
The focus of Dr. Sterling's research has been on the modeling and evaluation
of performance factors determining scalability of high performance computing
systems. Upon completion of his Ph.D. as a Hertz Fellow from MIT in 1984,
Dr. Sterling served as a research scientist at Harris Corporation's Advanced
Technology Department, and later with the systems group of the IDA
Supercomputing Research Center. In 1992, Dr. Sterling joined the USRA
Center for Excellence in Space Data and Information Sciences to support
the NASA HPCC earth and space sciences project at the Goddard Space Flight
Center. Dr. Sterling was Adjunct Associate Professor at the University of
Maryland College Park, where he lectured on computer architecture. He holds
six patents, is the co-author of two books and has published dozens of papers
in the field of parallel computing. Dr. Thomas Sterling is currently Senior
Staff Scientist, High Performance Computing Systems Group, Jet Propulsion
Laboratory; and Visiting Associate, Center for Advanced Computing Research,
California Institute of Technology.
Tele-Immersion is the unification of collaborative virtual reality (VR) and
video conferencing in the context of significant computation and data-mining.
The goal of Tele-Immersion is to use the latest in visualization, networking
and database technology to allow domain scientists to collaboratively steer
scientific computations, query enormous raw and derived data-sets, and
visualize their results in a seamless virtual environment.
This course will provide:
This course is targeted at: domain scientists and engineers who are
interested in learning how to apply collaborative virtual reality technology
in their areas of research; and at HPDC application developers who are
interested in creating and deploying such technologies.
Jason Leigh is a senior scientist with a joint appointment at the National
Center for Supercomputing Applications and the Electronic Visualization
Laboratory (EVL) at the University of Illinois at Chicago, where his main
research focus is collaborative virtual reality (or Tele-Immersion).
Jason's background in human-factors, interactive computer graphics, networking,
databases, and art allows him to approach the development of techniques and
tools for Tele-Immersion from multiple perspectives. Jason is assisting
General Motors and Hughes Research in applying collaborative technologies to
their VisualEyes VR vehicle design system. He is working as a member of the
NICE project to develop VR collaborative educational environments for children.
Finally Jason is leading the research and development of a software
architecture (called CAVERNsoft) that integrates networking and databases
in a manner that is optimized for Tele-Immersion.
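The networking-and-database integration that CAVERNsoft targets can be pictured, in much-reduced form, as a keyed shared-state repository: updates stream to connected peers while the stored records let a late joiner synchronize. The sketch below illustrates only the general pattern; it is not CAVERNsoft's actual interface, and all names are invented:

```python
# A reduced sketch of the pattern CAVERNsoft-style systems implement:
# shared virtual-environment state is kept as keyed records, so updates
# can be streamed to connected peers while the same records let a late
# joiner fetch the current state. Not CAVERNsoft's actual interface.
class SharedStateChannel:
    def __init__(self):
        self._db = {}           # persistent view: latest value per key
        self._subscribers = []  # connected collaborators

    def subscribe(self, callback):
        self._subscribers.append(callback)
        return dict(self._db)   # late joiner synchronizes from the "database"

    def put(self, key, value):
        self._db[key] = value
        for notify in self._subscribers:
            notify(key, value)  # stream the update to every subscriber

channel = SharedStateChannel()
channel.put("avatar/1/pos", (0.0, 1.5, 2.0))   # state before anyone joins
seen = []
snapshot = channel.subscribe(lambda k, v: seen.append((k, v)))
channel.put("avatar/1/pos", (0.5, 1.5, 2.0))   # live update after joining
print(snapshot["avatar/1/pos"], seen)
```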
Jason Leigh Email: jleigh@eecs.uic.edu
Andrew E. Johnson, PhD, is a faculty member of the Electronic Visualization
Laboratory and an Assistant Professor in the Electrical Engineering and
Computer Science Department at the University of Illinois at Chicago. His
current research interests focus on tele-immersion and collaboration in
immersive virtual environments, including work on the CAVERNsoft project. As
a continuation of his work on the NICE project, a collaborative virtual
reality learning environment for young children, he is currently working on
an NSF-funded study of deep learning and visualization technologies,
investigating how collaborative virtual reality can be used to help teach
concepts that are counter-intuitive to a learner's current mental model.
Developed at the University of Virginia, Legion is an integrated software
system for distributed parallel computation. While fully supporting existing
codes written in MPI and PVM, Legion provides features and services that allow
users to take advantage of much larger, more complex resource pools. With
Legion, for example, a user can easily run a computation on a supercomputer
at a national center while dynamically visualizing the results on a local
machine. As another example, Legion makes it trivial to schedule and run
a large parameter space study on several workstation farms simultaneously.
Legion permits computational scientists to use cycles wherever they are,
allowing bigger jobs to run in shorter times through higher degrees of
parallelization.
Key capabilities include the following:
These features also make Legion attractive to administrators looking for
ways to increase and simplify the use of shared high-performance machines.
The Legion implementation emphasizes extensibility, and multiple policies for
resource use can be embedded in a single Legion system that spans multiple
resources or even administrative domains.
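The parameter-space scenario mentioned above can be sketched generically: each point in a parameter grid is an independent task that a metasystem can farm out to whatever resources are available. The following Python sketch uses a local process pool as a stand-in for remote scheduling across workstation farms; it is not Legion's actual interface, and `simulate` is a placeholder kernel:

```python
# A minimal sketch (not Legion's actual API) of the parameter-space
# pattern described above: each point in a parameter grid becomes an
# independent task that a metasystem like Legion could schedule across
# several workstation farms. A local process pool stands in for the
# remote scheduler here.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(params):
    """Hypothetical application kernel: one run per parameter point."""
    alpha, beta = params
    return alpha * beta  # placeholder for a real computation

if __name__ == "__main__":
    grid = list(product([0.1, 0.2, 0.3], [10, 20]))  # 6 independent runs
    with ProcessPoolExecutor() as pool:
        results = dict(zip(grid, pool.map(simulate, grid)))
    print(len(results))  # prints 6, one result per parameter point
```

Because each run is independent, the scheduler is free to place tasks wherever cycles exist, which is exactly the property Legion exploits.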
This tutorial will provide background on the Legion system and teach how to
run existing parallel codes within the Legion environment. The target
audience is supercomputing experts who help scientists and other users get
their codes parallelized and running on high performance systems.
Andrew S. Grimshaw is an Associate Professor of Computer Science and
Director of the Institute of Parallel Computation at the University
of Virginia. His research interests include high-performance parallel
computing, heterogeneous parallel computing, compilers for parallel systems,
operating systems, and high-performance parallel I/O. He is the chief designer
and architect of Mentat and Legion. Grimshaw received his M.S. and Ph.D. from
the University of Illinois at Urbana-Champaign in 1986 and 1988 respectively.
Andrew Grimshaw
(804) 982-2204
MPI is now widely accepted as a standard for message-passing parallel
computing libraries. MPI-2, released in June 1997, adds additional
capabilities to MPI, including remote-memory access and parallel I/O.
The richness of MPI means there are many ways to express an operation, such as
exchanging messages for a grid-based computation or writing out a distributed
array to a parallel file system. Choosing the best way requires both
an understanding of the MPI approach to high performance and the capabilities
of particular implementations of MPI.
This tutorial will discuss performance-critical issues in message-passing
programs, explain how to examine the performance of an application using
MPI-oriented tools, and show how the features of MPI can be used to attain
peak application performance. It will be assumed that attendees have an
understanding of the basic elements of the MPI specification. Experience
with message-passing parallel applications will be helpful but not required.
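One recurring performance issue the tutorial addresses is the fixed startup cost that every message pays. A toy latency/bandwidth model (illustrative numbers, not measurements of any particular system) shows why aggregating many small messages, for example by packing buffers or using MPI derived datatypes, can dramatically reduce communication time:

```python
# Toy linear communication-cost model: time = latency + bytes/bandwidth
# per message. The constants are illustrative assumptions, not
# measurements of any real network or MPI implementation.
LATENCY_S = 50e-6        # assumed per-message startup cost: 50 microseconds
BANDWIDTH_BPS = 100e6    # assumed sustained bandwidth: 100 MB/s

def transfer_time(n_messages, bytes_each):
    """Total time to send n_messages of bytes_each under the model."""
    return n_messages * (LATENCY_S + bytes_each / BANDWIDTH_BPS)

many_small = transfer_time(1000, 8)     # 1000 eight-byte sends
one_large = transfer_time(1, 1000 * 8)  # one aggregated 8 KB send
print(many_small, one_large)  # aggregation wins by a large factor
```

Under these assumptions the 1000 small sends are dominated by startup latency, while the single aggregated send amortizes it; this is the kind of trade-off that MPI derived datatypes and collective I/O are designed to exploit.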
William Gropp is a senior computer scientist in the Mathematics and Computer
Science Division at Argonne National Laboratory. After receiving his Ph.D.
in Computer Science from Stanford University in 1982, he held the positions
of Assistant (1982-1988) and Associate (1988-1990) Professor in the Computer
Science Department of Yale University. In 1990, he joined the Numerical
Analysis group at Argonne. His research interests are in parallel computing,
software for scientific computing, and numerical methods for partial
differential equations. He is a co-author of "Using MPI: Portable Parallel
Programming with the Message-Passing Interface" and is a chapter author in
the MPI-2 Forum. His current projects include the design and implementation
of MPICH, a portable implementation of the MPI Message-Passing Standard, the
design and implementation of PETSc, a parallel, numerical library for PDEs,
and research into programming models for parallel architectures.
Ewing ("Rusty") Lusk is a senior computer scientist in the Mathematics and
Computer Science Division at Argonne National Laboratory. After receiving
his Ph.D. in mathematics at the University of Maryland in 1970, he served
first in the Mathematics Department and later in the Computer Science
Department at Northern Illinois University before joining Argonne National
Laboratory in 1982. He has been involved in the MPI standardization effort
both as an organizer of the MPI-2 Forum and as a designer and implementor of
the MPICH portable implementation of the MPI Standard. His current projects
include design and implementation of the MPI-2 extensions to MPICH and
research into programming models for parallel architectures. Past interests
include automated theorem-proving, logic programming, and parallel computing.
He is a co-author of several books in automated reasoning and parallel
computing, including "Using MPI: Portable Parallel Programming with the
Message-Passing Interface". He is the author of more than eighty research
articles in mathematics, automated deduction, and parallel computing.
Rajeev Thakur is an assistant computer scientist in the Mathematics
and Computer Science Division at Argonne National Laboratory. He received
a Ph.D. in Computer Engineering from Syracuse University in 1995. He is
actively engaged in parallel I/O research, particularly in implementing
portable parallel I/O interfaces and I/O characterization of parallel
applications. He participated in the MPI-2 Forum to define a standard,
portable interface for parallel I/O (MPI-IO). He is currently developing a
high-performance, portable MPI-IO implementation called ROMIO.
Rajeev Thakur
This tutorial is an introduction to the capabilities of the Globus grid
programming toolkit. Computational grids promise to enable a wide range
of emerging application concepts such as remote computing, distributed
supercomputing, tele-immersion, smart instruments, and data mining. However,
the development and use of such applications is in practice difficult and
time consuming, because of the need to deal with complex and highly
heterogeneous systems. The Globus grid programming toolkit is designed to
help application developers and tool builders overcome these obstacles to
the construction of "grid-enabled" scientific and engineering applications.
It does this by providing a set of standard services for authentication,
resource location, resource allocation, configuration, communication, file
access, fault detection, and executable management. These services can be
incorporated into applications and/or programming tools in a "mix-and-match"
fashion to provide access to needed capabilities.
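As a schematic illustration of the "mix-and-match" composition described above, the sketch below shows an application pulling only the services it needs from a toolkit of independent components. This is plain Python with invented placeholder names, not the Globus API:

```python
# Schematic only (not the Globus API): a toolkit exposes independent
# grid services, and each application composes just the subset it needs.
# All class and service names here are illustrative placeholders.
class AuthService:
    def authenticate(self, user):
        return f"credential-for-{user}"   # stand-in for real authentication

class ResourceLocator:
    def locate(self, requirement):
        return ["hostA", "hostB"]          # stand-in for resource discovery

TOOLKIT = {"auth": AuthService(), "locator": ResourceLocator()}

def build_app(service_names):
    """Compose an application from only the services it requires."""
    return {name: TOOLKIT[name] for name in service_names}

app = build_app(["auth"])  # this application needs only authentication
print(sorted(app))         # prints ['auth']
```

The point of the pattern is that services stay orthogonal: a tool builder can adopt authentication without also committing to the resource-location or communication layers.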
Our goal in this tutorial is both to introduce the capabilities of the
Globus toolkit and to help attendees apply Globus services to their own
applications. Hence, we will structure the tutorial as a combination of
Globus system description and application examples.
Dr. Gregor von Laszewski obtained his Ph.D. in Computer Science from
Syracuse University and is currently a researcher in the Mathematics
and Computer Science Division at Argonne National Laboratory. He has been
involved in the development of many Globus components and applications, and
leads an effort to apply Globus services to the real-time analysis of data
from scientific instruments.
Gregor von Laszewski
Dr. Steven Fitzgerald received his D.Sc. in Computer Science from the
University of Massachusetts Lowell. He is a researcher at the Information
Sciences Institute of the University of Southern California and holds a
faculty position at California State University Northridge.
Steve's involvement in the Globus project has focused on Globus's information
services, and the creation and deployment of the GUSTO testbed.
Larry Smarr
National Center for Supercomputing Applications
Dr. Richard F. Rashid
Vice President, Research
Microsoft Corporation
The relentless pace of progress in hardware and software technology
will dramatically change computing over the next 10 years. Software
technologies once thought "esoteric" such as natural language processing,
Bayesian reasoning, computer vision and speech will dramatically affect not
only the way humans and computers interact, but also the way humans interact
with each other. Moreover, the fundamental relationships between software
and hardware will significantly change as software objects become more
dynamic and operating systems increase the level of abstraction provided to
developers. This talk addresses these coming changes and discusses how they
will affect the uses of computing and the kinds of software our industry
will be developing in the future.
Dr. Rashid joined Microsoft in the fall of 1991 as its first
Director of Research and now holds the title of Vice President of Research.
He received Bachelor's degrees in Mathematics and Comparative Literature
from Stanford University in 1974 and his Ph.D. in Computer Science
from the University of Rochester in 1980. He was a Professor at Carnegie
Mellon University for 12 years and was best known during that time for the
creation of the Mach operating system. He has published papers in a number
of areas of computer science including computer vision, AI, programming
languages, data compression, networking, distributed and parallel operating
system design. He is well known for his interest in computer gaming and as a
graduate student implemented (with Gene Ball) Alto Trek -- the first
real-time network space game for the Xerox Alto. Among other odd facts, he
is the inventor of the term NUMA (non-uniform memory architecture), is often
credited with the term "microkernel", is an avid fan of Babylon 5 and
continues to write nearly 30,000 lines of code a year on various projects.
TUTORIAL 1: How to Build a Beowulf: Assembling, Programming, and Using a Clustered PC
Thomas Sterling,
California Institute of Technology
Full-day Tutorial - July 28, 1998
Caltech/JPL
Mail Code 158-79
1200 E. California Blvd.
Pasadena, CA 91125
Phone: (626)-395-3901
Fax: (626)-584-5917
Email: tron@cacr.caltech.edu
TUTORIAL 2: Collaborative Visualization in Distributed Virtual Environments
Jason Leigh,
NCSA and Electronic Visualization Laboratory
Andrew E. Johnson, University of Illinois at Chicago
Half-day Tutorial - Morning July 28, 1998
Electronic Visualization Lab (M/C 154)
University of Illinois at Chicago
851 S. Morgan St. Room 1120 SEO
Chicago, IL 60607-7053
Phone: (312) 996-3002
Fax: (312) 413-7585
TUTORIAL 3: High Performance Computing with Legion
Andrew Grimshaw
University of Virginia
Half-day Tutorial - Morning July 28, 1998
Department of Computer Science
University of Virginia
Charlottesville, VA 22903
fax: (804) 982-2214
grimshaw@Virginia.edu
TUTORIAL 4: Introduction to Performance Issues in Using MPI
for Communication and I/O
William Gropp, Rusty Lusk, Rajeev Thakur
Argonne National Laboratory
Half-day Tutorial - Afternoon July 28, 1998
Argonne National Laboratory
Building 221, Room C-247
Argonne, IL, 60439
Email: thakur@mcs.anl.gov
Phone: (630) 252-1682
Fax: (630) 252-5986
TUTORIAL 5: The Globus Grid Programming Toolkit
Gregor von Laszewski, Argonne National Laboratory
Steven Fitzgerald, Information Sciences Institute of the
University of Southern California
Half-day Tutorial - Afternoon July 28, 1998
Argonne National Laboratory
Building 221, Room A
Argonne, IL, 60439
Email: gregor@mcs.anl.gov
Phone: (630) 252-0472
Fax: (630) 252-5986