HPDC-7 FINAL PROGRAM

SEVENTH IEEE INTERNATIONAL SYMPOSIUM ON
HIGH-PERFORMANCE DISTRIBUTED COMPUTING (HPDC-7)

Drake Hotel, Chicago, IL USA
July 28-31, 1998

The IEEE International Symposium on High Performance Distributed Computing (HPDC) provides a forum for presenting the latest research findings that unify parallel and distributed computing. In HPDC environments, parallel or distributed computing techniques are applied to the solution of computationally intensive applications across networks of computers.

Sponsors:

In Cooperation with:


Pre-Symposium Tutorials
Tuesday, July 28, 1998

Full Day Tutorial: (9:00 a.m. - 4:30 p.m.)

Tutorial 1: How to build a Beowulf: Assembling, Programming, and Using a Clustered PC - Do-it-yourself Supercomputer
Thomas Sterling, California Institute of Technology

Morning Half-Day Tutorials (8:30 a.m. - 12:00 p.m.)

Tutorial 2: Collaborative Visualization in Distributed Virtual Environments
Jason Leigh, NCSA and Electronic Visualization Laboratory
Andrew E. Johnson, University of Illinois at Chicago
Tutorial 3: High Performance Computing with Legion
Andrew Grimshaw, University of Virginia

Afternoon Half-Day Tutorials (1:30 p.m. - 5:00 p.m.)

Tutorial 4: Introduction to Performance Issues in Using MPI for Communication and I/O
William Gropp, Rusty Lusk, Rajeev Thakur, Argonne National Laboratory
Tutorial 5: The Globus Grid Programming Toolkit
Gregor von Laszewski, Argonne National Laboratory
Steven Fitzgerald, Information Sciences Institute of the University of Southern California

7:00 p.m. - 9:00 p.m. Evening Reception and Registration


Wednesday, July 29, 1998

8:00 a.m. - 10:00 a.m. Registration

8:30 a.m. - 10:00 a.m. KEYNOTE SPEECH: How Distributed Computing Changes Science
Larry Smarr, National Center for Supercomputing Applications
10:00 a.m. - 10:30 a.m. BREAK

10:30 a.m. - 12:00 p.m. CONCURRENT SESSIONS (1, 2)

SESSION 1: COMMUNICATIONS, Chair: Doug Schmidt, WUStL

  1. Adaptive Utilization of Communication and Computational Resources in High Performance Distributed Systems: The EMOP Approach
    Shridhar Diwan, Dennis Gannon, Indiana University
  2. Efficient Layering for High Speed Communication: Fast Messages 2.x
    M. Lauria, S. Pakin, Andrew Chien, University of Illinois, Urbana-Champaign
  3. The Software Architecture of a Distributed Quality of Session Control Layer
    Alaa Youssef, Hussein Abdel-Wahab, Kurt Maly, Old Dominion University

SESSION 2: APPLICATIONS, Chair: Adam Ferrari, University of Virginia

  1. TeleMed: Wide-Area, Secure, Collaborative Object Computing with Java and CORBA for Healthcare
    David W. Forslund, James E. George, Eugene M. Garilov, Los Alamos National Laboratory
  2. Optimizing Protocol Parameters to Large Scale PC Cluster and Evaluation of Its Effectiveness with Parallel Data Mining
    Masato Oguchi, Takahiko Shintani, Takayuki Tamura, Masaru Kitsuregawa, University of Tokyo
  3. Computing Twin Primes and Brun's Constant: A Distributed Approach
    Patrick Fry, Jeffrey Nesheiwat, and Boleslaw Szymanski, Rensselaer Polytechnic Institute

12:00 noon - 1:30 p.m. - LUNCH

1:30 p.m. - 2:30 p.m. PANEL: BUILDING THE GRID
Moderator/Organizer: Ian Foster, Argonne National Laboratory
Panelists: To be Announced.

This panel provides an opportunity for discussion of issues relating to the construction of a national-scale "grid" providing efficient and uniform access to high-end resources. The panelists are all participants in the July 1998 "Grids 98" workshop at which these issues were discussed.


2:30 p.m. - 3:00 p.m. Break

3:00 p.m. - 5:00 p.m. CONCURRENT SESSIONS (3, 4)
SESSION 3: METACOMPUTING, Chair: Cliff Neuman, University of Southern California

  1. WebOS: Operating System Services for Wide Area Applications
    Amin Vahdat, Thomas Anderson, Michael Dahlin, E. Belani, D. Culler, P. Eastham, and C. Yoshikawa, University of California, Berkeley, University of Washington
  2. The DOGMA Approach to High-Utilization Supercomputing
    Glenn Judd, Mark Clement, Quinn Snell, Brigham Young University
  3. On the Design of a Demand-Based Network-Computing System: The Purdue University Network Computing Hub
    Nirav H. Kapadia and Jose' A.B. Fortes, Purdue University
  4. Application Experiences with the Globus Toolkit
    Sharon Brunett, Karl Czajkowski, Ian Foster, Andy Johnson, Carl Kesselman, Jason Leigh, and Steven Tuecke, Argonne National Lab., Caltech, University of Illinois, Chicago, University of Southern California
SESSION 4: SYSTEMS, Chair: Rob Armstrong, Sandia National Laboratory

  1. Strings: A High Performance Distributed Shared Memory for Symmetrical Multiprocessor Clusters
    Sumit Roy, Vipin Chaudhary, Wayne State University
  2. Two-Stage Transaction Processing in Client-Server DBMSs
    Vinay Kanitkar, Alex Delis, Polytechnic University
  3. Hectiling: An Integration of Fine and Coarse-Grained Load-Balancing Strategies
    Samuel H. Russ, Ioana Banicescu, Sheikh Ghafoor, Bharathi Janapareddi, Jonathan Robinson, Rong Lu, Mississippi State University
  4. Otter: Bridging the Gap between MATLAB and ScaLAPACK
    Michael J. Quinn, Alexey Malishevsky, Nagajagade Seelam, Oregon State University

5:00 p.m. - 5:30 p.m. Break


5:30 p.m. - 7:30 p.m. Posters, Demos and Reception

Thursday, July 30, 1998

9:00 a.m. - 10:00 a.m. KEYNOTE SPEECH: Peering into the Future
Rick Rashid, Microsoft
10:00 a.m. - 10:30 a.m. BREAK

10:30 a.m. - 12:00 p.m. CONCURRENT SESSIONS (5, 6)

SESSION 5: DISTRIBUTED COMPUTING, Chair: Fran Berman, University of California, San Diego

  1. On the Effectiveness of Distributed Checkpoint Algorithms for Domino-Free Recovery
    Franco Zambonelli, University of Modena
  2. Authorization for Metacomputing Applications
    Grig Gheorghiu, T. Ryutov, Clifford Neuman, University of Southern California
  3. Matchmaking: Distributed Resource Management for High Throughput Computing
    Rajesh Raman, Miron Livny, and Marvin Solomon, University of Wisconsin

SESSION 6: I/O, Chair: Bill Gropp, Argonne National Laboratory

  1. Distant I/O: One-Sided Access to Secondary Storage on Remote Processors
    J. Nieplocha, I. Foster, H. Dachsel, Argonne National Laboratory, Pacific Northwest National Laboratory
  2. Automatic Parallel I/O Performance Optimization Using Genetic Algorithms
    Ying Chen, Marianne Winslett, Y. Cho, S. Kuo, University of Illinois, Urbana-Champaign
  3. Parallel I/O Performance of Fine Grained Data Distributions
    Yong Cho, Marianne Winslett, Ying Chen, Szu-wen Kuo, University of Illinois, Urbana-Champaign

12:00 noon - 1:30 p.m. - LUNCH

1:30 p.m. - 2:30 p.m. PANEL: HPDC EXPERIENCES
Moderator/Organizer: David Culler, University of California, Berkeley
Panelists: To be Announced.

This panel will bring together people with practical experience in building large-scale high-performance distributed computing systems and applications. Discussion will focus on the lessons learned.


2:30 p.m. - 3:00 p.m. Break

3:00 p.m. - 4:30 p.m. CONCURRENT SESSIONS (7, 8)
SESSION 7: ADAPTIVITY, Chair: Michael Quinn, Oregon State University

  1. Autopilot: Adaptive Control of Distributed Applications
    Randy Ribler, Jeffrey Vetter, Huseyin Simitci, Daniel Reed, University of Illinois, Urbana-Champaign
  2. Prediction and Adaptation in Active Harmony
    Jeffrey Hollingsworth, Peter Keleher, University of Maryland
  3. A Resource Query Interface for Network-Aware Applications
    Bruce Lowekamp, Nancy Miller, Dean Sutherland, Thomas Gross, Peter Steenkiste, Jaspal Subhlok, Carnegie Mellon University
SESSION 8: INTERACTIVE SYSTEMS, Chair: Bruce Berra, Wright State University

  1. Personal Tele-Immersion Devices
    Tom DeFanti, Dan Sandin, Greg Dawe, Maxine Brown, Maggie Rawlings, Gary Lindahl, A. Johnson, and J. Leigh, University of Illinois, Chicago
  2. A Framework for Interacting with Distributed Programs and Data
    Steven Hackstadt, Christopher Harrop, Allen Malony, University of Oregon
  3. Efficient Coupling of Parallel Applications Using PAWS
    Peter H. Beckman, Patricia K. Fasel, William F. Humphrey, Susan M. Mniszewski, Los Alamos National Laboratory

6:30 p.m. - 9:30 p.m. DINNER CRUISE

Friday, July 31, 1998

8:30 a.m. - 10:00 a.m. CONCURRENT SESSIONS (9, 10)
SESSION 9: DIGITAL LIBRARIES AND DATABASES, Chair: Reagan Moore, SDSC

  1. High-Performance Distributed Digital Libraries: Building the Interspace for the Grid
    Bruce Schatz, University of Illinois, Urbana-Champaign
  2. Adaptive Load Sharing for Clustered Digital Library Servers
    Huican Zhu, Tao Yang, Qi Zheng, David Watson, Oscar Ibarra, Terry Smith, University of California, Santa Barbara
  3. Cooperative Caching of Dynamic Content on a Distributed Web Server
    Vegard Holmedahl, Ben Smith, and Tao Yang, University of California, Santa Barbara
SESSION 10: INFRASTRUCTURE, Chair: Charlie Catlett, NCSA

  1. Global High Performance Networking: Connecting the vBNS and the Asia-Pacific Advanced Network for Research and Education Applications
    Michael A. McRobbie, Karen H. Adams, Dennis B. Gannon, Donald F. McMullen, Douglas D. Pearson, R. Allen Robel, Steven S. Wallace, James G. Williams, Indiana University
  2. The NetLogger Methodology for High Performance Distributed Systems Performance Analysis
    Brian Tierney, William Johnston, B. Crowley, Gary Hoo, C. Brooks, and D. Gunter, Lawrence Berkeley National Laboratory
  3. Monitoring Health & Status in a Metacomputing Environment: The Globus Heartbeat Monitor
    Paul Stelling, Craig A. Lee, Carl Kesselman, Ian Foster, Gregor von Laszewski, The Aerospace Corporation, Argonne National Laboratory, University of Southern California
10:00 a.m. - 10:30 a.m. BREAK

10:30 a.m. - 12:00 p.m. CONCURRENT SESSIONS (11, 12)

SESSION 11: APPLICATIONS AND SYSTEMS, Chair: To Be Announced

  1. High-Speed, Wide Area, Data Intensive Computing: A Ten Year Retrospective
    Bill Johnston, Lawrence Berkeley National Laboratory
  2. Mesh Partitioning for Distributed Systems
    Jian Chen, Valerie Taylor, Northwestern University
  3. Towards a Hierarchical Scheduling System for Distributed WWW Server
    Daniel A. Andresen, Tim R. McCune, Kansas State University
SESSION 12: COMMUNICATIONS, Chair: Salim Hariri, Syracuse University

  1. Adaptive Data Communication Algorithms for Distributed Heterogeneous Systems
    Prashanth B. Bhat, Viktor K. Prasanna, C.S. Raghavendra, The Aerospace Corporation, University of Southern California
  2. Sender Coordination in the Distributed Virtual Communication Machine
    Marcel-Catalin Rosu, Karsten Schwan, Georgia Institute of Technology
  3. A Software Architecture for Global Address Space Communication on Clusters: Put/Get on Fast Messages
    Louis A. Giannini, Andrew A. Chien, University of Illinois, Urbana-Champaign

KEYNOTE SPEAKERS

Wednesday Keynote Speech: How Distributed Computing Changes Science
Larry Smarr
National Center for Supercomputing Applications

Dr. Smarr has been one of the pioneers in creating a national information infrastructure to support academic research, governmental functions, and industrial competitiveness. In 1985, Dr. Smarr became the Director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC). In 1997, he became Director of the National Computational Science Alliance. Dr. Smarr has conducted observational, theoretical, and computational astrophysics research for fifteen years. In 1995 he was elected to the National Academy of Engineering.


Thursday Keynote Speech: Peering into the Future
Dr. Richard F. Rashid
Vice President, Research
Microsoft Corporation

Abstract:
The relentless pace of progress in hardware and software technology will dramatically change computing over the next 10 years. Software technologies once thought "esoteric" such as natural language processing, Bayesian reasoning, computer vision and speech will dramatically affect not only the way humans and computers interact, but also the way humans interact with each other. Moreover, the fundamental relationships between software and hardware will significantly change as software objects become more dynamic and operating systems increase the level of abstraction provided to developers. This talk addresses these coming changes and discusses how they will affect the uses of computing and the kinds of software our industry will be developing in the future.

Bio:
Dr. Rashid joined Microsoft in the fall of 1991 as its first Director of Research and now holds the title of Vice President of Research. He received Bachelor's degrees in Mathematics and Comparative Literature from Stanford University in 1974 and he received his PhD in Computer Science from the University of Rochester in 1980. He was a Professor at Carnegie Mellon University for 12 years and was best known during that time for the creation of the Mach operating system. He has published papers in a number of areas of computer science including computer vision, AI, programming languages, data compression, networking, and distributed and parallel operating system design. He is well known for his interest in computer gaming and as a graduate student implemented (with Gene Ball) Alto Trek -- the first real-time network space game for the Xerox Alto. Among other odd facts, he is the inventor of the term NUMA (non-uniform memory architecture), is often credited with the term "microkernel", is an avid fan of Babylon 5 and continues to write nearly 30,000 lines of code a year on various projects.


TUTORIAL DESCRIPTIONS
TUTORIAL 1: How to build a Beowulf: Assembling, Programming, and Using a Clustered PC -- Do-it-yourself Supercomputer
Thomas Sterling,
California Institute of Technology

It has recently become possible to assemble a collection of commodity mass-market hardware components and freely available software packages in a day, be executing real-world applications by dinner time, and achieve sustained performance of greater than 1 Gflops at a total cost of around $50,000. Furthermore, these numbers are improving on almost a daily basis. This full-day tutorial will cover all aspects of system assembly, integration, software installation, programming, application development, system management, and benchmarking.

Demonstrations with actual hardware and software components will be conducted throughout the tutorial. Participants will be encouraged to closely examine and manipulate elements of a Beowulf at various stages of integration with strong Q&A interaction between presenters and attendees.

Dr. Thomas Sterling has been engaged in research related to parallel computer architecture, system software, and evaluation for more than a decade. He was a key contributor to the design, implementation, and testing of several experimental parallel architectures.

The focus of Dr. Sterling's research has been on the modeling and evaluation of performance factors determining scalability of high performance computing systems. Upon completion of his Ph.D. as a Hertz Fellow from MIT in 1984, Dr. Sterling served as a research scientist at Harris Corporation's Advanced Technology Department, and later with the systems group of the IDA Supercomputing Research Center. In 1992, Dr. Sterling joined the USRA Center for Excellence in Space Data and Information Sciences to support the NASA HPCC earth and space sciences project at the Goddard Space Flight Center. Dr. Sterling was Adjunct Associate Professor at the University of Maryland College Park, where he lectured on computer architecture. He holds six patents, is the co-author of two books and has published dozens of papers in the field of parallel computing. Dr. Thomas Sterling is currently Senior Staff Scientist, High Performance Computing Systems Group, Jet Propulsion Laboratory; and Visiting Associate, Center for Advanced Computing Research, California Institute of Technology.

Thomas Sterling
Caltech/JPL
Mail Code 158-79
1200 E. California Blvd.
Pasadena, CA 91125
Phone: (626)-395-3901
Fax: (626)-584-5917
Email: tron@cacr.caltech.edu


TUTORIAL 2: Collaborative Visualization in Distributed Virtual Environments
Jason Leigh,
NCSA and Electronic Visualization Laboratory
Andrew E. Johnson, University of Illinois at Chicago
Half-day Tutorial - Morning July 28, 1998

Tele-Immersion is the unification of collaborative virtual reality (VR) and video conferencing in the context of significant computation and data-mining. The goal of Tele-Immersion is to use the latest in visualization, networking and database technology to allow domain scientists to collaboratively steer scientific computations, query enormous raw and derived data-sets, and visualize their results in a seamless virtual environment.

This course will provide:

This course is targeted at: domain scientists and engineers who are interested in learning how to apply collaborative virtual reality technology in their areas of research; and at HPDC application developers who are interested in creating and deploying such technologies.

Jason Leigh is a senior scientist with a joint appointment at the National Center for Supercomputing Applications and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago, where his main research focus is collaborative virtual reality (or Tele-Immersion). Jason's background in human factors, interactive computer graphics, networking, databases, and art allows him to approach the development of techniques and tools for Tele-Immersion from multiple perspectives. Jason is assisting General Motors and Hughes Research in applying collaborative technologies to their VisualEyes VR vehicle design system. He is working as a member of the NICE project to develop VR collaborative educational environments for children. Finally, Jason is leading the research and development of a software architecture (called CAVERNsoft) that integrates networking and databases in a manner that is optimized for Tele-Immersion.

Jason Leigh
Electronic Visualization Lab (M/C 154)
University of Illinois at Chicago
851 S. Morgan St. Room 1120 SEO
Chicago, IL 60607-7053

Email: jleigh@eecs.uic.edu
Phone: (312) 996-3002
Fax: (312) 413-7585

Andrew E. Johnson, PhD, is a faculty member of the Electronic Visualization Laboratory and an Assistant Professor in the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago. His current research interests focus on "tele-immersion" and collaboration in immersive virtual environments, through his work on the CAVERNsoft project. As a continuation of his work on the NICE project, a collaborative virtual reality learning environment for young children, he is currently working on an NSF-funded study of deep learning and visualization technologies, investigating how collaborative virtual reality can be used to help teach concepts that are counter-intuitive to a learner's current mental model.


TUTORIAL 3: High Performance Computing with Legion
Andrew Grimshaw
University of Virginia

Half-day Tutorial - Morning July 28, 1998

Developed at the University of Virginia, Legion is an integrated software system for distributed parallel computation. While fully supporting existing codes written in MPI and PVM, Legion provides features and services that allow users to take advantage of much larger, more complex resource pools. With Legion, for example, a user can easily run a computation on a supercomputer at a national center while dynamically visualizing the results on a local machine. As another example, Legion makes it trivial to schedule and run a large parameter space study on several workstation farms simultaneously. Legion permits computational scientists to use cycles wherever they are, allowing bigger jobs to run in shorter times through higher degrees of parallelization.

Key capabilities include the following:

These features also make Legion attractive to administrators looking for ways to increase and simplify the use of shared high-performance machines. The Legion implementation emphasizes extensibility, and multiple policies for resource use can be embedded in a single Legion system that spans multiple resources or even administrative domains.

This tutorial will provide background on the Legion system and teach how to run existing parallel codes within the Legion environment. The target audience is supercomputing experts who help scientists and other users get their codes parallelized and running on high performance systems.

Andrew S. Grimshaw is an Associate Professor of Computer Science and Director of the Institute of Parallel Computation at the University of Virginia. His research interests include high-performance parallel computing, heterogeneous parallel computing, compilers for parallel systems, operating systems, and high-performance parallel I/O. He is the chief designer and architect of Mentat and Legion. Grimshaw received his M.S. and Ph.D. from the University of Illinois at Urbana-Champaign in 1986 and 1988 respectively.

Andrew Grimshaw
Department of Computer Science
University of Virginia
Charlottesville, VA 22903

(804) 982-2204
fax: (804) 982-2214
grimshaw@Virginia.edu


TUTORIAL 4: Introduction to Performance Issues in Using MPI for Communication and I/O
William Gropp, Rusty Lusk, Rajeev Thakur
Argonne National Laboratory

Half-day Tutorial - Afternoon July 28, 1998

MPI is now widely accepted as a standard for message-passing parallel computing libraries. MPI-2, released in June 1997, adds additional capabilities to MPI, including remote-memory access and parallel I/O. The richness of MPI provides many ways to express an operation, such as exchanging messages for a grid-based computation or writing out a distributed array to a parallel file system. Choosing the best way requires both an understanding of the MPI approach to high performance and the capabilities of particular implementations of MPI.

This tutorial will discuss performance-critical issues in message-passing programs, explain how to examine the performance of an application using MPI-oriented tools, and show how the features of MPI can be used to attain peak application performance. It will be assumed that attendees have an understanding of the basic elements of the MPI specification. Experience with message-passing parallel applications will be helpful but not required.

William Gropp is a senior computer scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. After receiving his Ph.D. in Computer Science from Stanford University in 1982, he held the positions of Assistant (1982-1988) and Associate (1988-1990) Professor in the Computer Science Department of Yale University. In 1990, he joined the Numerical Analysis group at Argonne. His research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. He is a co-author of "Using MPI: Portable Parallel Programming with the Message-Passing Interface" and is a chapter author in the MPI-2 Forum. His current projects include the design and implementation of MPICH, a portable implementation of the MPI Message-Passing Standard, the design and implementation of PETSc, a parallel, numerical library for PDEs, and research into programming models for parallel architectures.

Ewing ("Rusty") Lusk is a senior computer scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. After receiving his Ph.D. in mathematics at the University of Maryland in 1970, he served first in the Mathematics Department and later in the Computer Science Department at Northern Illinois University before joining Argonne National Laboratory in 1982. He has been involved in the MPI standardization effort both as an organizer of the MPI-2 Forum and as a designer and implementor of the MPICH portable implementation of the MPI Standard. His current projects include design and implementation of the MPI-2 extensions to MPICH and research into programming models for parallel architectures. Past interests include automated theorem-proving, logic programming, and parallel computing. He is a co-author of several books in automated reasoning and parallel computing, including "Using MPI: Portable Parallel Programming with the Message-Passing Interface". He is the author of more than eighty research articles in mathematics, automated deduction, and parallel computing.

Rajeev Thakur is an assistant computer scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. He received a Ph.D. in Computer Engineering from Syracuse University in 1995. He is actively engaged in parallel I/O research, particularly in implementing portable parallel I/O interfaces and I/O characterization of parallel applications. He participated in the MPI-2 Forum to define a standard, portable interface for parallel I/O (MPI-IO). He is currently developing a high-performance, portable MPI-IO implementation called ROMIO.

Rajeev Thakur
Argonne National Laboratory
Building 221, Room C-247
Argonne, IL, 60439
Email: thakur@mcs.anl.gov
Phone: (630) 252-1682
Fax: (630) 252-5986


TUTORIAL 5: The Globus Grid Programming Toolkit
Gregor von Laszewski, Argonne National Laboratory
Steven Fitzgerald, Information Sciences Institute of the University of Southern California

Half-day Tutorial - Afternoon July 28, 1998

This tutorial is an introduction to the capabilities of the Globus grid programming toolkit. Computational grids promise to enable a wide range of emerging application concepts such as remote computing, distributed supercomputing, tele-immersion, smart instruments, and data mining. However, the development and use of such applications is in practice difficult and time consuming, because of the need to deal with complex and highly heterogeneous systems. The Globus grid programming toolkit is designed to help application developers and tool builders overcome these obstacles to the construction of "grid-enabled" scientific and engineering applications. It does this by providing a set of standard services for authentication, resource location, resource allocation, configuration, communication, file access, fault detection, and executable management. These services can be incorporated into applications and/or programming tools in a "mix-and-match" fashion to provide access to needed capabilities.

Our goal in this tutorial is both to introduce the capabilities of the Globus toolkit and to help attendees apply Globus services to their own applications. Hence, we will structure the tutorial as a combination of Globus system description and application examples.

Dr. Gregor von Laszewski obtained his Ph.D. in Computer Science from Syracuse University and is currently a researcher in the Mathematics and Computer Science Division at Argonne National Laboratory. He has been involved in the development of many Globus components and applications, and leads an effort to apply Globus services to the real-time analysis of data from scientific instruments.

Gregor von Laszewski
Argonne National Laboratory
Building 221, Room A
Argonne, IL, 60439
Email: gregor@mcs.anl.gov
Phone: (630) 252-0472
Fax: (630) 252-5986

Dr. Steven Fitzgerald received his D.Sc. in Computer Science from the University of Massachusetts Lowell. He is a researcher at the Information Sciences Institute of the University of Southern California and holds a faculty position at California State University Northridge. Steve's involvement in the Globus project has focused on Globus's information services, and the creation and deployment of the GUSTO testbed.