Anthony Skjellum received the BS in Physics (1984), MS in Chemical Engineering (1985), and PhD in Chemical Engineering with a Computer Science minor (1990), all from the California Institute of Technology. He has worked in scientific and high-performance computing since 1990 and was on the faculty at Mississippi State University from 1993 to 2003. He is currently a professor and Chair of the Computer and Information Sciences Department at the University of Alabama at Birmingham.
Dr. Skjellum has made specific contributions in the area of portable, high-performance message-passing specifications and systems (the MPI and MPI/RT Forums), including the widely used MPICH software jointly designed at Mississippi State University and Argonne National Laboratory. His research group at Mississippi State created a number of freeware libraries for scientific computing, including MPICH as well as object-oriented middleware for sparse, parallel, and sequential linear algebra (PMLP). In 1996, he founded MPI Software Technology, Inc., which develops commercial off-the-shelf software for message passing and mathematical libraries. Dr. Skjellum has received funding from NSF, NASA, DARPA, DOD, DOE, Intel, and others for research and advanced prototyping in scientific and high-performance computing.
Publisher: The MIT Press
Collaborators: William Gropp, Ewing Lusk
The parallel programming community recently organized an effort to standardize the communication subroutine libraries used for programming massively parallel computers such as the Cray T3D, Intel Paragon, and IBM SP2, as well as networks of workstations. The standard they developed, the Message-Passing Interface (MPI), not only unifies within a common framework programs written using a variety of existing (and mutually incompatible) parallel libraries but also allows programs to be ported between machines in the future. Three of the authors of MPI have teamed up here to present a tutorial on how to use MPI to write parallel programs, particularly for large-scale applications.
MPI, the long-sought standard for expressing parallel algorithms and running them on a variety of computers, allows software development costs to be amortized across parallel machines and networks and will spur the development of a new level of parallel software. This timely book covers all the details of the MPI functions used in its motivating examples and applications, with many MPI functions introduced in context.
The topics covered include issues in the portability of programs among MPP systems, examples and counterexamples illustrating subtle aspects of the MPI definition, instructions for writing libraries that take advantage of MPI's special features, application paradigms for large-scale examples, complete program examples, visualization of program behavior with graphical tools, an implementation strategy and a portable implementation, use of MPI on workstation networks and on MPPs (Intel, Cray, IBM, Meiko), scalability and performance tuning, and instructions for converting existing codes to MPI.