SCL Cluster Cookbook
Libraries for Parallel Applications
MPI (Message Passing Interface) is a
portable standard for message-passing libraries, designed to
provide a common library specification for massively parallel
processor vendors. Parallel applications developed with MPI can
run without recoding on systems ranging from clusters of inexpensive
computers to large parallel supercomputers. At least two
implementations of MPI are freely available for clusters: MPICH from
Argonne National Laboratory, and LAM from the University of Notre Dame.
PVM (Parallel Virtual Machine) is another library that supports parallel programs on clusters; it is targeted specifically at heterogeneous collections of UNIX computers.
The paper PVM and MPI: A Comparison of Features explores the differences between the two systems for parallel computation.
An implementation of UNIX System V IPC (semaphores, messages, and shared memory) for distributed systems is available. Kamran Karimi <email@example.com> wrote in an email message: "DIPC (Distributed Inter-Process Communication) can be used to build clusters of PC computers. The programming model is the same as System V IPC. Currently DIPC is only available for Linux, but it could be ported to other UNIX variants like FreeBSD."
ScaLAPACK is a library
of parallelized linear algebra routines that operates on clusters
using PVM or MPI. ScaLAPACK requires an installation of the LAPACK linear algebra
routines and of the BLACS
library, which handles communication in linear algebra programs. These separate
pieces take some work to configure and install (pre-built
libraries are available for a few platforms), but ScaLAPACK can save
a lot of time and effort if it helps you avoid rewriting old code or
writing new parallelized code.
The Distribution of ASCI-Red Work provides utilities and libraries developed for the Intel ASCI Option Red supercomputer at Sandia National Laboratories. It includes BLAS, FFT, and extended-precision libraries tuned for Pentium Pro processors. The libraries are intended for single-CPU or dual-CPU systems running Linux, and they appear to be well optimized.