Writing and Running Parallel Software
At this writing,
hex is configured as a standard mathematics workstation.
Thus it has all the software available on a standard mathematics-department
Linux workstation, including compilers, MATLAB, Maple, and so forth.
What makes a cluster a cluster, though, is software that runs in parallel to take advantage of the distributed-memory parallel model.
MPI (the Message Passing Interface) is the most popular tool for writing distributed parallel software. Our old cluster also offered the Parallel Virtual Machine (PVM) software, but PVM has not been set up on hex at this time.
In addition to the standard math-workstation software,
hex also has the following packages installed:
- BLAS (Basic Linear Algebra Subprograms)
- Provides the basic building-block routines of numerical linear algebra, such as vector and matrix operations.
- LAPACK (Linear Algebra PACKage)
- Provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, and generalized Schur) and related computations (e.g., reordering Schur factorizations and estimating condition numbers) are also included. LAPACK handles dense and banded matrices, but not general sparse matrices. Equivalent functionality is provided for real and complex matrices in both single and double precision. LAPACK is coded in Fortran 77 and built with gcc.
- BLACS (Basic Linear Algebra Communication Subprograms)
- A collection of optimized routines for communicating linear-algebra data between nodes. Most users of this cluster will not use BLACS directly; it provides the communications layer for ScaLAPACK.
- ScaLAPACK (Scalable Linear Algebra PACKage)
- A collection of linear algebra routines for distributed-memory machines, using BLACS and LAPACK for underlying support. This is the most useful mathematics package for most users, as it hides much of the often complicated programming that multiprocessing requires.