All of the cluster-related software is installed under /net/hj1/ihpcl/bin. Symbolic links are maintained in this directory so that you can keep the same value in your PATH variable under both Linux and Solaris. A recommended stub file for updating your .profile or .login, along with instructions for using it, can be found at this location.
Since MPI plays such an important role in the high-performance computing world, I've given it its own section. For a (relatively) gentle introduction to MPI in the IHPCL environment, please check this step-by-step guide. With the latest versions of MPICH and LAM (two different implementations of MPI), there is relatively little difference in performance.
As a general rule, however, we still recommend the LAM MPI implementation: it has slightly better diagnostics, and it works with xmpi (a visual diagnostic tool for MPI programs). The mapping of calling scripts to MPI libraries and compilers is as follows:
| Script Name | MPI Library | Compiler | Notes |
|-------------|-------------|----------|-------|
| hf77        | LAM         | pgf90    | NOTE: Fortran 90, not 77 |
For more information on the MPI subroutines, please consult the on-line man pages provided with the MPICH distribution. (Note that we do not currently have any of the MPE subroutines installed; contact ihpc-admin@cc if you have any questions.) The documentation for the subroutine calls is the same for MPICH and LAM. To get information on mpirun and the other LAM utilities, run "man -M /net/hj1/ihpcl/i586rh-7.1/lam-6.5.4/man mpirun" (substituting whatever other program you are interested in) from any Unix prompt within the CoC domain.
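Putting the pieces together, a typical LAM session looks roughly like the following sketch. These are cluster-specific commands, not a runnable script: lamboot, mpirun, and lamhalt are standard LAM utilities documented in the man pages above, while the hostfile name, program name, and process count are assumptions for illustration.

```shell
# Boot the LAM run-time daemons on the machines listed in a hostfile
# (one hostname per line; "lamhosts" is an illustrative name).
lamboot -v lamhosts

# Compile with a LAM wrapper script from the table above, which adds
# the MPI include and library paths to the underlying compiler.
hcc -o hello hello.c

# Run 4 copies of the program across the booted nodes.
mpirun -np 4 hello

# Shut the LAM daemons down when you are finished.
lamhalt
```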