MPI Examples for the IHPCL Clusters

MPI Example

The OpenMPI implementation of MPI is the default on the IHPCL clusters.  On some platforms other implementations are also available, including MPICH, MVAPICH, LAM, and Intel MPI.  Using a non-default implementation assumes familiarity with adjusting your PATH and other environment variables in Linux, and is beyond the scope of this document.


OpenMPI

First, put the OpenMPI tools on your PATH:

export PATH=/net/hj1/ihpcl/bin:$PATH # for bash/ksh/zsh

or

setenv PATH /net/hj1/ihpcl/bin:$PATH # for csh/tcsh
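The compile commands below assume a simple MPI program in hello.c (with a Fortran counterpart in hello.f).  The original source is not reproduced on this page; a minimal hello.c might look like the following sketch, which prints each process's rank and the node it runs on:

/* Sketch of a minimal MPI "hello" program (hello.c); the actual
 * program used in this example may differ. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start up MPI           */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks  */
    MPI_Get_processor_name(name, &namelen);  /* node this rank runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down MPI          */
    return 0;
}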

Compile the program with the MPI wrapper compilers:

mpicc -o hello hello.c

mpif77 -o hello hello.f

Use cscan to check which nodes are available:

cscan warp      # To scan the entire warp cluster

cscan warp 3 7  # To scan just the nodes warp3 through warp7

Put the node names in a hostfile; the file hosts used below contains:

warp3
warp4
warp4
warp7

NOTE: warp4 is listed twice so that two processes start on it, one for each of its two processors.

Finally, launch the program, starting one process per hostfile entry:

mpirun -hostfile hosts -np 4 hello
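If hello is built from the sketch above, the run should print one greeting per process, each showing its rank and the node it landed on, with two of the lines coming from warp4; the exact text will depend on your own hello program.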


 

Program Development Environment

We currently recommend the Intel compilers for all development on the x86, x86_64, and IA64 platforms.  OpenMPI is built with these compilers, so you will have access to them by default.  Optimization flag conventions differ somewhat from gcc; see the man pages and/or Intel's web manual for more information.

Compilers:     icc, icpc, and ifort are available for C, C++ (icpc is essentially a wrapper around icc), and Fortran 77/90/95.

Debugger:     The Intel debugger idb is based on the ladebug project and therefore has built-in support for debugging MPI-based codes.



Documentation for MPI

Complete documentation for the MPI standard is available from Argonne National Laboratory. Man pages for the MPI library functions and run-time commands are installed in /net/hj1/ihpcl/man and are also available online from the MPICH site.