MPI Examples for the IHPCL Clusters
The OpenMPI implementation of MPI is the default version for the IHPCL
clusters. On some specific platforms other options are available,
including MPICH, MVAPICH, LAM, and/or Intel MPI. Using the non-default
implementations assumes a familiarity with how to adjust your PATH and other
environment variables in Linux, and is beyond the scope of this document.
- Get a copy of the distributed "hello world"
program in either its C form or its Fortran form.
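If you do not have the distributed file handy, a minimal MPI hello world in C
might look like the following sketch (the distributed program may print
slightly different text):
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* host running this rank */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}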
- To compile the program, first make sure that the mpicc/mpif77/mpif90
command is found by adding the directory /net/hj1/ihpcl/bin/ to the start
of your PATH environment variable:
- It is important that this directory be at the start of your PATH
variable, as some of the machines have a (broken) version of MPI
installed in /usr/bin. You will do best to add a line like the
following to your shell's rc file (i.e. .bashrc for bash, .cshrc for csh/tcsh):
export PATH=/net/hj1/ihpcl/bin:$PATH # for bash
setenv PATH /net/hj1/ihpcl/bin:$PATH # for csh/tcsh
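To confirm that the correct wrappers are being picked up, you can check where
the shell finds mpicc; with the PATH change above it should resolve to the
IHPCL installation:
which mpicc # should print /net/hj1/ihpcl/bin/mpicc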
- MPI requires a mechanism for launching processes on remote
machines. In the secured environment of the IHPCL clusters, you will need
an ssh key to use MPI.
Before using mpirun, follow the instructions below that best match the
current state of your ssh key configuration.
1. You already have an ssh key in your .ssh directory and its public
version is in your authorized_keys file. If you are in this group, no
action is needed. Please proceed to the ssh-agent configuration step.
2. You do not have an ssh key. Create and configure one by using the
following commands (they can be copied and pasted as a block to the command
line):
ssh-keygen -b 1024 -t rsa -f ~/.ssh/id_rsa -N '' # (Use two single quotes.)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
You should now have an appropriately configured ssh key. To test
it, try ssh'ing from one CoC machine to another. If you are not prompted
for your password, then the key creation and configuration were successful.
You will most likely be prompted to accept the host key for the remote
machine; this is expected behavior until you have updated your
known_hosts file as described below.
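For example (warp3 and warp4 are just illustrative node names), from warp3 you
could run:
ssh warp4 hostname # should print warp4 without prompting for a password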
- Note that creating an ssh key only needs to be done once.
- Append the list of IHPCL host RSA fingerprints to your
known_hosts file. You can do this by running the command provided for
this purpose in /net/hj1/ihpcl/bin, assuming you have already changed
your PATH variable to include that directory.
You will only need to do this once, although it won't
hurt to rerun it periodically.
- Enable the use of ssh-agent by typing ssh-agent
/bin/bash or ssh-agent /usr/local/bin/tcsh. Then type ssh-add
to associate your ssh key with the agent. If your key has an associated
passphrase, you will be asked to enter it.
If ssh-add complains about being unable to connect to your authentication
agent, type eval `/usr/bin/ssh-agent -s` and try again.
Note that you will need to configure ssh-agent each time you start a new
login session.
- Now that the execution environment is set up, let's
return to the MPI hello world program. Compile it on a server [i.e. ccil
for the warp cluster, gondor for the rohan cluster, etc.] for use on any of its
cluster machines. Here's a sample command line:
mpicc -o hello hello.c
mpif77 -o hello hello.f
- To run the compiled program, you have to log on to one of
the machines in the cluster. So, for example, you might log on to warp3 or warp4.
- The command cscan (in /net/hj1/ihpcl/bin) allows you to
quickly scan through a cluster to see which nodes are free. For example:
cscan warp # To scan the entire warp cluster
cscan warp 3 7 # To scan just the nodes warp3 through warp7
- Once you have determined a set of free processors for your
code to run on, you then need to create a machinefile named hosts listing the
machines you want to run on. A sample file follows:
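For instance, a five-entry hosts file for the warp cluster might look like
this (the node names are illustrative):
warp1
warp2
warp3
warp4
warp4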
NOTE: warp4 is listed twice so that two processes are
started on it, one for each of its two processors.
- You can now run the program using the command mpirun (also
found in /net/hj1/ihpcl/bin).
mpirun -hostfile hosts -np 5 hello
- The option -np specifies how many processes you want to
start. So in this case, 5 processes will be launched across the machines
listed in the hosts file.
- An alternative invocation that OpenMPI allows is to list the hosts
directly on the command line with the --host option, for example:
mpirun --host warp3,warp4 -np 2 hello
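With the hello world sketch above and -np 5, the output would look something
like the following (the line ordering is not guaranteed and the exact text
depends on the program):
Hello from rank 0 of 5 on warp1
Hello from rank 1 of 5 on warp2
Hello from rank 2 of 5 on warp3
Hello from rank 3 of 5 on warp4
Hello from rank 4 of 5 on warp4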
Program Development Environment
We currently recommend using the Intel Compilers for all development on the x86,
x86_64 and IA64 platforms. OpenMPI is built using these compilers, so you
will have access to them by default. Optimization flag conventions are a
little different from gcc – see man pages and/or Intel’s web manual
for more information.
icc, icpc, and ifort are the compilers available for C, C++ (icpc is really
just a wrapper for icc), and Fortran 77/90/95.
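For example, typical optimized builds might look like the following (the
flags shown are standard Intel compiler options; see the man pages for the
full set, and note that the same flags can be passed through the mpicc/mpif77
wrappers):
icc -O3 -ipo -o hello hello.c
ifort -O3 -ipo -o hello hello.f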
Debugger: The Intel
debugger idb is based on the ladebug project, which means it has
built-in support for debugging of MPI-based codes.
Documentation for MPI
Everything about the MPI standard can be found at Argonne National Laboratory.
Man pages for the MPI commands (both library functions and run-time commands)
are in /net/hj1/ihpcl/man, and are also available online from the MPICH site.