Basically, there are two ways to run
QIRM: traditionally, as a usual binary on a single node, or massively parallel, distributed
over N nodes. Since the individual QMC runs are independent and only the least-squares
optimization requires communication, QIRM should scale linearly up to several
thousand nodes. This has been tested up to 392 nodes on a T3E at the ZIB in Berlin.
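The following is a minimal sketch, not QIRM code, of the communication pattern
just described, written against a generic MPI setup: each node performs its QMC
run independently, and only the optimization step collects results, modeled here
by a single MPI_Reduce. The function run_qmc() is a hypothetical placeholder.

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical stand-in for one independent QMC run on this node. */
    static double run_qmc(int seed)
    {
        /* ... Monte Carlo sampling would happen here ... */
        return (double)seed;  /* dummy local result */
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Independent phase: no communication between nodes. */
        local = run_qmc(rank + 1);

        /* Only the optimization step needs communication: collect the
           local results on rank 0, e.g. to feed the least-squares fit. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result from %d nodes: %f\n", size, global);

        MPI_Finalize();
        return 0;
    }

Because all communication is confined to this single collective step, the
parallel overhead stays small and the near-linear scaling claimed above is
plausible.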
Parallel computation can be achieved in different ways, depending on
the options offered by the platform (a sample `Makedefault' excerpt follows the list):
- MPP, no MPI: On a CRAY T3E,
compiler extensions are available; the 1PE versions of the
module `hrsysmpp' refer to these functions. In `Makedefault', one has to set the macros
`MPP=hrmympp.' and `MPI=hrmmpi_dum.' accordingly.
- MPI, no MPP: If instead an MPI implementation such as MPICH or LAM/MPI
is available, the same effect can be achieved with MPI functions. To this end,
the macros `MPP=hrmympp. hrsysmpi.' and `MPI=hrmmpi.' have to be set.
- MPP+MPI: As the compiler automatically invokes the MPP routines, the choice
for the macros is `MPP=hrmympp.' and `MPI=hrmmpi.'.
- no MPP, no MPI: The 1PE version applies with macros
`MPP=hrmympp. hrsysnomp.' and `MPI=hrmmpi_dum.'.
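The exact layout of `Makedefault' is not shown in this section. Purely as an
illustration, an excerpt for the second case (MPI, no MPP) might look as
follows, with the macro values taken verbatim from the list above and the
surrounding lines being assumptions:

    # Hypothetical Makedefault excerpt for the `MPI, no MPP' case
    MPP=hrmympp. hrsysmpi.
    MPI=hrmmpi.

Presumably, the program is then rebuilt so that the module versions matching
the chosen macros are linked in.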
Robert Bahnsen
1/28/2002