MPICL is a subroutine library for collecting information on communication
and user-defined events in message-passing parallel
programs written in C or FORTRAN. In particular, for MPI programs it
uses the MPI profiling interface to automatically intercept calls to
MPI communication routines, eliminating the need to add more than a
few statements to the source code in order to collect the information.
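As a generic illustration of how the profiling interface makes this
interception possible (a sketch, not MPICL's actual wrapper code): the
instrumentation library provides its own MPI_Send, which records the
event and then calls the name-shifted PMPI_Send.

   #include <mpi.h>

   /* Sketch of a profiling-interface wrapper in the style an
    * instrumentation library uses internally; not MPICL source.
    * Linking such a library ahead of the MPI library routes every
    * MPI_Send in the application through this wrapper. */
   int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                int dest, int tag, MPI_Comm comm)
   {
       double t0 = MPI_Wtime();              /* timestamp the entry    */
       int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
       double elapsed = MPI_Wtime() - t0;    /* time spent in the send */
       /* ... save (event id, elapsed, count, dest, tag) to an
        * internal buffer ... */
       (void)elapsed;
       return rc;
   }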
By using the MPI_Pcontrol interface to the instrumentation commands,
a single version of the MPI program can be used whether the
instrumentation library is linked with the executable or not.
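A sketch of what such instrumentation looks like in the source.
MPI_Pcontrol itself is part of the MPI standard and is defined as a
no-op in an unprofiled MPI library, which is what makes the
single-version property work.  The command codes used below
(TRACELEVEL, TRACENODE, TRACEFILES, TRACESTART, TRACESTOP) follow the
usage pattern in the MPICL documentation; check the header shipped
with the distribution for their exact names and argument lists.

   #include <mpi.h>
   /* The TRACE* codes come from MPICL's header; they are shown here
    * per the documented usage pattern -- verify against the
    * distribution before use. */

   int main(int argc, char **argv)
   {
       MPI_Init(&argc, &argv);

       MPI_Pcontrol(TRACELEVEL, 1, 1, 1);      /* event/msg/user detail */
       MPI_Pcontrol(TRACENODE, 100000, 0, 1);  /* buffer, flush, sync   */
       MPI_Pcontrol(TRACEFILES, NULL, "trace.out", 0); /* temp, final   */

       MPI_Pcontrol(TRACESTART);
       /* ... the communication to be instrumented ... */
       MPI_Pcontrol(TRACESTOP);

       MPI_Finalize();
       return 0;
   }

When the instrumentation library is not linked, the standard
MPI_Pcontrol simply ignores these calls (the TRACE* constants must
still be visible at compile time, e.g. via the MPICL header).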
MPICL instruments the application code,
primarily using C routines that query the system clock
and save event-specific information in internal buffers, and
incurs more overhead than a good direct instrumentation of the
underlying system would. However, great care has been taken to
minimize this overhead, and versions of the instrumentation layer have
been used successfully in numerous performance studies over the last
10 years. The ability to port and use MPICL quickly, without having
to depend on the existence or correctness of vendor-supplied
performance tools, has been crucial in many of these studies.
MPICL is typically used in one of two ways.
It can be used to collect profile data, summarizing
the number of occurrences, the associated volume statistics, and the time
spent in communication and user-defined events for each processor.
It can also be used to collect detailed traces of each event, which
can then be viewed using a trace visualization tool such as ParaGraph.
MPICL uses the PICL trace data format, described in the technical
report referenced below. The format has been updated to include new
MPI-specific events, but its structure is unchanged.
MPICL is an extension to the Portable Instrumented Communication
Library (PICL), a software package that
provided a portable message-passing interface in the days before the
MPI standard. The PICL message-passing commands simply call the
underlying native commands on each machine on which the library is
implemented. While an MPI user need know nothing about PICL message
passing, this heritage means that MPICL can also be used to collect
performance data for non-MPI programs. However, information on
communication events is collected only if MPI or PICL message-passing
commands are used, or if the user instruments the message-passing
layer directly using MPICL instrumentation commands.
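User-defined events use the same MPI_Pcontrol mechanism to mark the
entry and exit of a code region. The TRACEEVENT code and its argument
list below are a hypothetical sketch modeled on MPICL's documented
usage pattern; consult the distribution for the exact form.

   #include <mpi.h>

   #define SOLVER_PHASE 42  /* user-chosen event id for this region */

   /* Hypothetical sketch: bracket a compute phase with entry/exit
    * records.  Verify TRACEEVENT and its arguments against the
    * MPICL documentation before use. */
   void solver_step(void)
   {
       MPI_Pcontrol(TRACEEVENT, "entry", SOLVER_PHASE, 0, NULL);
       /* ... computation or hand-instrumented message passing ... */
       MPI_Pcontrol(TRACEEVENT, "exit", SOLVER_PHASE, 0, NULL);
   }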
MPICL was developed by
P. H. Worley
at Oak Ridge National Laboratory and has been used by the author
for a variety of performance evaluation studies
since its initial incarnation in 1997.
However, MPICL IS RESEARCH SOFTWARE WITHOUT ANY GUARANTEE OR WARRANTY
THAT IT IS GOOD FOR ANYTHING OR SAFE TO USE.
It is being made available at this time to coordinate with the release
of the new MPI-aware version of ParaGraph.
Please notify and acknowledge the author in any research or
publications utilizing MPICL, or any part of the code.
Suggestions, bug reports, and (especially) new ports are also appreciated.
At the current time, the MPI instrumentation layer has been used
successfully on the Cray X1, HP/Compaq AlphaServer SC,
HP/Convex Exemplar, IBM SP, IBM p690 cluster, Intel Paragon,
SGI Altix, SGI/Cray Origin, SGI/Cray T3E, and on a network of
workstations using MPICH and LAM. MPICL is easily ported to any
standards-compliant version of MPI. The machine-dependent code
primarily deals with the Fortran/C interface and, possibly, with the
use of a higher-resolution system clock.
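As an illustration of the latter, a port wanting better resolution
than the default typically supplies a small clock routine along these
lines (a generic sketch assuming a POSIX clock_gettime; it is not the
clock code MPICL ships with):

   #include <time.h>

   /* Generic sketch of the kind of machine-dependent clock routine a
    * port might supply; MPICL's own clock code differs per platform. */
   double instrumentation_clock(void)
   {
       struct timespec ts;
       clock_gettime(CLOCK_MONOTONIC, &ts);   /* POSIX high-res clock */
       return (double)ts.tv_sec + 1.0e-9 * (double)ts.tv_nsec;
   }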
The underlying PICL library works on a much larger set of machines
and communication libraries, most of which are obsolete:
   MPL     (IBM SP-1, SP-2)
   MPI     (IBM SP-2, Intel Paragon, SGI Origin, Cray T3D, Cray T3E,
            network of workstations using MPICH)
   PVM 3.3 (SUN and RS6000 workstations, Cray T3D, Cray T3E)
Note that the non-MPICL aspects of PICL are not exercised very often,
and some of these implementations may no longer work perfectly.
The following files are source code and documentation for MPICL.
The documentation restricts itself to the MPICL instrumentation layer.
For information on the PICL message-passing layer, follow the link at
the bottom of the page.