
Parallel Programming with HDF5


This tutorial assumes that you are somewhat familiar with parallel programming with MPI (Message Passing Interface). Some of the terms that you must understand are:

MPI Communicator:
Allows a group of processes to communicate with each other.

Following are the MPI routines for initializing MPI and the communicator and finalizing a session with MPI:

    C               Fortran          Description
    --              -------          -----------
    MPI_Init        MPI_INIT         Initialize MPI (usually with 
                                     MPI_COMM_WORLD as the communicator)

    MPI_Comm_size   MPI_COMM_SIZE    Return the number of processes 
                                     in the communicator

    MPI_Comm_rank   MPI_COMM_RANK    Return this process's rank (process 
                                     ID) within the communicator 
                                     (from 0 to n-1)

    MPI_Finalize    MPI_FINALIZE     Terminate the MPI environment

Collective:
MPI defines this to mean "all processes of the communicator must participate in the right order."
Parallel HDF5 opens a file with a communicator and returns a file handle to be used for subsequent access to the file. All processes in the communicator are required to participate in collective Parallel HDF5 calls. Different files can be opened using different communicators.

Examples of what you can do with the collective Parallel HDF5 API, once a file has been opened by the processes of a communicator, include creating, opening, and closing objects such as datasets and groups, and writing to or reading from datasets.

Please refer to the Supported Configuration Features document for the current release of HDF5 for an up-to-date list of the platforms on which Parallel HDF5 is supported:
  http://hdf.ncsa.uiuc.edu/HDF5/release/SuppConfigFeats.html


NCSA
The National Center for Supercomputing Applications

University of Illinois at Urbana-Champaign

hdfhelp@ncsa.uiuc.edu

Last Modified: November 16, 2001