Lockless Inc


MPI_Scan - Reduce buffers from the processes of a communicator and store the cumulative results in each process


#include <mpi.h>
int MPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm);

#include <pmpi.h>
int PMPI_Scan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm);

#include <mpi.h>
void MPI::Intracomm::Scan(const void *sendbuf, void *recvbuf, int count, const MPI::Datatype &datatype, const Op &op) const;

INCLUDE 'mpif.h'
MPI_SCAN(sendbuf, recvbuf, count, datatype, op, comm, ierr)
    <type> sendbuf(*), recvbuf(*)
    INTEGER count, datatype, op, comm, ierr


sendbuf - send buffer (array)

count - count of elements in buffers (integer)

datatype - type of elements in buffers (handle)

op - reduction operation to perform (handle)

comm - communicator for messages (handle)


recvbuf - receive buffer (array)


The MPI_Scan() function performs a cumulative (inclusive prefix) reduction over a set of buffers in a communicator, comm . The cumulative result of the reduction is stored in the recvbuf buffer on each process. In effect, each rank performs a reduction over its own buffer and the buffers of all lower ranks in the communicator, and stores the result. Rank zero simply copies the data from its send buffer to its receive buffer.

The scan is performed over count elements of type datatype in an array specified by sendbuf .

The operation to perform, op , must be valid for the chosen datatype. The operation is assumed to be associative, but it is not required to be commutative. If op is a user-defined operator, the corresponding user function will be called. It is unspecified on which ranks the function executes, so all processes must pass the same operation.

If sendbuf is MPI_IN_PLACE then the input data is taken from recvbuf , which is overwritten with the result. This may result in fewer internal copy operations.

Note that every process must pass the same count of data. If that is not the case, the messages will be silently truncated, or the operation function may be called on uninitialized data. Note also that, for performance reasons, this MPI library does not check that the sent and received datatypes match.

The communicator must be a valid one (not MPI_COMM_NULL). PMPI_Scan() is the profiling version of this function.


All MPI routines except MPI_Wtime and MPI_Wtick return an error code. The current MPI error handler is invoked if the return value is not MPI_SUCCESS. The default error handler aborts, but this may be changed by using the MPI_Errhandler_set() function. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned instead. Note that MPI does not guarantee that an MPI program can continue past an error. In this implementation, all errors except MPI_ERR_INTERN and MPI_ERR_OTHER should be recoverable.

In C, the error code is passed as the return value. In FORTRAN, all functions have a parameter, ierr , which returns the error code. MPI C++ functions do not directly return an error code. However, C++ users may want to use the MPI::ERRORS_THROW_EXCEPTIONS handler, which throws an MPI::Exception with the corresponding error code. To prevent exceptions from being raised from within C and Fortran code, those callers will see all error return values as MPI_ERR_PENDING when this handler is chosen. In this implementation, call MPI::throw_exception() to throw the correct exception if this occurs.

MPI_SUCCESS - No error;

MPI_ERR_PENDING - Pending exception;

MPI_ERR_COMM - Invalid communicator;

MPI_ERR_COUNT - Invalid element count;

MPI_ERR_BUFFER - Invalid buffer;

MPI_ERR_TYPE - Invalid data type;

MPI_ERR_OP - Invalid operation;

MPI_ERR_INTERN - Out of memory. This may be fatal.


MPI_Op_create (3) MPI_Op_free (3) MPI_Reduce (3) MPI_Allreduce (3) MPI_Reduce_scatter (3) MPI_Exscan (3)
