Lockless Inc


MPI_Ssend_init - Initialize a persistent synchronous send request


#include <mpi.h>
int MPI_Ssend_init(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *rq);

#include <pmpi.h>
int PMPI_Ssend_init(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *rq);

#include <mpi.h>
MPI::Prequest MPI::Comm::Ssend_init(const void *buf, int count, const MPI::Datatype &datatype, int dest, int tag) const;

INCLUDE 'mpif.h'
MPI_SSEND_INIT(buf, count, datatype, dest, tag, comm, rq, ierr)
    <type> buf(*)
    INTEGER count, datatype, dest, tag, comm, rq, ierr


buf - buffer (array)

count - count of elements in buffer (integer)

datatype - type of elements in buffer (handle)

dest - destination rank (integer)

tag - communication tag (integer)

comm - communicator for message (handle)


rq - request handle for the persistent request (handle)


The MPI_Ssend_init() function creates a persistent request that will perform a non-blocking synchronous send when started with MPI_Start() or MPI_Startall(). When started, a synchronous message is sent to the process with rank dest in the communicator comm, using tag as the message envelope. The synchronous send is only marked complete once a matching receive has been posted at the destination. This differs from requests created by the MPI_Send_init() function, which may complete early if the message is stored in an internal buffer. The send consists of count elements of type datatype in the array specified by buf. This buffer may not be altered until the request rq is known to have completed via a function such as MPI_Wait().

When the request is complete, it may be restarted with another call to MPI_Start() or MPI_Startall(). When the persistent request is no longer needed, it should be freed with MPI_Request_free().
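The create/start/wait/free lifecycle described above can be sketched as follows. This is a minimal two-rank sketch, not part of the reference itself; the tag value 99 and the loop count are arbitrary, and rank 1 is assumed to post the matching receives that allow the synchronous sends to complete.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf[4] = {1, 2, 3, 4};
    MPI_Request rq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
    {
        /* Create the persistent synchronous send request once... */
        MPI_Ssend_init(buf, 4, MPI_INT, 1, 99, MPI_COMM_WORLD, &rq);

        /* ...then start and complete it as many times as needed. */
        for (int i = 0; i < 10; i++)
        {
            MPI_Start(&rq);
            /* buf must not be modified until the request completes */
            MPI_Wait(&rq, MPI_STATUS_IGNORE);
        }

        /* Free the persistent request when it is no longer needed */
        MPI_Request_free(&rq);
    }
    else if (rank == 1)
    {
        /* Post the matching receives that let the synchronous
         * sends on rank 0 complete */
        for (int i = 0; i < 10; i++)
            MPI_Recv(buf, 4, MPI_INT, 0, 99, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

Run with at least two processes, e.g. `mpirun -np 2 ./a.out`. Creating the request once and restarting it in the loop avoids the per-iteration setup cost that repeated MPI_Issend() calls would incur.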

If the destination is MPI_PROC_NULL then no send is ever posted, and MPI_SUCCESS is always returned.

The C++ version of this function passes the resulting request as the return value.

The communicator must be a valid one (not MPI_COMM_NULL) and the request must not be MPI_REQUEST_NULL. PMPI_Ssend_init() is the profiling version of this function.


All MPI routines except MPI_Wtime and MPI_Wtick return an error code. The current MPI error handler is invoked if the return value is not MPI_SUCCESS. The default error handler aborts the program, but this may be changed by using the MPI_Errhandler_set() function. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned instead. Note that MPI does not guarantee that an MPI program can continue past an error. In this implementation, all errors except MPI_ERR_INTERN and MPI_ERR_OTHER should be recoverable.

In C, the error code is passed as the return value. In FORTRAN, all functions have a parameter ierr which returns the error code. MPI C++ functions do not directly return an error code. However, C++ users may wish to use the MPI::ERRORS_THROW_EXCEPTIONS handler, which throws an MPI::Exception with the corresponding error code. To prevent exceptions from being raised from within C and Fortran code, those callers will see all error return values as MPI_ERR_PENDING when this handler is chosen. In this implementation, call MPI::throw_exception() to throw the correct exception if this occurs.
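As an illustration of the MPI_ERRORS_RETURN handler described above, a C program can install it and then inspect return values itself. This is a sketch only: the deliberately invalid count of -1 is assumed to provoke an MPI_ERR_COUNT return rather than an abort once the handler is installed, and MPI_Errhandler_set() is used because it is the function this page documents (newer MPI versions prefer MPI_Comm_set_errhandler()).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf[4] = {0};
    MPI_Request rq;

    MPI_Init(&argc, &argv);

    /* Return error codes to the caller instead of aborting */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* A deliberately invalid count should now yield an error code */
    int err = MPI_Ssend_init(buf, -1, MPI_INT, 0, 99, MPI_COMM_WORLD, &rq);
    if (err != MPI_SUCCESS)
    {
        char msg[MPI_MAX_ERROR_STRING];
        int len;

        /* Translate the error code into a human-readable message */
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "MPI_Ssend_init failed: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```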

MPI_SUCCESS - No error;

MPI_ERR_PENDING - Pending exception;

MPI_ERR_COMM - Invalid communicator;

MPI_ERR_ARG - Invalid request pointer;

MPI_ERR_COUNT - Invalid element count;

MPI_ERR_BUFFER - Invalid buffer;

MPI_ERR_TAG - Invalid tag;

MPI_ERR_TYPE - Invalid data type;

MPI_ERR_RANK - Invalid destination;

MPI_ERR_REQUEST - Invalid request;

MPI_ERR_INTERN - Out of memory error.


MPI_Ssend (3) MPI_Issend (3) MPI_Send_init (3) MPI_Bsend_init (3) MPI_Rsend_init (3) MPI_Start (3) MPI_Startall (3) MPI_Wait (3) MPI_Request_free (3)
