Lockless Inc


MPI_Alltoall - Send buffers from all processes to all other processes in a communicator


#include <mpi.h>
int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm);

#include <pmpi.h>
int PMPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm);

#include <mpi.h>
void MPI::Comm::Alltoall(const void *sendbuf, int sendcount, const MPI::Datatype &sendtype, void *recvbuf, int recvcount, const MPI::Datatype &recvtype) const;

INCLUDE 'mpif.h'
MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierr)
    <type> sendbuf(*), recvbuf(*)
    INTEGER sendcount, sendtype, recvcount, recvtype, comm, ierr


sendbuf - send buffer (array)

sendcount - count of elements to send to each rank (integer)

sendtype - type of elements in send buffer (handle)

recvcount - count of elements to receive per rank (integer)

recvtype - type of elements in receive buffer (handle)

comm - communicator for messages (handle)


recvbuf - receive buffer (array)


The MPI_Alltoall() function sends messages from every process to all other processes in the communicator comm. The messages are concatenated into the receive buffer, recvbuf, in rank order. This differs from the MPI_Allgather() function in that each process sends a different message to every rank instead of the same one. Thus the send buffer is a vector describing the differing messages to send, and on completion the receive buffer contains a vector of messages, one from every rank.

The sends consist of sendcount elements of type sendtype in an array specified by sendbuf. Similarly, the receives consist of recvcount elements of type recvtype in an array specified by recvbuf. Each buffer must hold that count multiplied by the number of ranks in the communicator, since this function exchanges a different message with each rank.
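The buffer layout above can be sketched with a small illustrative program (not part of this library's distribution; variable names and the per-element values are purely for demonstration). With sendcount = 1, each buffer holds one element per rank, and element i of sendbuf goes to rank i:

```c
/* Sketch: each rank sends one int to every rank, including itself.
 * Run with, e.g., mpirun -np 4 ./a.out
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* sendcount and recvcount are 1, so each buffer needs
     * 1 * size elements. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));

    /* sendbuf[i] is the message destined for rank i. */
    for (int i = 0; i < size; i++) sendbuf[i] = rank * 100 + i;

    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* recvbuf[i] now holds the element that rank i sent to us,
     * i.e. i * 100 + rank, in rank order. */
    printf("rank %d received:", rank);
    for (int i = 0; i < size; i++) printf(" %d", recvbuf[i]);
    printf("\n");

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

Viewing the buffers as size-by-size matrices across all ranks, the call performs a transpose: row r of the send matrix is scattered so that rank i's recvbuf holds column i.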

Specifying MPI_IN_PLACE for a buffer is not supported in this function.

Each process should send the same amount of data. If that is not the case, the messages will be silently truncated to the size expected. Note that for performance reasons this MPI library does not check that the sent and received datatypes match.

The communicator must be a valid one (not MPI_COMM_NULL). PMPI_Alltoall() is the profiling version of this function.


All MPI routines except for MPI_Wtime and MPI_Wtick return an error code. The current MPI error handler is invoked if the return value is not MPI_SUCCESS. The default error handler aborts, but this may be changed by using the MPI_Errhandler_set() function. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned instead. Note that MPI does not guarantee that an MPI program can continue past an error. In this implementation, all errors except MPI_ERR_INTERN or MPI_ERR_OTHER should always be recoverable.

In C, the error code is passed as the return value. In FORTRAN, all functions have a parameter ierr which returns the error code. MPI C++ functions do not directly return an error code. However, C++ users may want to use the MPI::ERRORS_THROW_EXCEPTIONS handler, which throws an MPI::Exception with the corresponding error code. To prevent exceptions from being raised from within C and Fortran code, those callers will see all error return values as MPI_ERR_PENDING when this handler is chosen. In this implementation, call MPI::throw_exception() to throw the correct exception if this occurs.
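A minimal sketch of the MPI_ERRORS_RETURN handler described above (illustrative only; the call below deliberately passes an invalid negative count so the error path is exercised):

```c
/* Sketch: install MPI_ERRORS_RETURN on the communicator so a failed
 * MPI_Alltoall() returns an error code instead of aborting. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int sendbuf[1] = {0}, recvbuf[1] = {0};

    MPI_Init(&argc, &argv);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* A negative count is invalid, so this should fail with
     * MPI_ERR_COUNT rather than aborting the program. */
    int err = MPI_Alltoall(sendbuf, -1, MPI_INT,
                           recvbuf, -1, MPI_INT, MPI_COMM_WORLD);
    if (err != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(err, msg, &len);
        fprintf(stderr, "MPI_Alltoall failed: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```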

MPI_SUCCESS - No error;

MPI_ERR_PENDING - Pending exception;

MPI_ERR_COMM - Invalid communicator;

MPI_ERR_COUNT - Invalid element count;

MPI_ERR_BUFFER - Invalid buffer;

MPI_ERR_TYPE - Invalid data type;

MPI_ERR_INTERN - Out of memory. This may be fatal.


MPI_Send (3) MPI_Recv (3) MPI_Bcast (3) MPI_Allgather (3) MPI_Alltoallv (3) MPI_Alltoallw (3)
