Lockless Inc


MPI_Graph_create - Creates a graph communicator


#include <mpi.h> int MPI_Graph_create(MPI_Comm comm, int nnodes, int *index, int *edges, int reorder, MPI_Comm *out);

#include <pmpi.h> int PMPI_Graph_create(MPI_Comm comm, int nnodes, int *index, int *edges, int reorder, MPI_Comm *out);

#include <mpi.h> MPI::Graphcomm MPI::Intracomm::Create_graph(int nnodes, const int *index, const int *edges, bool reorder) const;

INCLUDE 'mpif.h' MPI_GRAPH_CREATE(comm, nnodes, index, edges, reorder, out, ierr) INTEGER comm, nnodes, index(*), edges(*), out, ierr LOGICAL reorder


comm - communicator (handle)

nnodes - number of nodes (integer)

index - array of nnodes integers describing the cumulative node degrees (array)

edges - array of integers describing graph edges (array)

reorder - reorder ranks for optimal placement (boolean)


out - the new graph communicator (handle).


The MPI_Graph_create() function creates a new graph communicator out, using the index and edges arrays to describe the topology of the graph.

The size of the index array is given by nnodes. The index array contains the cumulative number of edges attached to each node. The edges array contains the ranks connected to each node, sorted in node order. The index array thus allows one to find which set of edges connect to a given node. A node n (for n > 0) is connected to the nodes listed from edges[index[n-1]] to edges[index[n]-1]; node 0 is connected to the nodes listed from edges[0] to edges[index[0]-1].

If reorder is true, then the output ranks will be arranged in an optimal order. Otherwise, the same order as in comm is chosen.

If nnodes is smaller than the size of comm, then some of the processes will receive an output of MPI_COMM_NULL. The new communicator should eventually be freed with MPI_Comm_free.
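Putting the pieces together, a minimal usage sketch might look like the following. It assumes an MPI job launched with at least four processes and reuses the four-node ring topology; processes beyond the first four receive MPI_COMM_NULL and skip the new communicator.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* 4-node ring 0-1-2-3-0 (assumes the job has at least 4 processes) */
    int index[] = {2, 4, 6, 8};
    int edges[] = {1, 3, 0, 2, 1, 3, 0, 2};
    MPI_Comm graph;

    /* reorder = 1 lets the implementation permute ranks for placement */
    MPI_Graph_create(MPI_COMM_WORLD, 4, index, edges, 1, &graph);

    if (graph != MPI_COMM_NULL) {
        int rank;
        MPI_Comm_rank(graph, &rank);
        printf("rank %d is part of the graph\n", rank);
        MPI_Comm_free(&graph);
    }

    MPI_Finalize();
    return 0;
}
```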

Note that the input communicator must be valid (not MPI_COMM_NULL). PMPI_Graph_create() is the profiling version of this function.


All MPI routines except for MPI_Wtime and MPI_Wtick return an error code. The current MPI error handler is invoked if the return value is not MPI_SUCCESS. The default error handler aborts, but this may be changed by using the MPI_Errhandler_set() function. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned instead. Note that MPI does not guarantee that an MPI program can continue past an error. In this implementation, all errors except MPI_ERR_INTERN or MPI_ERR_OTHER should always be recoverable.

In C, the error code is passed as the return value. In FORTRAN, all functions have a parameter ierr which returns the error code. MPI C++ functions do not directly return an error code. However, C++ users may want to use the MPI::ERRORS_THROW_EXCEPTIONS handler. This will throw an MPI::Exception with the corresponding error code. To prevent exceptions from being raised from within C and Fortran code, those callers will see all error return values as MPI_ERR_PENDING when this handler is chosen. In this implementation, call MPI::throw_exception() to throw the correct exception if this occurs.

MPI_SUCCESS - No error;

MPI_ERR_PENDING - Pending exception;

MPI_ERR_COMM - Invalid communicator;

MPI_ERR_DIMS - Invalid dimensions;

MPI_ERR_ARG - Invalid output pointer, or graph topology information;

MPI_ERR_INTERN - Out of memory. This may be fatal.


MPI_Cart_create(3) MPI_Graphdims_get(3) MPI_Graph_get(3) MPI_Graph_neighbors_count(3) MPI_Graph_neighbors(3) MPI_Graph_map(3) MPI_Comm_free(3)
