## NAME

MPI_Graph_map - Returns an optimal rank for the given process in a graph

## SYNOPSIS

```
#include <mpi.h>
int MPI_Graph_map(MPI_Comm comm, int nnodes, int *index, int *edges, int *newrank);

#include <pmpi.h>
int PMPI_Graph_map(MPI_Comm comm, int nnodes, int *index, int *edges, int *newrank);

#include <mpi.h>
int MPI::Graphcomm::Map(int nnodes, const int *index, const int *edges) const;

INCLUDE 'mpif.h'
MPI_GRAPH_MAP(comm, nnodes, index, edges, newrank, ierr)
INTEGER comm, nnodes, index(*), edges(*), newrank, ierr
```

## INPUT PARAMETERS

comm - communicator (handle)

nnodes - number of nodes in the graph (integer)

index - array of nnodes integers describing the cumulative degrees of the nodes (integer array)

edges - array of integers describing the graph edges (integer array)

## OUTPUT PARAMETER

newrank - optimal rank for the calling process (integer)

## DESCRIPTION

The MPI_Graph_map() function obtains the optimal placement for the calling process within a graph topology. The topology of the graph is described by the index and edges arrays.

The size of the index array is given by nnodes. The index array contains the cumulative number of edges attached to each node, and the edges array lists the ranks connected to each node, in node order. The index array thus allows one to find the set of edges that connect to a given node: a node n is connected to the nodes listed from edges[index[n-1]] to edges[index[n]-1], where index[-1] is taken to be 0, so node 0's neighbors begin at edges[0].

If nnodes is smaller than the size of the communicator comm, then some processes will receive MPI_UNDEFINED as output in newrank. If the graph contains more nodes than there are ranks in the communicator, then the MPI_ERR_ARG error will be returned.

In C++, newrank is passed as the return value.

The communicator must be a valid one (not MPI_COMM_NULL). PMPI_Graph_map() is the profiling version of this function.

## ERRORS

All MPI routines except for MPI_Wtime and MPI_Wtick return an error code. The current MPI error handler is invoked if the return value is not MPI_SUCCESS. The default error handler aborts, but this may be changed by using the MPI_Errhandler_set() function. The predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned instead. Note that MPI does not guarantee that an MPI program can continue past an error. In this implementation, all errors except MPI_ERR_INTERN or MPI_ERR_OTHER should always be recoverable.

In C, the error code is passed as the return value. In FORTRAN, all functions have a parameter ierr which returns the error code. MPI C++ functions do not directly return an error code. However, C++ users may want to use the MPI::ERRORS_THROW_EXCEPTIONS handler, which throws an MPI::Exception with the corresponding error code. To prevent exceptions from being raised from within C and Fortran code, those languages will see all error return values as MPI_ERR_PENDING when this handler is chosen. In this implementation, call MPI::throw_exception() to throw the correct exception if this occurs.

MPI_SUCCESS - No error;

MPI_ERR_PENDING - Pending exception;

MPI_ERR_COMM - Invalid communicator;

MPI_ERR_DIMS - Invalid graph topology information;

MPI_ERR_ARG - Invalid array pointers, or graph is too large.