Comments on the MPI standard should be mailed to mpi-core@mpi-forum.org. Page and line numbers refer to the official MPI-2 document, not the HPCA issue or the 2nd edition of the Complete Reference.
This needs more discussion. The problem is that some of the C++
datatypes have no easily defined counterparts in C or Fortran. The
minimum fix is really a clarification that says that there is no
interlanguage support for the C++ complex types.
Proposed text
Page 276, after line 4, add:

Advice to users.
Most, but not all, datatypes in each language have corresponding datatypes in the other languages. For example, there is no C or Fortran counterpart to MPI::BOOL or to MPI::COMPLEX, MPI::DOUBLE_COMPLEX, or MPI::LONG_DOUBLE_COMPLEX. End of advice to users.

Note: extending the C++ datatypes to C and Fortran needs to include MPI::BOOL as well as the complex types, and should define what the equivalent types are in C and Fortran. The real issue here is MPI::F_COMPLEX and completing the list of such routines.
Page 165, lines 25-38 read
int MPI_Alltoallw(void *sendbuf, int sendcounts[], int sdispls[], MPI_Datatype sendtypes[], void *recvbuf, int recvcounts[], int rdispls[], MPI_Datatype recvtypes[], MPI_Comm comm)

MPI_ALLTOALLW(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPES(*), RECVCOUNTS(*), RDISPLS(*), RECVTYPES(*), COMM, IERROR

void MPI::Comm::Alltoallw(const void* sendbuf, const int sendcounts[], const int sdispls[], const MPI::Datatype sendtypes[], void* recvbuf, const int recvcounts[], const int rdispls[], const MPI::Datatype recvtypes[]) const = 0

but should read
int MPI_Alltoallw(void *sendbuf, int sendcounts[], MPI_Aint sdispls[], MPI_Datatype sendtypes[], void *recvbuf, int recvcounts[], MPI_Aint rdispls[], MPI_Datatype recvtypes[], MPI_Comm comm)

MPI_ALLTOALLW(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SENDTYPES(*), RECVCOUNTS(*), RECVTYPES(*), COMM, IERROR
INTEGER (KIND=MPI_ADDRESS_KIND) SDISPLS(*), RDISPLS(*)

void MPI::Comm::Alltoallw(const void* sendbuf, const int sendcounts[], const MPI::Aint sdispls[], const MPI::Datatype sendtypes[], void* recvbuf, const int recvcounts[], const MPI::Aint rdispls[], const MPI::Datatype recvtypes[]) const = 0

These modifications also need to be made in Appendix A.6.4, A.7.4, and A.8.4:
- page 312, line 34 (C)
- page 322, line 41 (Fortran)
- page 335, line 16 (C++)
Page 164, lines 16-30 should read:

7.3.5 Generalized All-to-all Functions
One of the basic data movement operations needed in parallel signal processing is the 2-D matrix transpose. This operation has motivated two generalizations of the MPI_ALLTOALLV function. These new collective operations are MPI_ALLTOALLW and MPI_ALLTOALLX; the ``W'' indicates that it is an extension to MPI_ALLTOALLV, and the ``X'' indicates that it is an extension to MPI_ALLTOALLW. MPI_ALLTOALLX is the most general form of all-to-all. Like MPI_TYPE_CREATE_STRUCT, the most general type constructor, MPI_ALLTOALLW and MPI_ALLTOALLX allow separate specification of count, displacement, and datatype. In addition, to allow maximum flexibility, the displacement of blocks within the send and receive buffers is specified in bytes. In MPI_ALLTOALLW these displacements are specified as integer arguments; in MPI_ALLTOALLX they are specified as address-sized integers.

Rationale. The MPI_ALLTOALLW function generalizes several MPI functions by careful selection of the input arguments. For example, by making all but one process have sendcounts[i] = 0, it achieves an MPI_SCATTERW function. MPI_ALLTOALLX allows the use of MPI_BOTTOM as the buffer argument, defining the different buffer locations via the displacement arguments rather than only via different datatype arguments. (End of rationale.)
Add to page 165, after line 38:
MPI_ALLTOALLX(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)
- [ IN sendbuf]
- starting address of send buffer (choice)
- [ IN sendcounts]
- integer array equal to the group size specifying the number of elements to send to each processor (array of integers)
- [ IN sdispls]
- integer array (of length group size). Entry j specifies the displacement in bytes (relative to sendbuf) from which to take the outgoing data destined for process j (array of integers)
- [ IN sendtypes]
- array of datatypes (of length group size). Entry j specifies the type of data to send to process j (array of handles)
- [ OUT recvbuf]
- address of receive buffer (choice)
- [ IN recvcounts]
- integer array equal to the group size specifying the number of elements that can be received from each processor (array of integers)
- [ IN rdispls]
- integer array (of length group size). Entry i specifies the displacement in bytes (relative to recvbuf) at which to place the incoming data from process i (array of integers)
- [ IN recvtypes]
- array of datatypes (of length group size). Entry i specifies the type of data received from process i (array of handles)
- [ IN comm]
- communicator (handle)
int MPI_Alltoallx(void *sendbuf, int sendcounts[], MPI_Aint sdispls[], MPI_Datatype sendtypes[], void *recvbuf, int recvcounts[], MPI_Aint rdispls[], MPI_Datatype recvtypes[], MPI_Comm comm)

MPI_ALLTOALLX(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SENDTYPES(*), RECVCOUNTS(*), RECVTYPES(*), COMM, IERROR
INTEGER (KIND=MPI_ADDRESS_KIND) SDISPLS(*), RDISPLS(*)

void MPI::Comm::Alltoallx(const void* sendbuf, const int sendcounts[], const MPI::Aint sdispls[], const MPI::Datatype sendtypes[], void* recvbuf, const int recvcounts[], const MPI::Aint rdispls[], const MPI::Datatype recvtypes[]) const = 0

Add to page 312, after line 37:

int MPI_Alltoallx(void *sendbuf, int sendcounts[], MPI_Aint sdispls[], MPI_Datatype sendtypes[], void *recvbuf, int recvcounts[], MPI_Aint rdispls[], MPI_Datatype recvtypes[], MPI_Comm comm)

Add to page 322, after line 45:

MPI_ALLTOALLX(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNTS(*), SENDTYPES(*), RECVCOUNTS(*), RECVTYPES(*), COMM, IERROR
INTEGER (KIND=MPI_ADDRESS_KIND) SDISPLS(*), RDISPLS(*)

Add to page 335, after line 19:

void MPI::Comm::Alltoallx(const void* sendbuf, const int sendcounts[], const MPI::Aint sdispls[], const MPI::Datatype sendtypes[], void* recvbuf, const int recvcounts[], const MPI::Aint rdispls[], const MPI::Datatype recvtypes[]) const = 0
Proposed change
Page 79, line 11 is

MPI_UNPACK_EXTERNAL(datarep, inbuf, incount, datatype, outbuf, outsize, position)

but should be

MPI_UNPACK_EXTERNAL(datarep, inbuf, insize, position, outbuf, outcount, datatype)
Page 337, lines 31-32 read

bool MPI::Win::Get_attr(const MPI::Win& win, int win_keyval, void* attribute_val) const

but should read

bool MPI::Win::Get_attr(int win_keyval, void* attribute_val) const
Note that this error does not appear in the MPI-2 standard (see page 40).
MPI_Scan in the MPI 1.1 Standard on page 128, line 11, has an extraneous root argument. That line should be

MPI_Scan( a, answer, 1, sspair, myOp, comm );

This could be added to section 3.2.10 (Minor Corrections) in the MPI-2 document.
Page 179, lines 4-5 change
Thus, the names of MPI_COMM_WORLD, MPI_COMM_SELF, and MPI_COMM_PARENT will have the default of MPI_COMM_WORLD, MPI_COMM_SELF, and MPI_COMM_PARENT.

to
Thus, the names of MPI_COMM_WORLD, MPI_COMM_SELF, and the communicator returned by MPI_COMM_GET_PARENT (if not MPI_COMM_NULL) will have the default of MPI_COMM_WORLD, MPI_COMM_SELF, and MPI_COMM_PARENT.

Page 94, lines 3-5, change
* The manager is represented as the process with rank 0 in (the remote
* group of) MPI_COMM_PARENT. If the workers need to communicate among
* themselves, they can use MPI_COMM_WORLD.

to
* The manager is represented as the process with rank 0 in (the remote
* group of) the parent communicator. If the workers need to communicate among
* themselves, they can use MPI_COMM_WORLD.
This error is in MPI The Complete Reference. Is it in the standard as well?
MPI_IN_PLACE in the description of MPI_ALLGATHER and MPI_ALLGATHERV
This item needs a proposed erratum.
MPI-2, section 8.2, page 172 mentions MPI_REQUEST_CANCEL; this should be MPI_CANCEL.
Change

\mpifbind{MPI\_FILE\_GET\_VIEW(FH, DISP, ETYPE, FILETYPE, DATAREP, IERROR)\fargs INTEGER FH, ETYPE, FILETYPE, IERROR \\ CHARACTER*(*) DATAREP, INTEGER(KIND=MPI\_OFFSET\_KIND) DISP}

to

\mpifbind{MPI\_FILE\_GET\_VIEW(FH, DISP, ETYPE, FILETYPE, DATAREP, IERROR)\fargs INTEGER FH, ETYPE, FILETYPE, IERROR \\ CHARACTER*(*) DATAREP\\ INTEGER(KIND=MPI\_OFFSET\_KIND) DISP}

in io-2.tex. See page 223, line 19. (Replace the comma after the declaration of DATAREP with a line break.)
Change

\mpifbind{MPI\_TYPE\_CREATE\_HVECTOR(COUNT, BLOCKLENGTH, STIDE, OLDTYPE, NEWTYPE, IERROR)\fargs INTEGER COUNT, BLOCKLENGTH, OLDTYPE, NEWTYPE, IERROR\\INTEGER(KIND=MPI\_ADDRESS\_KIND) STRIDE}

to

\mpifbind{MPI\_TYPE\_CREATE\_HVECTOR(COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE, IERROR)\fargs INTEGER COUNT, BLOCKLENGTH, OLDTYPE, NEWTYPE, IERROR\\INTEGER(KIND=MPI\_ADDRESS\_KIND) STRIDE}

in misc-2.tex (see page 66, line 26). (Replace STIDE with STRIDE.)
The communicator argument is missing from the MPI communication calls.
The variable base should be declared as MPI_Aint, not int.
This section contains the discussion of these ambiguities; in cases where a consensus emerged, text has been proposed.
Does MPI_ALLOC_MEM return a null pointer when a request for memory cannot be satisfied but a request for a smaller amount may work? The question is really if the user must set MPI_ERRORS_RETURN on MPI_COMM_WORLD before calling MPI_ALLOC_MEM if the user wants to handle "not enough memory for your request" errors.
Some names in the MPI namespace in the C++ binding can conflict with C preprocessor names in standard include files. An example is MPI::SEEK_SET, which conflicts with SEEK_SET in stdio.h.
Page 163, line 22 reads
Within each group, all processes provide the same recvcounts argument, and the sum of the recvcounts entries should be the same for the two groups.
but should read
Within each group, all processes provide the same recvcounts argument, and the recvcounts entries and datatype should specify the same type signature for the two groups.
Page 114, after line 4 (and after the lines added about MPI_PROC_NULL), add
After an RMA operation with rank MPI_PROC_NULL, it is still necessary to finish the RMA epoch with the synchronization method that started the epoch.
In addition to changes for MPI::Datatype, add these changes
In the second ballot, we voted to remove const from MPI::Datatype on pages 343, 344, and 345.
The initial mail contains some comments that may be appropriate clarifications that do not change the standard.
The description of the send and receive counts arguments could be interpreted as allowing the receive counts to be at least as large as required by the send count, rather than exactly matching the count as defined by the type signatures.
Does MPI_File_get_view return copies of the datatypes for the filetype and etype?
The question really is "can (and must) the user free those datatypes"? For other MPI routines, the answer is always yes, but here the original datatype may be a predefined type, which may not be freed.
Page 166, after line 47, add:

Advice to users:
No in-place version is specified for MPI_EXSCAN because it is not clear what this would mean for the process with rank zero.
End of advice to users.
The text added in MPI 1.1 on the error return for MPI_Waitall etc. is written as if the only error handler is MPI_ERRORS_RETURN.
No changes needed.
Blocklengths of zero are allowed. Do we need to add a statement to this effect?
The "in place" option for intracommunicators is specified by passing MPI_IN_PLACE in the sendbuf argument. In this case, on each process, the input data is taken from recvbuf. Process i gets the ith segment of the result, and it is stored at the location corresponding to segment i in recvbuf.
The ISO/IEC Standard for C has added a number of new required and optional datatypes, such as int32_t and _Bool.
In brief, in some cases when using dynamic processes, an application may need to know when a process, recently disconnected with MPI_Comm_disconnect, has exited. There is no easy way within MPI to do this since MPI_Comm_disconnect doesn't wait for the process to exit (and it shouldn't, of course).
MPI_Reduce_scatter

This proposes an extension to MPI to add a constant block-size version of MPI_Reduce_scatter, much as MPI-2 added MPI_Type_create_indexed_block. This would allow implementations to optimize this routine. Several uses of MPI_Reduce_scatter with constant block sizes have recently been discussed at the EuroPVM/MPI meetings.
The example appears to make use of data before the necessary MPI_Win_complete is called to end the exposure epoch.
The use of integers (and even address-sized integers) in MPI routines such as the point-to-point routines and datatype creation can limit, at least with the natural choice of arguments, the size of message that may be sent. Some MPI users now want to send messages larger than 2 GB; the use of MPI datatypes to describe file layouts can also run into trouble on 32-bit systems.
This proposes a form of non-blocking connect and accept.