(j3.2006) (SC22WG5.3866) MPI non-blocking transfers

Van Snyder Van.Snyder
Wed Jan 21 14:28:07 EST 2009

>     1) Most people seem to agree that the semantics of the buffers used
> for MPI non-blocking transfers and pending input/output storage
> affectors are essentially identical, with READ, WRITE and WAIT
> corresponding to MPI_Irecv, MPI_Isend and MPI_Wait (and variations).
> Do you agree with this and, if not, why not?

Yes, but we already have READ, WRITE and WAIT.
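
For readers outside the committee, the existing facility the reply points
to can be sketched roughly as follows.  This is a minimal illustration, not
part of the original mail; the unit number, file name and array extent are
invented:

```fortran
! Sketch of Fortran 2003 asynchronous I/O, the facility whose semantics
! parallel MPI non-blocking transfers.  Names and sizes are illustrative.
program async_io_sketch
  implicit none
  real, asynchronous :: buffer(1000)   ! a pending input/output storage affector
  integer :: id, ios

  open (unit=10, file='data.bin', form='unformatted', &
        access='stream', asynchronous='yes', status='old', iostat=ios)

  ! Start a non-blocking read; analogous to MPI_Irecv.
  read (unit=10, asynchronous='yes', id=id) buffer

  ! Other work may proceed here, but BUFFER must not be referenced
  ! or defined while the transfer is pending.

  ! Complete the transfer; analogous to MPI_Wait.
  wait (unit=10, id=id)

  close (10)
end program async_io_sketch
```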

>     2) Most people seem to agree that a data attribute is essential, and
> a purely procedure-based solution will not work.
> Do you agree with this and, if not, why not?

We needed one for I/O, but we ought not to need one for interaction with
procedures having defective interfaces.

>     3) It would be very easy to extend the wording of the ASYNCHRONOUS
> attribute etc. to allow for asynchronous I/O started and completed by a
> companion processor (including MPI, here).  It would also be very easy
> to add a new one (say, ASYNCH_EXTERNAL), with the same properties, but
> applying only to companion processor I/O.
> Do you think that adding a new attribute is desirable and, if so, why?

No, because the existing ASYNCHRONOUS attribute already has the desired
functionality.

>     4) For Fortran asynchronous I/O, the processor obviously knows when
> an input/output storage affector becomes and ceases to be pending.  From
> the point of view of program correctness, this information is not
> needed, but it might be useful for implementors.  I proposed such a
> mechanism, but it seemed to confuse some people.
> Do you believe that there needs to be a standardised mechanism for the
> companion processor to inform the Fortran processor of such state
> changes and, in either case, why or why not?

No.  I/O should be done with I/O statements.  Truckling to a defective
interface is silly.  I've already proposed two mechanisms to do
inter-program transport using I/O statements, one of which can be
connected to MPI or PVM or whatever your favorite transport protocol is.
See 08-204 and 08-205.
