(j3.2006) (SC22WG5.3918) [MPI3 Fortran] (SC22WG5.3909) [ukfortran] [MPI3 Fortran] MPI non-blocking transfers

Van Snyder
Fri Jan 23 16:46:43 EST 2009


On Fri, 2009-01-23 at 12:44 -0800, Bill Long wrote:

> The problem is solved for asynchronous I/O in Fortran by exactly the
> same means.  The solution relies on C1239 and C1240 [299:12-19] in
> 09-007.  If you violate either of these constraints, your program will
> be rejected by the compiler and you have to fix it....  If we plan to
> base a solution on asynchronous (or volatile) for MPI, then the
> solution is exactly the same as for asynchronous I/O in Fortran.

Since solutions exist for asynchronous Fortran I/O, which is
indistinguishable from MPI hiding in an external procedure, why are we
having this conversation?

Just write an extra layer to get at MPI_wait, having an assumed-shape
buffer dummy argument with the ASYNCHRONOUS attribute and without the
CONTIGUOUS attribute.  Since the buffer dummy won't be referenced, make
the procedure external, or better yet, write it in C, so a clever
optimizer won't inline it, notice the buffer isn't used, and then move
accesses to the buffer across the remaining reference to MPI_wait, which
is the present problem.
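A minimal sketch of such a wrapper layer, assuming the Fortran "use mpi"
bindings; the names my_mpi_wait and buf are illustrative, not part of MPI:

```fortran
! Hypothetical wrapper around MPI_wait.  The assumed-shape,
! ASYNCHRONOUS, deliberately non-CONTIGUOUS dummy forces the compiler
! to assume the call can affect buf, so accesses to buf cannot be
! moved across the call.
subroutine my_mpi_wait(request, buf)
  use mpi
  implicit none
  integer, intent(inout) :: request
  real, asynchronous, intent(inout) :: buf(:)  ! assumed shape, not CONTIGUOUS
  integer :: ierror
  call MPI_wait(request, MPI_STATUS_IGNORE, ierror)
  ! buf is intentionally never referenced here.  Compile this
  ! separately (or write it in C) so it is not inlined.
end subroutine my_mpi_wait
```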

Make this layer generic with MPI_wait.  Tell users to pass the buffer,
or something associated with it if one is in scope, as the
corresponding actual argument, and tell them why.  If no buffer with
which the "real"
buffer might be associated is accessible in the scope of the call to
MPI_wait, the processor can't possibly affect the buffer by moving code
that accesses it across the call to MPI_wait.  This is actually *more
precise* than a Fortran WAIT statement, because processors can't move
accesses to ANY asynchronous variable across a WAIT statement.
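A sketch of such a call site, assuming a generic wrapper built as
described above; MPI_wait_buf and all other names except MPI_Irecv and
MPI_wait are hypothetical:

```fortran
! recv_buf is the buffer of a pending nonblocking receive.
call MPI_Irecv(recv_buf, n, MPI_REAL, src, tag, comm, request, ierror)
! ... computation that does not touch recv_buf ...
call MPI_wait_buf(request, recv_buf)   ! buffer is in scope: pass it
! Because recv_buf is the ASYNCHRONOUS actual argument here, the
! processor cannot move accesses to recv_buf across this call; other
! ASYNCHRONOUS variables are unconstrained, which is the sense in
! which this is more precise than a WAIT statement.
```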

Since we already have a solution, I don't see the need for another one,
especially not a means to say "this procedure effectively contains a
WAIT statement, so don't move accesses to anything with the ASYNCHRONOUS
attribute across references to it."  If you want to accomplish that
effect, just put a WAIT statement, maybe for a nonexistent unit (see
09-007:233:12-14), before and after the call to MPI_wait.  Maybe add
Note 9.52a "An optimizer cannot move accesses to any variables with the
ASYNCHRONOUS attribute across a WAIT statement, even one for a
nonexistent unit."
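The WAIT-statement fence would look like this; unit 99 is assumed not
to be connected (a WAIT for a nonexistent unit is permitted and has no
I/O effect, per 09-007:233:12-14):

```fortran
wait (99)                              ! fence: nonexistent unit
call MPI_wait(request, status, ierror)
wait (99)                              ! fence: nonexistent unit
! No access to ANY variable with the ASYNCHRONOUS attribute may be
! moved across either WAIT statement.
```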

Besides, Resolution T7 says "don't invent new stuff."




