(j3.2006) [MPI3 Fortran] Feedback from Fortran J3 meeting

dick.hendrickson at att.net dick.hendrickson
Tue May 27 17:43:51 EDT 2008


 Wouldn't it be better to specify NOCOPYIN/NOCOPYOUT if that's
what you need?  Relying on a side effect of a declaration is bad form
at best.  Among other things, it requires the user to specify VOLATILE
in any other routines that use the variables while they are being MPI'd.
True, lots of things require consistent declarations between caller and
callee.  But VOLATILE is such a weird, hardware-dependent duck
that someone maintaining the callee next year might delete it, either as
an optimization aid or because he doesn't understand it.

It's better human engineering to make the declaration more or less match
the intent and need.
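
To make the maintenance burden concrete, here is a minimal sketch (the
routine and array names are purely illustrative) of the kind of helper
that ends up needing a matching VOLATILE under the current approach:

   ! Hypothetical helper that looks at the buffer while an MPI_Irecv on it
   ! may still be in flight.  The VOLATILE has to stay consistent with the
   ! caller's declaration, and nothing in the code says why it is there.
   subroutine touch_buffer(buf, nonzero)
      real, volatile :: buf(:)
      logical, intent(out) :: nonzero
      nonzero = any(buf /= 0.0)
   end subroutine touch_buffer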

Dick Hendrickson

  -------------- Original message from Bill Long <longb at cray.com>: --------------


> Just a reminder that specifying VOLATILE for the actual and 
> corresponding dummy arguments is only to prevent copy-in/copy-out of the 
> argument to the MPI routine.  Preventing copy-in/copy-out is the goal 
> here. The other, normal aspects of VOLATILE are not relevant for this 
> example.  ASYNCHRONOUS has the same effect, but has the defect that a WAIT 
> statement could change its effect.
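> 
> As a bare-bones illustration (the procedure name and argument list here
> are placeholders, not the real MPI binding), the dummy-argument side
> would look roughly like:
> 
>    interface
>       subroutine irecv_sketch(buf, request, ierror)
>          real, volatile :: buf(*)         ! with VOLATILE on the dummy and the
>                                           ! actual, no copy-in/copy-out of buf
>          ! real, asynchronous :: buf(*)   ! ASYNCHRONOUS would serve the same purpose
>          integer :: request, ierror
>       end subroutine irecv_sketch
>    end interface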
> 
> Cheers,
> Bill
> 
> 
> Craig Rasmussen wrote:
> > This may not be as bad as it seems, since VOLATILE can be limited to a
> > BLOCK construct (sorry, it's just BLOCK, not BEGIN BLOCK as I have
> > below).  Since the caller won't reference the memory between MPI_Irecv
> > and MPI_Wait, this shouldn't be a problem for VOLATILE on the actual.
> > Plus, since the implementation will likely be in C (which can ignore
> > the VOLATILE dummy), there shouldn't be a performance hit on the
> > callee either.
> >
> > I was skeptical about this as well, but Aleks reminded me that the  
> > effects of VOLATILE could be limited to a BLOCK construct.
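> >
> > For the record, the corrected form of the example below would read
> > roughly (argument lists abbreviated as in the original):
> >
> >    real, dimension(100) :: buf
> >
> >    BLOCK
> >       VOLATILE :: buf
> >       err = MPI_Irecv(buf, ..., req)
> >       .
> >       .
> >       err = MPI_Wait(req, ...)
> >    END BLOCK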
> >
> > Cheers,
> > Craig
> >
> >
> > On May 27, 2008, at 9:55 AM, Dan Nagle wrote:
> >
> >   
> >> Hi,
> >>
> >> I think we blundered with 08-185r1.
> >>
> >> Specifically, VOLATILE is exactly the wrong attribute.
> >> VOLATILE means "always go to memory" for this variable.
> >> The MPI specification says "don't touch the buf memory" between
> >> the MPI async send/recv and the MPI wait.  (The other actual args
> >> may be treated ordinarily.)
> >>
> >> What Fortran should say is "between these two subroutine references,
> >> don't read/write this actual argument" and I don't think we currently
> >> have a way to say that.  :-(
> >>
> >> It's not so much that VOLATILE is too big a hammer,
> >> it's that a hammer is exactly the wrong tool for this task.
> >>
> >> So I think we talked ourselves into a blunder with this one.
> >> Never design things quickly.  :-)
> >>
> >> On May 27, 2008, at 11:43 AM, Craig Rasmussen wrote:
> >>
> >>     
> >>> Two weeks ago I attended the Fortran J3 standards meeting where I
> >>> discussed with them the issues surrounding new Fortran MPI
> >>> bindings.  They were very receptive to our needs and instructed me
> >>> to write a J3 paper in response (attached).  In summary, J3 will try
> >>> to get changes made in the Fortran standard so that we won't need to
> >>> use CLOC(buffer) for a void* buffer argument.  J3 still hasn't
> >>> decided the best way to do this but the likely favorite is a new
> >>> type, TYPE(*), as an interoperable type with void*.
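> >>>
> >>> (As a sketch only, with a hypothetical routine name: a TYPE(*)
> >>> assumed-size dummy in a BIND(C) interface would play the role of a
> >>> void* parameter on the C side.)
> >>>
> >>>   interface
> >>>      subroutine buf_sink(buf, n) bind(c)
> >>>         use iso_c_binding, only: c_int
> >>>         type(*), dimension(*) :: buf    ! proposed: interoperates with void*
> >>>         integer(c_int), value :: n
> >>>      end subroutine buf_sink
> >>>   end interface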
> >>>
> >>> J3 also decided that the way to limit copyin/copyout semantics and
> >>> code motion in asynchronous MPI calls is to use the volatile
> >>> attribute on both Fortran actual and dummy arguments.  The
> >>> performance effects of volatile could be limited with the use of the
> >>> new F2008 block construct.  For example,
> >>>
> >>>
> >>>   real, dimension(100) :: buf
> >>>
> >>>   BEGIN BLOCK
> >>>       VOLATILE :: buf
> >>>       err = MPI_Irecv(buf, ..., req)
> >>>       .
> >>>       .
> >>>       err = MPI_Wait(req, ...)
> >>>   END BLOCK
> >>>
> >>> The interface for MPI_Irecv would have something like
> >>>
> >>>   TYPE(*), volatile :: buf
> >>>
> >>> Cheers,
> >>> Craig
> >>>
> >>> <08-185r1.txt>
> >>>
> >>>
> >>> _______________________________________________
> >>> mpi3-fortran mailing list
> >>> mpi3-fortran at lists.mpi-forum.org
> >>> http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-fortran
> >>>       
> >> -- 
> >> Cheers!
> >>
> >> Dan Nagle
> >>
> >>
> >>
> >>
> >> _______________________________________________
> >> J3 mailing list
> >> J3 at j3-fortran.org
> >> http://j3-fortran.org/mailman/listinfo/j3
> >>     
> >
> > _______________________________________________
> > J3 mailing list
> > J3 at j3-fortran.org
> > http://j3-fortran.org/mailman/listinfo/j3
> >   
> 
> -- 
> Bill Long                                   longb at cray.com
> Fortran Technical Support    &              voice: 651-605-9024
> Bioinformatics Software Development         fax:   651-605-9142
> Cray Inc., 1340 Mendota Heights Rd., Mendota Heights, MN, 55120
> 
>             
> 
> _______________________________________________
> J3 mailing list
> J3 at j3-fortran.org
> http://j3-fortran.org/mailman/listinfo/j3  