(j3.2006) (SC22WG5.3889) Fwd: BOUNCE sc22wg5 at open-std.org: Non-member submission from [Jeff Squyres <jsquyres at cisco.com>]

David Muxworthy
Thu Jan 22 11:49:56 EST 2009


Cc: fortran standards email list for J3 <j3 at j3-fortran.org>, WG5 <sc22wg5 at open-std.org>, Van.Snyder at jpl.nasa.gov
From: Jeff Squyres <jsquyres at cisco.com>
To: longb at cray.com, MPI-3 Fortran working group <mpi3-fortran at lists.mpi-forum.org>
Subject: Re: [MPI3 Fortran] (j3.2006) (SC22WG5.3886) MPI non-blocking transfers
Date: Thu, 22 Jan 2009 11:14:13 -0500

On Jan 22, 2009, at 11:11 AM, Bill Long wrote:

>>> One simple step toward solving the problem is to write an extra
>>> interface layer that includes the argument, which then calls the
>>> real MPI wait routine, not passing that argument.  Then declaring
>>> the buffer and the dummy argument of the wait routine interface
>>> layer to be ASYNCHRONOUS ought to work, according to the rules we
>>> already have in place for Fortran asynchronous I/O.
>
> The basic problem with adding a buffer argument to a variant of the
> MPI_wait routines is that the buffer variable may not be accessible
> in the scoping unit of the call.  This seems like a fatal flaw with
> this approach.
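
In outline, the interface layer suggested above might look something
like this (a minimal sketch, assuming Fortran 2003's ASYNCHRONOUS
attribute and the mpi module; the wrapper and variable names here are
hypothetical, not part of the proposal):

   subroutine wait_with_buffer(request, buffer, status, ierror)
     use mpi
     integer, intent(inout) :: request
     real, asynchronous     :: buffer(*)  ! extra dummy, never referenced
     integer, intent(out)   :: status(MPI_STATUS_SIZE)
     integer, intent(out)   :: ierror
     ! Call the real wait routine, not passing the buffer argument.
     call MPI_Wait(request, status, ierror)
   end subroutine wait_with_buffer

   ! Caller side (also hypothetical): the actual buffer carries the
   ! ASYNCHRONOUS attribute as well, and is passed alongside the request.
   real, asynchronous :: buf(1000)
   integer :: request, status(MPI_STATUS_SIZE), ierror
   call MPI_Irecv(buf, 1000, MPI_REAL, 0, 99, MPI_COMM_WORLD, request, ierror)
   call wait_with_buffer(request, buf, status, ierror)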


The inaccessible-buffer problem also affects the array variants of
MPI_TEST and MPI_WAIT (MPI_TESTALL, MPI_WAITALL, and so on, which take
variable-length arrays of requests).
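
For example (a hypothetical fragment, assuming "use mpi" is in scope
and that the requests array was filled by earlier nonblocking calls):

   integer, parameter :: nreq = 4
   integer :: requests(nreq), statuses(MPI_STATUS_SIZE, nreq), ierror
   ! Each request may be tied to a different buffer, possibly declared
   ! in a different scoping unit, so no single extra dummy argument on
   ! a wrapper could name them all.
   call MPI_Waitall(nreq, requests, statuses, ierror)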

Also remember that MPI_TEST may or may not complete the request; just
calling MPI_TEST (or any of its array variants) does not guarantee
that the buffer is "owned" by the application again.
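
In other words, the buffer is safe to touch only after the completion
flag comes back true (again a hypothetical fragment, assuming "use mpi"
is in scope and request came from an earlier nonblocking call):

   logical :: flag
   integer :: status(MPI_STATUS_SIZE), ierror
   call MPI_Test(request, flag, status, ierror)
   if (flag) then
      ! Completed: the application owns the buffer again.
   else
      ! Not yet complete: the buffer is still off limits.
   end if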

-- 
Jeff Squyres
Cisco Systems



