(j3.2006) Preparing for the Tokyo meeting
Wed Nov 5 16:58:01 EST 2008
On Wed, 2008-11-05 at 12:15 -0800, John Wallin wrote:
> Here is a simple example of an MPI coding horror. I have to pass a
> Fortran derived type between nodes.
> Here is the code I need to use to even make this exchange possible.
> Please note that a lot of code has been deleted so I don't overwhelm
> everyone's email.[...]
> You might wonder what the 37 in sphblock_counts is. It is the number
> of double precision numbers in a row within this particular data
> structure.
> The important thing to note is that every time a grad student adds a
> single element to the data structure, you have to alter the block
> counts and sizes by hand. This leads to huge problems debugging and
> maintaining the code if the base structures are modified. (And this
> code is the best way I have found for doing it.)
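The hazard Wallin describes is not unique to MPI: any hand-maintained byte layout breaks the same way when a field is added. A minimal Python sketch of the same trap (the names `PACK_FMT` and `pack_particle` are illustrative, not from his code; the `"7d"` format string plays the role of the 37 in sphblock_counts):

```python
import struct

# Hand-maintained layout: 3 position doubles + 3 velocity doubles
# + 1 mass double = 7 doubles per record.
PACK_FMT = "7d"

def pack_particle(pos, vel, mass):
    # If someone adds a field (say, temperature), this call and
    # PACK_FMT must both be updated by hand -- forget one and the
    # receiver silently decodes garbage.
    return struct.pack(PACK_FMT, *pos, *vel, mass)

def unpack_particle(buf):
    vals = struct.unpack(PACK_FMT, buf)
    return vals[0:3], vals[3:6], vals[6]

buf = pack_particle((1.0, 2.0, 3.0), (0.1, 0.2, 0.3), 5.0)
pos, vel, mass = unpack_particle(buf)
```

The count and the field list live in two places, so every structural change must be made twice, which is exactly the maintenance burden described above.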
I remarked on this problem before m185, and proposed a trivial addition
to the OPEN statement to allow message passing using I/O statements,
which already know how to do DTIO and asynchronous, in 08-204. Subgroup
didn't even consider it.
If coarrays are kicked off the train in Tokyo, we really should go back
and look at the directions proposed in 08-204 and 08-205. At least for
the basic functionality provided by MPI, Fortran I/O applied to message
passing should be far clearer.
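For a rough analogy from another language (this is not the 08-204 syntax, just the flavor of clarity being argued for): Python lets you wrap a communication endpoint in a file object, after which message passing reads like ordinary formatted I/O rather than explicit buffer packing:

```python
import socket

# A connected pair of sockets standing in for two "nodes".
a, b = socket.socketpair()

# Wrap each endpoint in a file object: the runtime handles
# buffering and record boundaries, as Fortran I/O would.
writer = a.makefile("w")
reader = b.makefile("r")

writer.write("1.5 2.5 3.5\n")    # "send" a record
writer.flush()
x, y, z = map(float, reader.readline().split())   # "receive" it

writer.close(); reader.close(); a.close(); b.close()
```

No block counts, no displacement tables: the formatted read/write calls carry the layout information, which is the point of letting I/O statements (with DTIO) drive message passing.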
Within a single SMP, say a dual quad-core PC, one can already accomplish
what's proposed in 08-204 with a pipe, but I haven't met a system yet
where pipes work across NFS. 08-204 provides syntax for users to hook
into whatever vendors provide that goes beyond NFS. 08-205 provides a
way for users to roll their own.
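The single-machine pipe approach mentioned above can be sketched in a few lines (Python rather than Fortran, and an illustration of the idea only, not the 08-204 mechanism): the operating system already buffers the channel, so sending and receiving are plain writes and reads.

```python
import os

# A pipe is a byte channel the OS knows how to buffer; within one
# machine it gives message passing with ordinary I/O calls.
r, w = os.pipe()

os.write(w, b"3.14 2.72\n")    # "send" a record
os.close(w)

with os.fdopen(r) as f:        # "receive" it with formatted input
    x, y = map(float, f.readline().split())
```

This is exactly what stops working across NFS-connected clusters, which is where vendor-provided transports, and hence the hooks proposed in 08-204, come in.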
Van Snyder | What fraction of Americans believe
Van.Snyder at jpl.nasa.gov | Wrestling is real and NASA is fake?
Any alleged opinions are my own and have not been approved or
disapproved by JPL, CalTech, NASA, the President, or anybody else.