(j3.2006) (SC22WG5.4042) [ukfortran] LOCK/UNLOCK question

Bill Long longb
Thu Jun 25 11:37:42 EDT 2009



N.M. Maclaren wrote:

> 
> S-3901-60 makes no reference to the use of VOLATILE for atomicity, which
> is precisely why I included it there.  Furthermore, I specifically
> looked for VOLATILE coarray support on your Web pages when writing N1754
> (in October 2008), and the latest language reference manual that I could
> find did not mention it.
> 
> What version of the manual WAS it first documented in?

Our current documentation policy is that the manuals mention only 
differences between our implementation and the standard. Since release
6.0 in 2007, we've used Fortran 2003 as the base standard for the
document, and because VOLATILE is part of that standard, it is not
mentioned in the documentation. Ironically, once Fortran 2008 is
published and we switch to it as the base, we will need to start
documenting, as an extension, that off-image references to VOLATILE
coarrays are exempt from the segment ordering rules.
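
For concreteness, here is a minimal sketch (my own, not from any Cray
manual) of the sort of code that extension permits: one image spins on
a VOLATILE coarray flag that another image sets, with no image control
statement separating the store from the reads. Standard Fortran 2008
would require a segment boundary here; under the extension the
off-image store eventually becomes visible to the spinning image.

   program volatile_flag
      implicit none
      integer, volatile :: flag[*]   ! VOLATILE coarray: the vendor extension
      flag = 0
      sync all                       ! every image starts with flag == 0
      if (num_images() >= 2) then
         if (this_image() == 2) then
            flag[1] = 1              ! off-image store; no segment boundary follows
         else if (this_image() == 1) then
            do while (flag == 0)     ! spin; VOLATILE forces a fresh read each pass
            end do
         end if
      end if
      sync all
   end program volatile_flag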



> 
> A more directly relevant issue is the number of parallel programming
> paradigms that have failed to take off because users couldn't debug
> their code (especially up to the level of portability).  To a great
> extent, that applies to all of ScaLAPACK, HPF, OpenMP and POSIX threads,
> plus a multitude of less widespread paradigms; HPF had performance
> problems as well, of course.  

I'd argue that in the case of ScaLAPACK and HPF, the difficulty of 
understanding how to write the program in the first place was a bigger 
cause of failure than debugging. By comparison, I think OpenMP has been 
widely successful for local (i.e., within-image) shared-memory
parallelism. The advent of multi-core chips has increased its usage.
While there are some performance issues and serious scaling
limitations, I think the OpenMP crowd can feel pretty good about its
acceptance. It's in enough benchmarks that vendors are essentially 
forced to support it.
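
For readers less familiar with it, a toy sketch (my example, not part
of the original exchange) of the within-image, shared-memory style
OpenMP serves: a single directive splits a local loop across the
threads of one image or node.

   program omp_local
      implicit none
      integer :: i
      real :: y(100000)
      !$omp parallel do
      do i = 1, size(y)
         y(i) = sqrt(real(i))        ! iterations divided among local threads
      end do
      !$omp end parallel do
      print *, 'y(1) =', y(1), '  y(n) =', y(size(y))
   end program omp_local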



> It has even affected MPI non-blocking
> transfers; I have no feel for what has prevented MPI one-sided transfers
> from taking off.
> 

I suspect it is partly inertia. Some who need one-sided transfers
already have access to coarrays or UPC, which are a lot easier to code
than MPI. Others have been using SHMEM for years and see no point in
converting.
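
To illustrate the coding gap (again my sketch, not Bill's): a coarray
one-sided get is a single assignment, while the MPI-2 equivalent of
that assignment needs a window, an access epoch, and an explicit
MPI_Get call.

   program onesided_get
      implicit none
      real :: a(4)[*], x(4)
      a = real(this_image())         ! each image fills its own copy of a
      sync all                       ! make the remote data ready
      if (this_image() == 1 .and. num_images() >= 2) then
         x(:) = a(:)[2]              ! one-sided get from image 2: one assignment
         print *, 'image 1 fetched', x
      end if
      ! The MPI-2 route to the same transfer is roughly: MPI_Win_create
      ! to expose a, MPI_Win_fence to open an epoch, MPI_Get to move the
      ! data, and a second MPI_Win_fence to close the epoch.
      sync all
   end program onesided_get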

Cheers,
Bill



-- 
Bill Long                                   longb at cray.com
Fortran Technical Support    &              voice: 651-605-9024
Bioinformatics Software Development         fax:   651-605-9142
Cray Inc., 1340 Mendota Heights Rd., Mendota Heights, MN, 55120




