(j3.2006) (SC22WG5.5561) LCPC conference in Raleigh
Bill Long
longb
Tue Sep 15 09:37:16 EDT 2015
Hi Van,
Thanks for the detailed report on this meeting. Interesting.
On Sep 14, 2015, at 9:30 PM, Van Snyder <Van.Snyder at jpl.nasa.gov> wrote:
> I attended the 28th Languages and Compilers for Parallel Computing (LCPC
> 2015) conference http://www.csc2.ncsu.edu/workshops/lcpc2015/ in Raleigh
> last week. This wasn't funded by JPL; a friend who is the chair of the
> Computer Science department at NCSU paid my way.
>
> I stood beside Damian's poster about OpenCoarrays, and gave a 2-minute
> presentation about it.
>
> There were several interesting talks. Several times I mentioned that
> what presenters were describing as rather heroic efforts are automatic
> in coarray Fortran.
>
> Several people asked "Why coarrays?" I pointed to one line of code,
> something like X = Y[7] on Damian's poster, and asked them to compare it
> to the page of MPI code on the next poster.
Economy and clarity of syntax are among the big advantages of coarrays over MPI (the assembly language of parallel programming).
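As a hedged illustration (the poster's actual code isn't reproduced here; the names below are invented), the one-sided get really is a single assignment in Fortran 2008:

```fortran
program coarray_get
  implicit none
  real :: x, y[*]            ! y is a coarray: one copy per image
  y = real(this_image())
  sync all                   ! ensure every image has defined its y
  if (num_images() >= 7 .and. this_image() == 1) then
     x = y[7]                ! one-sided get of image 7's y: no matching
     print *, 'x =', x       ! send/receive pair, no tags, no buffers
  end if
end program coarray_get
```

The MPI analogue requires a matching send/receive pair (or MPI_Get plus window setup) coded on both sides of the transfer.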
>
> That next poster was presented by Hadia Ahmed <hadia at cis.uab.edu>,
> concerning work she had done with Anthony Skjellum and Peter Pirkelbauer
> at the University of Alabama at Birmingham. They used ROSE to analyze
> legacy C+MPI code having blocking transfers, and transformed it to use
> non-blocking transfers. Their analysis might be useful in Fortran
> compilers to decide when coarray transfers can be non-blocking, and to
Compilers already schedule transfers as non-blocking if they cannot be done with direct addressing. Informative messages about improving code sequences are always an option. ROSE has no information beyond what is already available in the internal data structures a real compiler builds during compilation.
> report what users have done wrong that prevents that. This project is
> called PETAL. There should be something in the LCPC 2015 proceedings
> when they appear online. There's a poster about the parent project,
> called iProgress, from a different (IPDPS) meeting last year, at
> http://iprogress.cis.uab.edu/media/2014/05/ipdps_poster_final.pdf.
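As a sketch of the overlap opportunity such analysis looks for (do_local_work and consume are hypothetical procedures, and whether the get is actually overlapped is implementation-dependent):

```fortran
real :: a(1000)[*], buf(1000)
! The get below has no dependence on the local work that follows, so an
! implementation may issue it non-blocking, overlap communication with
! computation, and complete it before buf is first read in consume.
buf(:) = a(:)[2]
call do_local_work()
call consume(buf)
```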
>
> I don't know ROSE http://rosecompiler.org well, but another attendee
> mentioned that it can parse C, C++, UPC, Fortran 2003, Java, Python and
> PHP, and that it represents them using a common abstract syntax tree.
> It can also un-parse (pretty print). If ROSE were extended to
> understand coarrays (I sent them a message to urge them to do it), the
ROSE, MPI, and OpenMP all need to be extended to understand Fortran 2008 in general. There is always a lag.
> work that Hadia did might be useful to transform Fortran+MPI to coarray
Converting existing MPI codes to use coarrays is an interesting prospect, although in practice the coarray version might use a different communication strategy.
> Fortran. If ROSE can unparse to a different language from its input, it
> could conceivably convert C+MPI code to coarray Fortran.
>
> Many presentations described unstructured problems, such as arbitrary
> mesh representations in Earth science or fluid dynamics calculations.
> Paul Kelly <p.kelly at imperial.ac.uk> from Imperial College London
> described work he had done in this area. He developed what he calls an
> inspector-executor framework, which looks hard at a problem to develop
> efficient schedules to solve it. This is effective if one's code
> examines the same mesh many times. His tool is embedded in Python. It
> generates and compiles C code at runtime. The performance frequently
> exceeds hand-coded C. Other speakers described graph problems (which
> are isomorphic to sparse matrix problems). These sorts of problems
> don't lend themselves well to parallelization using coarrays, array
> operations, FORALL statements, or DO CONCURRENT constructs because the
> parallelization opportunities arise from the data, not the program
> structure. It would be useful to contemplate more parallelism
> constructs in Fortran to address them. Many speakers remarked that
> multigrain parallelism gives greater speed-up. Some speakers mentioned
> fork-join constructs. Others mentioned tasks and threads (I don't know
> what distinctions they drew between these). Somebody mentioned futures.
My experience is biased by SLURM, but I usually assume task -> image, and thread -> an SMP thread within an image supporting local parallelism (OpenMP, DO CONCURRENT, asynchronous I/O, ?).
I think futures are part of X10 from IBM.
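To make the contrast concrete (the CSR arrays row_ptr, col_idx, val below are illustrative, not from any talk), compare a loop whose independence is visible syntactically with one whose parallelism is hidden in the data:

```fortran
! Structured parallelism: DO CONCURRENT can express this directly.
do concurrent (i = 1:n)
   v(i) = 2.0 * u(i)
end do

! Sparse lower-triangular solve: iteration i depends on x(col_idx(k))
! for earlier rows, so which iterations may run in parallel is a
! property of the sparsity pattern (the data), not the loop structure.
do i = 1, n
   x(i) = b(i)
   do k = row_ptr(i), row_ptr(i+1) - 1    ! strictly-lower entries of row i
      x(i) = x(i) - val(k) * x(col_idx(k))
   end do
   x(i) = x(i) / diag(i)
end do
```

An inspector-executor system discovers the parallel "levels" of the second loop by examining row_ptr and col_idx at run time, which is exactly what a compile-time construct cannot do.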
> Padma Raghavan <padma at psu.edu> from Penn State University mentioned maps
> (I don't know what she had in mind, but she did invite me to contact her
> offline; maybe she said map reduce).
>
> Dhruva Chakrabarti <dhruva.chakrabarti at hpe.com> mentioned that effective
> use of persistent store (he mentioned at least three technologies, but I
> only remember memristors and MRAM) provides challenges for programming
> languages. It might be fruitful to think about the relationship between
> persistent store and Fortran. One tiny step might be to support what
> Multics called "associated memory" and POSIX calls a "memory-mapped
> file" (which is already supported by Boost, Java, D, Ruby, Python,
> Perl, .NET, PHP, R, J, and probably others).
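As a hedged sketch of that tiny step (the constants assume 64-bit Linux values, error checking is omitted, and binding the variadic open() this way is a common but not strictly portable idiom), Fortran can already reach a POSIX memory-mapped file through ISO_C_BINDING:

```fortran
program mmap_sketch
  use iso_c_binding
  implicit none
  interface
     integer(c_int) function c_open(path, flags) bind(c, name="open")
       import c_char, c_int
       character(kind=c_char), intent(in) :: path(*)
       integer(c_int), value :: flags
     end function
     type(c_ptr) function c_mmap(addr, length, prot, flags, fd, offset) &
          bind(c, name="mmap")
       import c_ptr, c_size_t, c_int, c_long
       type(c_ptr), value :: addr
       integer(c_size_t), value :: length
       integer(c_int), value :: prot, flags, fd
       integer(c_long), value :: offset
     end function
  end interface
  integer(c_int), parameter :: O_RDONLY = 0, PROT_READ = 1, MAP_SHARED = 1
  integer(c_int) :: fd
  type(c_ptr) :: p
  real(c_double), pointer :: a(:)

  fd = c_open('data.bin' // c_null_char, O_RDONLY)
  p  = c_mmap(c_null_ptr, 8000_c_size_t, PROT_READ, MAP_SHARED, fd, 0_c_long)
  call c_f_pointer(p, a, [1000])   ! the file's bytes appear as a(1:1000)
  print *, a(1)
end program mmap_sketch
```

Native language support would presumably look rather different; this only shows the mechanism is within reach today.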
>
> One of the conference organizers, Chen Ding <cding at cs.rochester.edu> at
> the University of Rochester http://www.cs.rochester.edu/~cding/, hoped I
> would come to the meeting next year (in Rochester). I see no prospect
> to get funding to attend further meetings. It would be useful if
> somebody represented Fortran at these meetings (and also at PPoPP
> meetings).
You seem to have managed without JPL support this time. OTOH, Rochester is drivable from New Hampshire, in case one of our gang from there wanted to go.
Cheers,
Bill
>
> Van
>
>
Bill Long, longb at cray.com
Fortran Technical Support & Bioinformatics Software Development
voice: 651-605-9024, fax: 651-605-9142
Cray Inc./ Cray Plaza, Suite 210/ 380 Jackson St./ St. Paul, MN 55101