(j3.2006) (SC22WG5.3641) [ukfortran] [Fwd: Preparing for the Tokyo meeting]

Keith Bierman khbkhb at gmail.com
Thu Nov 6 15:50:43 EST 2008


On Nov 6, 2008, at 1:26 PM, N.M. Maclaren wrote:

>
> until 2006.

Things have evolved a lot faster in OpenSolaris since then.


> Your statement is true about "give this process N cores
> or don't run it", but is NOT about "restrict this program to x% of the
> cache or TLB entries"

True; I was focusing on the N cores, not the sub-core facilities. Even
where a chip might have such a thing, it's hardly ever exposed in an
API that application programmers should count on.
> ...
> The difference between theory and practice is less in theory than it is
> in practice.

True.

> ...Linux is no different.  The solution is usually the one mentioned above
> (i.e. give the gang-scheduled software enough cores that you don't cause
> conflicts with other applications).

And keep the jobs that need gang scheduling on a "machine" (virtual  
perhaps) of their own with a batch scheduler and no interactive jobs.

>
>> No doubt many institutions don't segregate jobstreams in a sensible
>> fashion. And doubtless there are reasons for their behaviors (good,
>> bad, non-technical, etc.) but precisely how does that translate into
>> what should be part of a Standard?
>
> Because the standard should not assume such segregation!

I disagree. As far as I can tell, nearly all programming language
standards make the implicit assumption that they own the "machine".
It is the OS's job to make that illusion real enough (except where
some "volatile" asynch service is taking place that the application
wishes to consume ;>).
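
That escape hatch even has language syntax now: F2003's VOLATILE
attribute is the standard way for a program to admit that something
outside its "machine" is updating its data. A minimal sketch (the
done flag, and the external agent poking it, are hypothetical):

    program poll_device
      implicit none
      logical, volatile :: done   ! hypothetical flag; some external
                                  ! agent (device driver, signal
                                  ! handler, ...) sets it outside the
                                  ! program's control
      done = .false.
      ! ... start the asynchronous service here ...
      do while (.not. done)       ! VOLATILE forces a re-read each time
        continue                  ! do other useful work here
      end do
    end program poll_device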

>
> I couldn't care less if unsegregated codes are likely to run like drains;
> that is what any experienced user will expect.  But the standard shouldn't
> rely on particular system configurations to ensure that conforming
> programs complete in the absence of resource limitations.  In particular,
> it should NOT assume them in its examples without saying so explicitly!

We also assume that disks work (data written to disk eventually gets
there, and if it doesn't, it's not the Standard's concern), etc. While
I sympathize with your pain as a sysadmin, I don't think that the
Standard can or should delve into such matters. There may well be
machine configurations which cannot safely run programs that use this
feature. Sadly, such machines may be so cheap that people will use
them despite the indeterminate results. But that's a question of
building (or buying ;>) good systems.
> ...
> Because many coarray programs won't work if that is done, and reasonable
> users want to know if they are risking coarray-induced failure by using
> such a system.

They have to match their code to the system (or the system to their  
code), one way or another.


>
> Oh, yes.  I don't disagree that many of those count as vector processors,
> nor that some future SSE replacement may also do so.  But, today, it's got
> very different properties.

But pretty much the same compiler innards; just with very short
vector registers ;> As far as I know, no compiler treats them the
way it does regular registers as part of instruction scheduling and
such. Bill and Jim have commented for Cray and IBM. As I am currently
unaffiliated, I won't claim to be speaking for any vendor
implementation ;>
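
To make the "same innards" point concrete, take the canonical loop
below (a sketch; the 64- vs. 2-element widths are just illustrative).
A classic vector compiler strip-mines it into long vector registers;
an SSE-targeting compiler does exactly the same thing with very short
ones:

    subroutine daxpy(n, a, x, y)
      ! A Cray-style compiler strip-mines this loop into (say)
      ! 64-element vector registers; an SSE code generator does the
      ! same with 2-element ones.  Same vectorizer, shorter vectors.
      implicit none
      integer, intent(in) :: n
      real(kind(1.0d0)), intent(in)    :: a, x(n)
      real(kind(1.0d0)), intent(inout) :: y(n)
      integer :: i
      do i = 1, n
        y(i) = a*x(i) + y(i)
      end do
    end subroutine daxpy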

Coarrays should be a lot easier to deal with (code for, port, etc.)
than MPI. MPI has proved useful across a variety of implementations
despite having a weak foundation. Coarrays are a step forward. They
probably aren't the last step, but it will be years before we get
"there", and we probably won't if we don't make stepwise improvements ;>

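For flavor, a minimal sketch of the sort of thing I mean (standard
coarray syntax; the little sum itself is purely illustrative). Each
image contributes a value and image 1 reads them all; the
init/communicate/finalize boilerplate MPI would need for the same job
collapses into language syntax:

    program coarray_sum
      implicit none
      real :: x[*]              ! one copy of x on every image
      real :: total
      integer :: i
      x = real(this_image())    ! each image contributes its own value
      sync all                  ! make every image's x visible
      if (this_image() == 1) then
        total = 0.0
        do i = 1, num_images()
          total = total + x[i]  ! remote read from image i
        end do
        print *, 'total =', total
      end if
    end program coarray_sum
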

-- 
Keith H. Bierman   khbkhb at gmail.com      | AIM kbiermank
5430 Nassau Circle East                  |
Cherry Hills Village, CO 80113           | 303-997-2749
<speaking for myself*> Copyright 2008






