(j3.2006) (SC22WG5.3639) [ukfortran] [Fwd: Preparing for the Tokyo meeting]

N.M. Maclaren nmm1
Thu Nov 6 15:26:28 EST 2008


On Nov 6 2008, Keith Bierman wrote:
>> Firstly, most systems have no way to prevent one job from hogging
>> resources like memory, cache, TLB entries, and so on.  There are
>> rarely any facilities to say "give this process N cores or don't run
>> it" or "restrict this program to x% of the cache or TLB entries".
>
>Most mainframes had very mature facilities. Some Unix operating  
>environments have reasonably good ones (google solaris resource  
>management containers).

No need to google it!  I managed a cluster of 9 SunFire F15Ks, used for
HPC, until 2006.  Your statement is true of "give this process N cores
or don't run it", but it is NOT true of "restrict this program to x% of
the cache or TLB entries".

The reason is that the hardware provides no such functionality (yes, I
know that PA-RISC did, though I don't know of anything that used it).
If two user threads share a CPU/core, you can't subdivide the cache or
TLBs on most architectures.

>> Secondly, the thread schedulers are almost always optimised for  
>> interactive (GUI) work,
>
>Tunable on the better systems.

The difference between theory and practice is less in theory than it is
in practice, so don't bet on it.  I have tried tuning the schedulers on
several systems, including IRIX, AIX and Solaris, and it was never
feasible.  Reliable sources tell me that Linux is no different.  The
solution is usually the one mentioned above (i.e. give the gang-scheduled
software enough dedicated cores that it does not conflict with other
applications).

>No doubt many institutions don't segregate jobstreams in a sensible  
>fashion. And doubtless there are reasons for their behaviors (good,  
>bad, non-technical, etc.) but precisely how does that translate into  
>what should be part of a Standard?

Because the standard should not assume such segregation!

I couldn't care less if unsegregated codes are likely to run like drains;
that is what any experienced user will expect.  But the standard shouldn't
rely on particular system configurations to ensure that conforming
programs complete in the absence of resource limitations.  In particular,
it should NOT assume them in its examples without saying so explicitly!

>It should be possible to configure the runtime so that only N images  
>can run ... and N could be 1. While that might not provide the  
>clearest programming style, what's the issue for a Standard?

Because many coarray programs won't work if that is done, and reasonable
users want to know whether they risk coarray-induced failure by using
such a system.
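
To make that concrete, here is a minimal sketch of the sort of program I
have in mind (the names are illustrative only, and it assumes at least
two images):

    program spin_wait
       use, intrinsic :: iso_fortran_env, only : atomic_int_kind
       implicit none
       integer(atomic_int_kind) :: flag[*]
       integer(atomic_int_kind) :: val

       if (num_images() < 2) error stop 'needs at least two images'
       flag = 0
       sync all                    ! all images see flag initialised

       if (this_image() == 2) then
          ! image 2 signals image 1
          call atomic_define (flag[1], 1_atomic_int_kind)
       else if (this_image() == 1) then
          ! image 1 spin-waits for the signal; this terminates only if
          ! image 2 actually gets scheduled to run
          do
             call atomic_ref (val, flag)
             if (val == 1) exit
          end do
       end if
    end program spin_wait

If the runtime multiplexes all the images onto one core and never
preempts a busy image, image 1 spins forever and image 2 never gets to
set the flag; nothing in the program itself has changed.  That is
exactly the sort of failure users need to be told about.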

>> ....    b) Because of that, SSE optimisations are handled by all  
>> compilers in
>> simi
>
>No doubt more to come in the not so distant future. Indeed, some  
>people have noticed that today's graphic processors are attached  
>processors in the rough style of the FPS array processors of  
>yesteryear. ...

Oh, yes.  I don't disagree that many of those count as vector processors,
nor that some future SSE replacement may also do so.  But, today, SSE has
very different properties.


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email:  nmm1 at cam.ac.uk
Tel.:  +44 1223 334761    Fax:  +44 1223 334679



