(j3.2006) Another application for coroutines

Craig Dedo
Sat Jan 30 19:06:59 EST 2010

Van and Everyone Else:

            Unfortunately for Van, this particular situation does not seem
to make a strong argument for coroutines.  To me, it does not seem
worthwhile to develop a complex new feature when good quality solutions are
already available, using features that are already in the language.


            It seems to me that you have already hit upon the best solution.
Use allocatable arrays and put them into a module.  If the program is not
straining the memory resources of the computer you are using, then also use
the SAVE attribute.  In order to avoid the problem of mismatched
allocations and deallocations, write one procedure to do all of the
allocations at once and a second procedure to do all of the deallocations at
once.  Properly written, they should be mirror images of each other, so it
is easy to check that every array mentioned in one is also mentioned in the
other.
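
            As a minimal sketch of what I mean (the array names and shapes
are invented for illustration; the real code would have the full three
dozen arrays):

```fortran
module scan_work_arrays
  implicit none
  save
  ! Hypothetical names standing in for the path-dependent work arrays.
  real, allocatable :: tau(:)
  real, allocatable :: radiance(:,:)
contains
  subroutine allocate_all (max_path_len, n_channels)
    integer, intent(in) :: max_path_len, n_channels
    allocate (tau(max_path_len))
    allocate (radiance(max_path_len, n_channels))
  end subroutine allocate_all

  ! Mirror image of allocate_all: one DEALLOCATE per ALLOCATE above,
  ! in the same order, so the two lists are easy to compare by eye.
  subroutine deallocate_all ()
    deallocate (tau)
    deallocate (radiance)
  end subroutine deallocate_all
end module scan_work_arrays
```

Keeping the two procedures adjacent in the same module makes an omission
from either list stand out immediately.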


            Yes, a procedure of 3784 source lines of code (SLOC) is overdue
for refactoring.  In fact, a procedure of this length is almost begging for
the presence of several (or many) serious defects.  The best research shows
that the defect rate (i.e., # defects / KSLOC) is constant below 200
executable statements per procedure.  The defect rate increases linearly
above 200 executable statements per procedure.  Thus, a 1 KSLOC procedure
has a defect rate 5 times that of a 200 SLOC procedure, a 2 KSLOC procedure
10 times, and a 3.8 KSLOC procedure 19 times.


            So for this reason alone, refactoring is very strongly
indicated.

            Then there is the issue of human understandability.  A procedure
of 3784 SLOC is highly unlikely to be understood by mere mortals.  Even a
super-genius would have a hard time understanding all of it.  The best
research shows that understanding is at its best at 200 SLOC or less.  As
length increases over 200 SLOC, understanding goes down.  You might notice a
similarity with what I wrote 2 paragraphs back.  Yes, understanding and
defect rates are inversely and strongly correlated.  As understanding goes
down, defect rates go up.


            There is a second understandability issue.  You mention an
"already-long calling sequence".  This suggests that you have a very lengthy
argument list.  If there are more than 7 arguments in the argument list,
then understanding will also go down.  This is because human short-term
memory is limited to around 7 items, i.e., the average person can keep track
of 7 items at one time.  Above that, confusion starts to set in and
understanding starts to degrade.  This is another good place to use module
data.


            Hope this helps.  Please let me know how things work out.
Please feel free to contact me at any time with any questions or concerns
that you may have.  I am looking forward to hearing from you soon.



Craig T. Dedo
17130 W. Burleigh Place
P. O. Box 423
Brookfield, WI  53008-0423
Mobile Phone:  (414) 412-5869
E-mail:  <craig at ctdedo.com>
Linked-In:  http://www.linkedin.com/in/craigdedo


-----Original Message-----
From: j3-bounces at j3-fortran.org [mailto:j3-bounces at j3-fortran.org] On Behalf
Of Van Snyder
Sent: Friday, January 29, 2010 15:17
To: j3
Subject: (j3.2006) Another application for coroutines


During the 2008 requirements phase, I advocated coroutines to ease "reverse
communication" in library codes that need access to user code.


I've recently encountered another circumstance where a coroutine would be
useful.

The program of my current responsibility integrates the radiative transfer
equation through the limb of the Earth's atmosphere, from space to our
instrument.  The instrument's antenna points at about 70 angles over a
period of about 26 seconds.  Therefore, each path through the atmosphere is
of a different length.


The procedure that does the integration has about three dozen arrays with at
least one dimension that depends upon the path length, and upon the
discretization of the path (which is determined by input data).


To avoid a trip to the allocator (and deallocator) for each pointing, we
allocate these arrays once for each scan, for the path of longest length, as
automatic arrays.
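
A sketch of the pattern (procedure and array names are invented for
illustration):

```fortran
subroutine process_scan (max_path_len, n_pointings)
  integer, intent(in) :: max_path_len, n_pointings
  ! Automatic arrays, dimensioned by the longest path in the scan.
  ! They come into existence on entry and vanish on return; no explicit
  ! ALLOCATE or DEALLOCATE is needed.
  real :: tau(max_path_len)
  real :: source_fn(max_path_len)
  integer :: i
  do i = 1, n_pointings
    ! ... integrate along pointing i, using a leading section of each array ...
  end do
end subroutine process_scan
```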


For a variety of reasons, only a few having to do with these arrays, this
subroutine has become enormous (3784 lines).  I'd like to break it up.  So
as not to add three dozen arrays to an already-long calling sequence, I'd
like to continue to access them either locally or through host association.


An obvious way to handle this is to convert them from automatic to
allocatable, either put them in a module or make them save variables, and
add the appropriate explicit allocations and deallocations.  This risks
failing to deallocate a new one someday, if I have to add one (which I've
had to do several times).


One advantage of automatic variables, compared to allocatable ones, is that
you can't forget to deallocate them (which I would have to do to get rid of
them if they're save variables).  Another potential advantage is that the
processor might make one trip to the allocator to get them all in one gulp,
then calculate their descriptors.  I don't know whether this actually would
be an advantage, or if so whether any processor exploits it.


If these variables were automatic variables in a coroutine, it could be
called to get them created and to do some preliminary calculations necessary
for all paths.  Then it would suspend instead of returning, thereby
preserving its activation record.  It would be resumed for each path, and
would finally return after the last path, at which time the arrays would be
deallocated automatically.
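
In hypothetical syntax (COROUTINE, SUSPEND, and RESUME are invented
keywords here, not standard Fortran; this only illustrates the proposed
control flow), the idea might be sketched as:

```
! Hypothetical syntax -- no such feature exists in the standard.
coroutine integrate_scan (max_path_len)
  integer, intent(in) :: max_path_len
  real :: tau(max_path_len)        ! automatic arrays persist across
  real :: source_fn(max_path_len)  ! suspensions of the activation record
  ! ... preliminary calculations common to all paths ...
  do
    suspend    ! give control back; the caller RESUMEs once per path
    ! ... integrate one path here ...
  end do
  ! an eventual RETURN frees the automatic arrays
end coroutine integrate_scan
```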



J3 mailing list

J3 at j3-fortran.org



