(j3.2006) (SC22WG5.5739) Units of measure

Van Snyder Van.Snyder
Wed Jun 29 16:58:46 EDT 2016

On Wed, 2016-06-29 at 12:35 +0000, Bill Long wrote:
> The implementation costs greatly outweigh the benefit in this case,
> and vendors are not awash with free resources for such a project. 

I'm not convinced about the relationship of cost to benefit.  Remember
that the cost is incurred once by vendors, and the benefit is enjoyed
continuously thereafter by the vendors' thousands of customers.  So the
question is "what would be the additional cost per customer?"  Has any
vendor asked its customers "If the language and processor provided
automatic units checking and conversion, would that be worth
$n/license/year?"  Indeed, has any vendor ever asked any customer any
such question about any proposed language feature?  I've never been asked.

I spend at least two weeks per year chasing problems related to units,
usually when somebody gives me a module to incorporate into my software,
for example a hydrostatic equilibrium model with arguments named
"height," "latitude," and "pressure."  Is the height geodetic or
geocentric?  Is it meters or kilometers?  Is the latitude geodetic or
geocentric?  Radians or degrees?  Is the pressure bars, millibars,
hectopascals, pascals, mmHg, PSIG, or PSIA?  I can't imagine that I'm
the only guy in the world who has this problem.

Assume the average software engineer earns $2000/wk.  With 100% overhead
for office space, electricity, management, computer, ..., that comes to
$4000/wk, so my two-plus weeks per year of units chasing costs at least
$8000/yr.

In addition to the day-to-day cost, drip-drip-drip eroding my budget for
useful work, there was that one $300 million problem that could have
been caught or corrected automatically, which we would prefer not to
repeat.

I'd gladly spend $100/yr more for my Fortran compiler license to avoid
those costs.  But I doubt it would need to be that much.

I'm also not convinced by handwaving arguments about "enormous
implementation costs."

The namespace rules are adjusted to allow a unit name and the generic
name of its conversion, confirmation, and coercion functions to be the
same.  Thereby, wherever the unit name is accessible, so are its
associated functions.  This seems like a pretty trivial thing, certainly
no more difficult than allowing a type name and a generic name to be the

A unit definition creates a few functions, and connects them to a
generic identifier of the same name.  In the case of conversion
functions, it generates the inverse function, and connects it to the
generic identifier of the related unit.  These functions are of the form
f(x) = a*x+b, so this seems to be pretty trivial.
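To illustrate how little machinery this requires, here is a minimal
sketch in Python of a linear conversion f(x) = a*x + b that generates its
own inverse, as described above.  The class and unit names are mine, not
from any TS:

```python
# Hypothetical sketch: a unit definition whose conversion has the form
# f(x) = a*x + b can mechanically generate the inverse conversion too.
class LinearConversion:
    """Conversion f(x) = a*x + b between two units."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __call__(self, x):
        return self.a * x + self.b

    def inverse(self):
        # f^-1(y) = (y - b)/a is itself linear, so the same class serves.
        return LinearConversion(1.0 / self.a, -self.b / self.a)

km_to_m = LinearConversion(1000.0)           # purely multiplicative
celsius_to_kelvin = LinearConversion(1.0, 273.15)

# The inverses come for free:
m_to_km = km_to_m.inverse()
kelvin_to_celsius = celsius_to_kelvin.inverse()
```

A processor would do the equivalent at compile time, attaching each
generated function to the generic identifier of the related unit.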

Units of real variables and named constants should be treated internally
as another kind type parameter.  We already know how to handle multiple
type parameters.  Generic resolution and units checking for argument
association then fall out automatically.  It's not described this way in
the TS because there would be no numerical value for the "units kind
type parameter," and working around that would require more explanation
than simply defining TKRU compatibility.
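As a rough illustration of TKRU compatibility, here is a sketch in
Python that carries units alongside type, kind, and rank during argument
association.  The tuple representation is mine, purely for illustration,
not the TS's model:

```python
# Hypothetical sketch: treat units like one more kind type parameter, so
# the existing type/kind/rank matching machinery also catches unit
# mismatches at argument association.
from collections import namedtuple

Decl = namedtuple("Decl", ["type", "kind", "rank", "units"])

def tkru_compatible(actual, dummy):
    # Plain component-wise equality: type, kind, rank, AND units must match.
    return actual == dummy

height_arg      = Decl("real", 8, 0, "m")
height_dummy_m  = Decl("real", 8, 0, "m")
height_dummy_km = Decl("real", 8, 0, "km")
```

Passing a height in metres to a dummy expecting kilometres then fails
resolution exactly the way a kind mismatch would.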

Expressions are a bit more work, but not tremendously more.  For
addition, subtraction, and assignment, units must match, and remain the
same.  For multiplication, division, exponentiation by an integer
constant, and exponentiation by a rational constant using the
RATIONAL_POWER intrinsic function, units are computed, but this involves
trivial methods of symbolic algebra that were mastered (in software)
more than fifty years ago, and described by Fourier about 190 years ago.
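The symbolic algebra involved really is trivial; a minimal sketch in
Python, representing a unit as a map from base-unit name to rational
exponent (the Fraction exponents also cover RATIONAL_POWER-style
exponentiation).  Function and unit names here are my own illustration:

```python
# Minimal sketch of symbolic unit algebra: a unit is a dict mapping base
# units to rational exponents; multiply adds exponents, divide subtracts,
# exponentiation scales them.
from fractions import Fraction

def u(**exps):
    return {k: Fraction(v) for k, v in exps.items()}

def multiply(a, b):
    out = dict(a)
    for base, e in b.items():
        out[base] = out.get(base, Fraction(0)) + e
        if out[base] == 0:
            del out[base]          # drop dimensionless factors
    return out

def divide(a, b):
    return multiply(a, {k: -e for k, e in b.items()})

def power(a, p):                   # p an integer or Fraction constant
    return {k: e * Fraction(p) for k, e in a.items()}

metre  = u(m=1)
second = u(s=1)
velocity = divide(metre, second)   # m / s
accel    = divide(velocity, second)  # m / s**2
```

Addition, subtraction, and assignment need only an equality test on
these dicts, which is even simpler.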

This leaves I/O.  The "front end" that processes the "little language"
that specifies the configuration for my code checks and converts units
automatically.  It wasn't terribly difficult.  We don't produce units in
formatted output, but that doesn't seem like a difficult problem either.
If worse comes to worst, the processor could internally define a type of
the same name as the unit, with real components of all kinds the
processor supports, and use that type along with defined I/O, for units
checking on input and units generation on output.  But that seems like a
larger than necessary hammer.
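For concreteness, here is a sketch in Python of the kind of units
checking on input that our front end does: read "value unit," verify the
unit is known and convertible to what the program expects, and convert.
The conversion table and function name are illustrative, not our actual
code:

```python
# Hypothetical sketch of units checking and conversion on input.
# Each known length unit maps to its factor in metres.
TO_METRES = {"m": 1.0, "km": 1000.0, "cm": 0.01}

def read_length(text, want="m"):
    """Parse 'value unit' and return the value converted to `want`."""
    value_str, unit = text.split()
    if unit not in TO_METRES or want not in TO_METRES:
        raise ValueError(f"unknown unit: {unit!r} or {want!r}")
    metres = float(value_str) * TO_METRES[unit]
    return metres / TO_METRES[want]
```

Output is the same table run in reverse; neither direction needs defined
I/O on a shadow derived type.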

Several developers of preprocessors and analysis products have told me
this is not a difficult problem.  We don't want to be tied to a
commercial product because we have other users of our software who might
not want, or be able, to afford the cost of the processor.  I suspect
there are others in the same boat.  If there were a GNU preprocessor,
that might make a tiny difference.

Have I missed something?
