(j3.2006) Fwd: (SC22WG14.11572) What is INT_MIN % -1?
Wed Oct 29 10:21:58 EDT 2008
FYI, here is the original question originating on the WG14 list.
Begin forwarded message:
> From: David Svoboda <svoboda at cert.org>
> Date: October 23, 2008 11:32:44 AM MDT
> To: "sc22wg14 at open-std.org" <sc22wg14 at open-std.org>
> Cc: David Svoboda <svoboda at cert.org>
> Subject: (SC22WG14.11572) What is INT_MIN % -1?
> Does the C standard dictate what INT_MIN % -1 should evaluate to?
> The best citation I can find is section 6.5.5, paragraph 6:
> When integers are divided, the result of the / operator is the
> algebraic quotient with any fractional part discarded. If the
> quotient a/b is representable, the expression (a/b)*b + a%b shall
> equal a.
> So a reasonable interpretation would be that INT_MIN % -1 = 0.
> But I think another reasonable interpretation exists. On
> two's-complement machines, INT_MIN / -1 is mathematically equivalent
> to INT_MAX + 1, which cannot be represented as a signed
> int. Consequently the condition in the second statement in the above
> quotation fails, and so the above quotation leaves INT_MIN % -1 in the
> realm of undefined behavior.
> On MSVC++, taking the modulo of INT_MIN by -1 yields the value 0. But
> on gcc/Linux, taking the modulo of INT_MIN by -1 produces a
> floating-point exception.
> So what is going on here? Is there another section of the C standard
> that defines INT_MIN % -1? Or is one of the above interpretations
> better than the other? Or are both reasonable, and INT_MIN % -1
> really is undefined for two's-complement machines?
> Put another way, is MSVC++ non-compliant with the standard in this
> respect? Is GCC? Or are they both standards-compliant here?
> David Svoboda <svoboda at cert.org>
> Software Security Engineer
> CERT Coordination Center
> (412) 268-3965