2007-06-05 14:43:58

by Richard Purdie

Subject: Problems (a bug?) with UINT_MAX from kernel.h

The kernel uses UINT_MAX defined from kernel.h in a variety of places.

While looking at the behaviour of the LZO code, I noticed it seemed to
think an int was 8 bytes in size on my 32-bit i386 machine. It isn't, so
why did it think that?

kernel.h says:

#define INT_MAX ((int)(~0U>>1))
#define INT_MIN (-INT_MAX - 1)
#define UINT_MAX (~0U)
#define LONG_MAX ((long)(~0UL>>1))
#define LONG_MIN (-LONG_MAX - 1)
#define ULONG_MAX (~0UL)
#define LLONG_MAX ((long long)(~0ULL>>1))
#define LLONG_MIN (-LLONG_MAX - 1)
#define ULLONG_MAX (~0ULL)

If I try to compile the code fragment below, I see the error:

#define UINT_MAX (~0U)
#if (0xffffffffffffffff == UINT_MAX)
#error argh
#endif

I've tested this on several systems with a variety of gcc versions, with
the same result. I've tried various other ways of testing this, all with
the same conclusion: UINT_MAX is wrong.

The *LONG* definitions above should work, as gcc is forced to a specific
type. Where just 0U is specified, I don't think it works as intended:
gcc seems to automatically widen the type to fit the value and avoid
truncation, ending up with a long long.
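
[Illustration: a minimal standalone sketch, assuming a target where int is
32 bits and long long is 64 bits; the macro name MY_UINT_MAX is only for
the example. It shows the #if test and the equivalent run-time test
disagreeing about the same comparison.]

#include <stdio.h>

#define MY_UINT_MAX (~0U)

/* The preprocessor accepts this as equal even though unsigned int is
   only 32 bits wide on the target. */
#if (0xffffffffffffffff == MY_UINT_MAX)
#define PREPROCESSOR_SAYS_EQUAL 1
#else
#define PREPROCESSOR_SAYS_EQUAL 0
#endif

int main(void)
{
        /* In compiled code ~0U keeps its unsigned int type, so the same
           comparison is false and sizeof reports 4 bytes. */
        printf("preprocessor: %d, run time: %d, sizeof(~0U): %zu\n",
               PREPROCESSOR_SAYS_EQUAL,
               0xffffffffffffffffULL == MY_UINT_MAX,
               sizeof(~0U));
        return 0;
}

On a 32-bit int target this prints "preprocessor: 1, run time: 0,
sizeof(~0U): 4".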

If I change the above to:

/* Handle GCC <= 3.2 */
#if !defined(__INT_MAX__)
#define INT_MAX 0x7fffffff
#else
#define INT_MAX (__INT_MAX__)
#endif
#define INT_MIN (-INT_MAX - 1)
#define UINT_MAX ((INT_MAX<<1)+1)

I get the expected result of an int being 4 bytes. Is there a better
solution? It's probably better than what's there now, but it could break
a machine using gcc 3.2 that doesn't have int size = 4 bytes...

(gcc <= 3.2 doesn't define __INT_MAX__)

Richard



2007-06-05 15:20:27

by John Anthony Kazos Jr.

Subject: Re: Problems (a bug?) with UINT_MAX from kernel.h

> The kernel uses UINT_MAX defined from kernel.h in a variety of places.
>
> While looking at the behaviour of the LZO code, I noticed it seemed to
> think an int was 8 bytes in size on my 32-bit i386 machine. It isn't,
> so why did it think that?
>
> kernel.h says:
>
> #define INT_MAX ((int)(~0U>>1))
> #define INT_MIN (-INT_MAX - 1)
> #define UINT_MAX (~0U)
> #define LONG_MAX ((long)(~0UL>>1))
> #define LONG_MIN (-LONG_MAX - 1)
> #define ULONG_MAX (~0UL)
> #define LLONG_MAX ((long long)(~0ULL>>1))
> #define LLONG_MIN (-LLONG_MAX - 1)
> #define ULLONG_MAX (~0ULL)
>
> If I try to compile the code fragment below, I see the error:
>
> #define UINT_MAX (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
> #error argh
> #endif
>
> I've tested this on several systems with a variety of gcc versions, with
> the same result. I've tried various other ways of testing this, all with
> the same conclusion: UINT_MAX is wrong.
>
> The *LONG* definitions above should work, as gcc is forced to a specific
> type. Where just 0U is specified, I don't think it works as intended:
> gcc seems to automatically widen the type to fit the value and avoid
> truncation, ending up with a long long.
>
> If I change the above to:
>
> /* Handle GCC <= 3.2 */
> #if !defined(__INT_MAX__)
> #define INT_MAX 0x7fffffff
> #else
> #define INT_MAX (__INT_MAX__)
> #endif
> #define INT_MIN (-INT_MAX - 1)
> #define UINT_MAX ((INT_MAX<<1)+1)

For one thing, the C standard specifies that literals are given a signed
type before an unsigned one if they're not suffixed as unsigned in the
first place. Shifting signed values (INT_MAX<<1) is wrong on some
architectures and foolish on all; you would have to do
"(unsigned int)INT_MAX << 1".
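
[Illustration: a hedged sketch of the cast-based definition John suggests,
keeping Richard's 0x7fffffff fallback and therefore still assuming a
32-bit int when __INT_MAX__ is absent.]

/* Fall back to a 32-bit value only when the compiler (gcc <= 3.2) does
   not provide __INT_MAX__. */
#if !defined(__INT_MAX__)
#define INT_MAX 0x7fffffff
#else
#define INT_MAX (__INT_MAX__)
#endif
#define INT_MIN (-INT_MAX - 1)
/* Do the shift in unsigned arithmetic so it cannot overflow a signed int. */
#define UINT_MAX (((unsigned int)INT_MAX << 1) + 1U)

Note that a definition containing a cast is only usable in ordinary C
expressions; the preprocessor does not accept casts in #if, which matters
for the kind of test in the original report.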

> I get the expected result of an int being 4 bytes. Is there a better
> solution? It's probably better than what's there now, but it could break
> a machine using gcc 3.2 that doesn't have int size = 4 bytes...
>
> (gcc <= 3.2 doesn't define __INT_MAX__)

The C standard specifies that a numeric literal constant ending with "U"
may be interpreted as unsigned int, unsigned long int, or unsigned long
long int, in that order, choosing the first in which it will fit. So "~0U"
is correct. If it's not working, I suggest you file a compiler bug/defect
report.
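
[Illustration: a small hosted-C sketch of the U-suffix ladder, assuming
<limits.h> is available and int and long are 32 bits as on i386.]

#include <limits.h>
#include <stdio.h>

int main(void)
{
        /* Each U-suffixed literal takes the first unsigned type that can
           hold it, so ~0U is an unsigned int and equals UINT_MAX. */
        printf("~0U == UINT_MAX:     %d\n", ~0U == UINT_MAX);
        printf("sizeof(4294967295U): %zu\n", sizeof(4294967295U)); /* fits unsigned int */
        printf("sizeof(4294967296U): %zu\n", sizeof(4294967296U)); /* promoted to a wider type */
        return 0;
}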

2007-06-05 15:58:00

by Andreas Schwab

Subject: Re: Problems (a bug?) with UINT_MAX from kernel.h

Richard Purdie <[email protected]> writes:

> If I try to compile the code fragment below, I see the error:
>
> #define UINT_MAX (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
> #error argh
> #endif

The preprocessor computes all expressions with the largest available
range. It does not know anything about types.
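
[Illustration: a sketch of the same point, assuming a 32-bit int. In #if
both operands are evaluated in the preprocessor's widest unsigned type, so
the U and ULL spellings become indistinguishable there, while the compiler
still tells them apart.]

#include <stdio.h>

/* To the preprocessor, ~0U and ~0ULL are the same all-ones value. */
#if (~0U == ~0ULL)
#define CPP_SAYS_EQUAL 1
#else
#define CPP_SAYS_EQUAL 0
#endif

int main(void)
{
        /* The compiler zero-extends the 32-bit ~0U before comparing, so
           the values differ when int is narrower than long long. */
        printf("preprocessor: %d, compiler: %d\n",
               CPP_SAYS_EQUAL, ~0U == ~0ULL);
        return 0;
}

On a 32-bit int target this prints "preprocessor: 1, compiler: 0".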

Andreas.

--
Andreas Schwab, SuSE Labs, [email protected]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."

2007-06-05 18:36:03

by H. Peter Anvin

Subject: Re: Problems (a bug?) with UINT_MAX from kernel.h

Richard Purdie wrote:
> The kernel uses UINT_MAX defined from kernel.h in a variety of places.
>
> While looking at the behaviour of the LZO code, I noticed it seemed to
> think an int was 8 bytes in size on my 32-bit i386 machine. It isn't,
> so why did it think that?
>
> kernel.h says:
>
> #define INT_MAX ((int)(~0U>>1))
> #define INT_MIN (-INT_MAX - 1)
> #define UINT_MAX (~0U)
> #define LONG_MAX ((long)(~0UL>>1))
> #define LONG_MIN (-LONG_MAX - 1)
> #define ULONG_MAX (~0UL)
> #define LLONG_MAX ((long long)(~0ULL>>1))
> #define LLONG_MIN (-LLONG_MAX - 1)
> #define ULLONG_MAX (~0ULL)
>
> If I try to compile the code fragment below, I see the error:
>
> #define UINT_MAX (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
> #error argh
> #endif
>
> I've tested this on several systems with a variety of gcc versions, with
> the same result. I've tried various other ways of testing this, all with
> the same conclusion: UINT_MAX is wrong.
>

C99 states that all arithmetic in the preprocessor is done in
(u)intmax_t, regardless of prefixes or suffixes.
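
[Illustration: because #if arithmetic is done in (u)intmax_t, width checks
are better expressed on real C expressions. A hedged sketch, assuming a
C11 compiler for _Static_assert; older code would use a negative-array-size
trick such as the kernel's BUILD_BUG_ON.]

#include <limits.h>

/* Outside the preprocessor, ~0U keeps its unsigned int type, so these
   checks test what the #if version could not. */
_Static_assert(~0U == UINT_MAX, "~0U is the unsigned int maximum");
_Static_assert(sizeof(int) == 4, "this build assumes a 4-byte int");

int main(void)
{
        return 0;
}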

-hpa