Hello,
before a standard was set, every single OS had to come up with its
own fancy fixed-size type definitions such as DWORD, ULONG, u32,
CARD32, u_int32_t and so on.
Since C99, the C language has had a standard set of machine-independent
types that can be used for portable fixed-width declarations.
Getting rid of all non-ISO types from kernel code could be a
desirable long-term goal. Besides the inexplicable goodness
of standards compliance, my favourite argument is that not
depending on custom definitions makes copying code from/to
other projects a little easier.
Ok, "int32_t" is a little more typing than "s32_t", but in
exchange you get it syntax-highlighted in vim like the built-in
types ;-)
I suggest a soft approach: trying to use C99 types as much
as possible for new code and only converting old code to
C99 when it's not too much trouble.
I hope it doesn't turn into an endless flame war... This is
just a polite suggestion.
--
// Bernardo Innocenti - Develer S.r.l., R&D dept.
\X/ http://www.develer.com/
Please don't send Word attachments - http://www.gnu.org/philosophy/no-word-attachments.html
Bernardo Innocenti wrote:
> Hello,
hi,
> Since C99, the C language has had a standard set of machine-independent
> types that can be used for portable fixed-width declarations.
>
> Getting rid of all non-ISO types from kernel code could be a
> desirable long-term goal. Besides the inexplicable goodness
> of standards compliance, my favourite argument is that not
> depending on custom definitions makes copying code from/to
> other projects a little easier.
The Alpha user-space headers define uint64_t as unsigned long,
while include/asm-alpha/types.h defines it as unsigned long long.
Using a different definition (if that's even possible) would be
confusing. Using the same definition as user space means
that code like:
uint64_t u;
printk("%lu", u);
will not compile cleanly on alpha. This problem is solved in C99
by the PRIu64-style format-specifier macros, though I'm not a
great fan of that idea.
> Ok, "int32_t" is a little more typing than "s32_t", but in
> exchange you get it syntax-highlighted in vim like the built-in
> types ;-)
surely vim allows you to define your own set of types?
regards,
Philippe Elie
On Sunday 06 July 2003 14:23, Philippe Elie wrote:
> The Alpha user-space headers define uint64_t as unsigned long,
> while include/asm-alpha/types.h defines it as unsigned long long.
Why is that? Isn't uint64_t supposed to _always_ be a 64-bit
unsigned integer? Either the kernel or user space might
be doing the wrong thing...
I've Cc'd the Alpha maintainer to make him aware of this
problem.
> Using a different definition (if that's even possible) would be
> confusing. Using the same definition as user space means
> that code like:
>
> uint64_t u;
> printk("%lu", u);
>
> will not compile cleanly on alpha. This problem is solved in C99
> by the PRIu64-style format-specifier macros, though I'm not a
> great fan of that idea.
This is ugly, but there is no way around it. No matter what
typedefs you're using, C99 or not, printf size specifiers are
always bound to plain C types, whose sizes vary from
platform to platform.
> surely vim allows you to define your own set of types?
Yeah, but not if you're lazy ;-)
--
// Bernardo Innocenti - Develer S.r.l., R&D dept.
\X/ http://www.develer.com/
Please don't send Word attachments - http://www.gnu.org/philosophy/no-word-attachments.html
On Sun, Jul 06, 2003 at 07:37:26PM +0200, Bernardo Innocenti wrote:
> On Sunday 06 July 2003 14:23, Philippe Elie wrote:
>
> > The Alpha user-space headers define uint64_t as unsigned long,
> > while include/asm-alpha/types.h defines it as unsigned long long.
>
> Why is that? Isn't uint64_t supposed to _always_ be a 64-bit
> unsigned integer? Either the kernel or user space might
> be doing the wrong thing...
>
> I've Cc'd the Alpha maintainer to make him aware of this
> problem.
I suppose both 'unsigned long' and 'unsigned long long' are 64-bit
entities on the Alpha (which is a 64-bit architecture).
--
Vojtech Pavlik
SuSE Labs, SuSE CR
Vojtech Pavlik writes:
> On Sun, Jul 06, 2003 at 07:37:26PM +0200, Bernardo Innocenti wrote:
>> On Sunday 06 July 2003 14:23, Philippe Elie wrote:
>>> The Alpha user-space headers define uint64_t as unsigned long,
>>> while include/asm-alpha/types.h defines it as unsigned long long.
>>
>> Why is that? Isn't uint64_t supposed to _always_ be a 64-bit
>> unsigned integer? Either the kernel or user space might
>> be doing the wrong thing...
>>
>> I've Cc'd the Alpha maintainer to make him aware of this
>> problem.
>
> I suppose both 'unsigned long' and 'unsigned long long'
> are 64-bit entities on the Alpha (which is a 64-bit
> architecture).
Sure, both are "correct", but there would be a lot less
pain and suffering in the world if "unsigned long long"
were used for 64-bit. It ought to be at least 40 years
before 128-bit types begin to matter. In the Linux world,
we can consider "long long" to be 64-bit, "int" to be
32-bit, and "long" to be the same size as a pointer.
Then we can ditch the nasty casts:
sprintf(foo, "%llu", (unsigned long long)bar);
This leaves only Win64, Win16, DOS, and ELKS out in
the cold. As if we should care, for the kernel & glibc!
Bernardo Innocenti wrote:
> On Sunday 06 July 2003 14:23, Philippe Elie wrote:
> > The Alpha user-space headers define uint64_t as unsigned long,
> > while include/asm-alpha/types.h defines it as unsigned long long.
>
> Why is that? Isn't uint64_t supposed to _always_ be a 64-bit
> unsigned integer? Either the kernel or user space might
> be doing the wrong thing...
uint64_t is always a 64-bit type, and in the case given the compiler
emits a warning but the code runs ok.
The problem is that a 64-bit "long" and a 64-bit "long long" are
distinct types with the same representation. Which means they are
mostly interchangeable, with occasional C weirdness.
-- Jamie
On Sun, 06 Jul 2003, Albert Cahalan wrote:
> Sure, both are "correct", but there would be a lot less
> pain and suffering in the world if "unsigned long long"
> were used for 64-bit.
What if unsigned long long is 96 bits wide? Or 128?
> It ought to be at least 40 years
> before 128-bit types begin to matter.
Yup, and an 8-bit CPU and 640 kB of RAM ought to be enough for...
nevermind.
> In the Linux world,
> we can consider "long long" to be 64-bit, "int" to be
> 32-bit, and "long" to be the same size as a pointer.
> Then we can ditch the nasty casts:
> sprintf(foo, "%llu", (unsigned long long)bar);
Speaking of shifting forward to standards:
unsigned char foo = 42;
char bar[42];
sprintf(bar, "%ju", (uintmax_t)foo); // see IEEE Std 1003.1-2001
If that's too ugly, write your own [u]intmax_t-to-char[] converter; then
only the stack is nasty if uintmax_t is 128 bits wide and you're
printing an array of uint8_t. :-P
--
Matthias Andree
Matthias Andree writes:
> On Sun, 06 Jul 2003, Albert Cahalan wrote:
>> Sure, both are "correct", but there would be a lot less
>> pain and suffering in the world if "unsigned long long"
>> were used for 64-bit.
>
> What if unsigned long long is 96 bits wide? Or 128?
I think you're trolling, but just in case not...
The days of non-power-of-two word sizes are
gone for normal computing. Sign-magnitude and
ones' complement are dead too. Float is IEEE
format, possibly skipping a few costly features.
Nobody is going to go back to the old way.
It's too bad the C99 committee didn't have the
guts to make this official.
As for 128-bit...
>> It ought to be at least 40 years
>> before 128-bit types begin to matter.
>
> Yup, and an 8-bit CPU and 640 kB of RAM ought to be enough for...
>
> nevermind.
There's a logarithmic/exponential thing going on.
Measuring bits isn't like measuring kB. It's log(kB),
which interacts with Moore's law to give a plain
linear need for bits. It took about 20 years to eat
through the extra bits we got with 32-bit CPUs. Now
we have twice as many bits to eat through. So that's
40 years right there. People will make do for much
longer though; notice that 8-bit was never comfy
while 32-bit was.
>> In the Linux world,
>> we can consider "long long" to be 64-bit, "int" to be
>> 32-bit, and "long" to be the same size as a pointer.
>> Then we can ditch the nasty casts:
>> sprintf(foo, "%llu", (unsigned long long)bar);
>
> Speaking of shifting forward to standards:
>
> unsigned char foo = 42;
> char bar[42];
> sprintf(bar, "%ju", (uintmax_t)foo); // see IEEE Std 1003.1-2001
>
> If that's too ugly, write your own [u]intmax_t-to-char[]
> converter; then only the stack is nasty if uintmax_t is 128
> bits wide and you're printing an array of uint8_t. :-P
Yes, that is too ugly. It's idealistic code.
Readability matters more than worrying about
something which won't happen for over 40 years,
and won't cause Y2K-style problems even then.
If I live that long, I'll need employment anyway.
Perfection is the enemy of good. In practice,
there is a difference between theory and practice.
Etc.
On Mon, 07 Jul 2003, Albert Cahalan wrote:
> > Speaking of shifting forward to standards:
> >
> > unsigned char foo = 42;
> > char bar[42];
> > sprintf(bar, "%ju", (uintmax_t)foo); // see IEEE Std 1003.1-2001
> >
> > If that's too ugly, write your own [u]intmax_t-to-char[]
> > converter; then only the stack is nasty if uintmax_t is 128
> > bits wide and you're printing an array of uint8_t. :-P
>
> Yes, that is too ugly. It's idealistic code.
> Readability matters more than worrying about
> something which won't happen for over 40 years,
> and won't cause Y2K-style problems even then.
sprintf doesn't lend itself too well to readability anyway, and we have
these crutches in gcc to check argument types and all that, not to speak
of %n and other time bombs.
--
Matthias Andree
On Llu, 2003-07-07 at 13:01, Albert Cahalan wrote:
> The days of non-power-of-two word sizes are
> gone for normal computing. Sign-magnitude and
> ones' complement are dead too. Float is IEEE
> format, possibly skipping a few costly features.
> Nobody is going to go back to the old way.
>
> It's too bad the C99 committee didn't have the
> guts to make this official.
The C99 people have to handle non-normal computing too. C
has lots of little quirks (like the rules about pointers one
past the end of an array) because of this.