Is there any feedback on this point?
Thank you
./Jerry
On Sun, 1 Jul 2007 08:49:37 -0400 (EDT)
"Robert P. J. Day" <[email protected]> wrote:
>
> prompted by the earlier post on "volatile"s, is there a reason that
> most atomic_t typedefs use volatile int's, while the rest don't?
>
> $ grep "typedef.*struct" $(find . -name atomic.h)
> ./include/asm-v850/atomic.h:typedef struct { int counter; } atomic_t;
> ./include/asm-mips/atomic.h:typedef struct { volatile int counter; } atomic_t;
> ./include/asm-mips/atomic.h:typedef struct { volatile long counter; } atomic64_t;
> ...
>
> etc, etc. just curious.
>
> rday
> --
> ========================================================================
> Robert P. J. Day
> Linux Consulting, Training and Annoying Kernel Pedantry
> Waterloo, Ontario, CANADA
>
> http://fsdev.net/wiki/index.php?title=Main_Page
> ========================================================================
Jerry Jiang wrote:
> Is there some feedback on this point ?
>
> [...]
>
>> prompted by the earlier post on "volatile"s, is there a reason that
>> most atomic_t typedefs use volatile int's, while the rest don't?
>>
>> [...]
If your architecture doesn't support SMP, the volatile keyword doesn't do
anything except add a useless memory fetch. Also, some SMP architectures (i386,
x86_64, s390) provide sufficiently strong guarantees about memory access
ordering that it's not necessary as long as you're using the appropriate
locked/atomic instructions in the atomic operations.
-- Chris
Chris Snook wrote:
> If your architecture doesn't support SMP, the volatile keyword doesn't
> do anything except add a useless memory fetch.
I was under the impression that there were other cases as well
(interrupt handlers, for instance) where the value could be modified
"behind the back" of the current code.
It seems like this would fall more into the case of the arch providing
guarantees when using locked/atomic access rather than anything
SMP-related, no?
Chris
Chris Friesen wrote:
> Chris Snook wrote:
>
>> If your architecture doesn't support SMP, the volatile keyword doesn't
>> do anything except add a useless memory fetch.
>
> I was under the impression that there were other cases as well
> (interrupt handlers, for instance) where the value could be modified
> "behind the back" of the current code.
When you're accessing data that could be modified by an interrupt handler, you
generally use a function that calls arch-specific inline assembler to explicitly
fetch it from memory.
> It seems like this would fall more into the case of the arch providing
> guarantees when using locked/atomic access rather than anything
> SMP-related, no?.
But if you're not using SMP, the only way you get a race condition is if your
compiler is reordering instructions that have side effects which are invisible
to the compiler. This can happen with MMIO registers, but it's not an issue
with an atomic_t we're declaring in real memory.
-- Chris
Chris Snook wrote:
> But if you're not using SMP, the only way you get a race condition is if
> your compiler is reordering instructions that have side effects which
> are invisible to the compiler. This can happen with MMIO registers, but
> it's not an issue with an atomic_t we're declaring in real memory.
I refer back to the interrupt handler case. Suppose we have:
while(!atomic_read(flag))
continue;
where flag is an atomic_t that is set in an interrupt handler, the
volatile may be necessary on some architectures to force the compiler to
re-read "flag" each time through the loop.
Without the "volatile", the compiler could be perfectly within its
rights to evaluate "flag" once and create an infinite loop.
Now I'm not trying to say that we should explicitly use "volatile" in
common code, but that it is possible that it is required within the
arch-specific atomic_t accessors even on uniprocessor systems.
Chris
Chris Friesen wrote:
> Chris Snook wrote:
>> [...]
>
> I refer back to the interrupt handler case. Suppose we have:
>
> while(!atomic_read(flag))
> continue;
>
> where flag is an atomic_t that is set in an interrupt handler, the
> volatile may be necessary on some architectures to force the compiler to
> re-read "flag" each time through the loop.
>
> Without the "volatile", the compiler could be perfectly within its
> rights to evaluate "flag" once and create an infinite loop.
>
> Now I'm not trying to say that we should explictly use "volatile" in
> common code, but that it is possible that it is required within the
> arch-specific atomic_t accessors even on uniprocessor systems.
>
> Chris
That's why we define atomic_read like so:
#define atomic_read(v) ((v)->counter)
This avoids the aliasing problem, because the compiler must de-reference the
pointer every time, which requires a memory fetch. This is usually fast thanks
to caching, and hardware cache invalidation enforces correctness when it does
change.
Chris Snook wrote:
> That's why we define atomic_read like so:
>
> #define atomic_read(v) ((v)->counter)
>
> This avoids the aliasing problem, because the compiler must de-reference
> the pointer every time, which requires a memory fetch.
Can you guarantee that the pointer dereference cannot be optimised away
on any architecture? Without other restrictions, a sufficiently
intelligent optimiser could notice that the address of v doesn't change
in the loop and the destination is never written within the loop, so the
read could be hoisted out of the loop.
Even now, powerpc (as an example) defines atomic_t as:
typedef struct { volatile int counter; } atomic_t
That volatile is there precisely to force the compiler to dereference it
every single time.
Chris
Chris Friesen wrote:
> Chris Snook wrote:
>
>> That's why we define atomic_read like so:
>>
>> #define atomic_read(v) ((v)->counter)
>>
>> This avoids the aliasing problem, because the compiler must
>> de-reference the pointer every time, which requires a memory fetch.
>
> Can you guarantee that the pointer dereference cannot be optimised away
> on any architecture? Without other restrictions, a suficiently
> intelligent optimiser could notice that the address of v doesn't change
> in the loop and the destination is never written within the loop, so the
> read could be hoisted out of the loop.
That would be a compiler bug.
> Even now, powerpc (as an example) defines atomic_t as:
>
> typedef struct { volatile int counter; } atomic_t
>
>
> That volatile is there precisely to force the compiler to dereference it
> every single time.
On most superscalar architectures, including powerpc, multiple instructions can
be in flight simultaneously, potentially even reading and writing the same data.
When the compiler detects data dependencies within a thread of execution, it
will do the right thing. Putting the volatile keyword in here instructs the
compiler to serialize accesses to this data even if it does not detect dependencies.
It's worth noting that all of the SMP architectures which lack the volatile
keyword in their atomic_t definition inherit memory access semantics from ISAs
that predate the advent of heavily-pipelined superscalar design. i386 and
x86_64 get theirs from at least as far back as the 8086. I believe s390(x)
inherits this from the s/370 ISA. These ISAs assume strictly serialized memory
access, and anything binary-compatible with them must enforce this in hardware,
even at the expense of performance. Modern ISAs that lack legacy baggage do
away with this guarantee, putting the burden on the compiler to enforce
serialization. When the compiler can't detect that it's needed, we use volatile
to inform it explicitly.
-- Chris
On Aug 7 2007 15:38, Chris Friesen wrote:
> Even now, powerpc (as an example) defines atomic_t as:
>
> typedef struct { volatile int counter; } atomic_t
>
>
> That volatile is there precisely to force the compiler to dereference it every
> single time.
Actually, the dereference will be done once (or more often if registers
are short or the compiler does not feel like keeping it around),
and the read from memory will be done on every iteration ;-)
Jan
On Tue, 2007-08-07 at 15:38 -0600, Chris Friesen wrote:
> Chris Snook wrote:
>
> > That's why we define atomic_read like so:
> >
> > #define atomic_read(v) ((v)->counter)
> >
> > This avoids the aliasing problem, because the compiler must de-reference
> > the pointer every time, which requires a memory fetch.
>
> Can you guarantee that the pointer dereference cannot be optimised away
> on any architecture? Without other restrictions, a suficiently
> intelligent optimiser could notice that the address of v doesn't change
> in the loop and the destination is never written within the loop, so the
> read could be hoisted out of the loop.
>
> Even now, powerpc (as an example) defines atomic_t as:
>
> typedef struct { volatile int counter; } atomic_t
>
>
> That volatile is there precisely to force the compiler to dereference it
> every single time.
I just tried this with GCC 4.2 on x86_64 because I was curious.
struct counter_t { volatile int counter; } test;
struct counter_t *tptr = &test;

int main() {
        int i;

        tptr->counter = 0;
        i = 0;
        while (tptr->counter < 100) {
                i++;
        }
        return 0;
}
$ gcc -O3 -S t.c
a snippet of t.s:
main:
.LFB2:
movq tptr(%rip), %rdx
movl $0, (%rdx)
.p2align 4,,7
.L2:
movl (%rdx), %eax
cmpl $99, %eax
jle .L2
Now with the volatile removed:
main:
.LFB2:
movq tptr(%rip), %rax
movl $0, (%rax)
.L2:
jmp .L2
If the compiler can see it clearly, it will optimize out the load
without the volatile.
--
Zan Lynx <[email protected]>
Chris Snook wrote:
> Chris Friesen wrote:
>> Without other restrictions, a suficiently
>> intelligent optimiser could notice that the address of v doesn't
>> change in the loop and the destination is never written within the
>> loop, so the read could be hoisted out of the loop.
> That would be a compiler bug.
Could you elaborate? From the point of view of the compiler, it "knows"
that the variable doesn't change inside the loop.
In the "volatile considered evil" discussion in May of this year, Alan
Cox explicitly mentioned the implementation of atomic primitives as a
case where "volatile" might be required.
> On most superscalar architectures, including powerpc, multiple
> instructions can be in flight simultaneously, potentially even reading
> and writing the same data. When the compiler detects data dependencies
> within a thread of execution, it will do the right thing.
In the example I gave, as far as the compiler can detect there are no
dependencies. The code that changes the value is in a different
compilation unit.
> Modern ISAs that lack legacy baggage do away
> with this guarantee, putting the burden on the compiler to enforce
> serialization. When the compiler can't detect that it's needed, we use
> volatile to inform it explicitly.
I certainly agree with this statement.
This leads logically to the question of whether there are cases where
the compiler cannot detect that serialization is needed when
implementing atomic_t accessor functions. Previously in this thread
you've said that there are not, while I've attempted to show that it is
possible.
Chris
Jan Engelhardt wrote:
> On Aug 7 2007 15:38, Chris Friesen wrote:
>>That volatile is there precisely to force the compiler to dereference it every
>>single time.
> Actually, the dereference will be done once (or more often if registers
> are short or the compiler does not feel like keeping it around),
> and the read from memory will be done on every iteration ;-)
My bad. You are, of course, correct. :)
Chris
Zan Lynx wrote:
> On Tue, 2007-08-07 at 15:38 -0600, Chris Friesen wrote:
>> [...]
>
> I just tried this with GCC 4.2 on x86_64 because I was curious.
>
> [test program and generated assembly snipped; see above]
>
> If the compiler can see it clearly, it will optimize out the load
> without the volatile.
This is not a problem, since indirect references will cause the CPU to fetch the
data from memory/cache anyway.
-- Chris
On Tue, 07 Aug 2007 16:32:23 -0400
Chris Snook <[email protected]> wrote:
> > It seems like this would fall more into the case of the arch providing
> > guarantees when using locked/atomic access rather than anything
> > SMP-related, no?.
>
> But if you're not using SMP, the only way you get a race condition is if your
> compiler is reordering instructions that have side effects which are invisible
> to the compiler. This can happen with MMIO registers, but it's not an issue
> with an atomic_t we're declaring in real memory.
>
On non-SMP systems, the compiler may still reorder instructions as it
sees fit, but the C standard informally guarantees that all operations
on volatile data are executed in the sequence in which they appear in
the source code, right?
So no reordering happens with volatile, right?
-- Jerry
> -- Chris
Chris Snook wrote:
> This is not a problem, since indirect references will cause the CPU to
> fetch the data from memory/cache anyway.
Isn't Zan's sample code (that shows the problem) already using indirect
references?
Chris
Jerry Jiang wrote:
> On Tue, 07 Aug 2007 16:32:23 -0400
> Chris Snook <[email protected]> wrote:
>
>>> It seems like this would fall more into the case of the arch providing
>>> guarantees when using locked/atomic access rather than anything
>>> SMP-related, no?.
>> But if you're not using SMP, the only way you get a race condition is if your
>> compiler is reordering instructions that have side effects which are invisible
>> to the compiler. This can happen with MMIO registers, but it's not an issue
>> with an atomic_t we're declaring in real memory.
>>
>
> On non-SMP systems, the compiler may still reorder instructions as it
> sees fit, but the C standard informally guarantees that all operations
> on volatile data are executed in the sequence in which they appear in
> the source code, right?
>
> So no reordering happens with volatile, right?
Plenty of reordering happens with volatile, but on VLIW, EPIC, and
similar architectures, it ensures that accesses to the variable in
question will not be compiled into instruction slots that can execute
simultaneously.
-- Chris
Chris Friesen wrote:
> Chris Snook wrote:
>
>> This is not a problem, since indirect references will cause the CPU to
>> fetch the data from memory/cache anyway.
>
> Isn't Zan's sample code (that shows the problem) already using indirect
> references?
Yeah, I misinterpreted his conclusion. I thought about this for a
while, and realized that it's perfectly legal for the compiler to re-use
a value obtained from atomic_read. All that matters is that the read
itself was atomic. The use (or non-use) of the volatile keyword is
really more relevant to the other atomic operations. If you want to
guarantee a re-read from memory, use barrier(). This, incidentally,
uses volatile under the hood.
-- Chris
On Wed, 08 Aug 2007 02:47:53 -0400
Chris Snook <[email protected]> wrote:
> Chris Friesen wrote:
> > [...]
>
> Yeah, I misinterpreted his conclusion. I thought about this for a
> while, and realized that it's perfectly legal for the compiler to re-use
> a value obtained from atomic_read. All that matters is that the read
> itself was atomic. The use (or non-use) of the volatile keyword is
Sorry, I don't quite follow. Could you explain in more detail?
-- Jerry
> really more relevant to the other atomic operations. If you want to
> guarantee a re-read from memory, use barrier(). This, incidentally,
> uses volatile under the hood.
>
> -- Chris
On Wed, 08 Aug 2007 02:47:53 -0400
Chris Snook <[email protected]> wrote:
> Chris Friesen wrote:
> > [...]
>
> Yeah, I misinterpreted his conclusion. I thought about this for a
> while, and realized that it's perfectly legal for the compiler to re-use
> a value obtained from atomic_read. All that matters is that the read
> itself was atomic. The use (or non-use) of the volatile keyword is
> really more relevant to the other atomic operations. If you want to
> guarantee a re-read from memory, use barrier(). This, incidentally,
> uses volatile under the hood.
>
So for example, without volatile
int a = read_atomic(v);
int b = read_atomic(v);
the compiler may optimize it to b = a, but with volatile it will be
forced to fetch v's value from memory again.
So, coming back to our initial question,
include/asm-v850/atomic.h:typedef struct { int counter; } atomic_t;
Why is it right without volatile?
-- Jerry
> -- Chris
Jerry Jiang wrote:
> On Wed, 08 Aug 2007 02:47:53 -0400
> Chris Snook <[email protected]> wrote:
>
>> Chris Friesen wrote:
>>> [...]
>> Yeah, I misinterpreted his conclusion. I thought about this for a
>> while, and realized that it's perfectly legal for the compiler to re-use
>> a value obtained from atomic_read. All that matters is that the read
>> itself was atomic. The use (or non-use) of the volatile keyword is
>> really more relevant to the other atomic operations. If you want to
>> guarantee a re-read from memory, use barrier(). This, incidentally,
>> uses volatile under the hood.
>>
>
>
> So for example, without volatile
>
> int a = read_atomic(v);
> int b = read_atomic(v);
>
> the compiler will optimize it as b = a,
> But with volatile, it will be forced to fetch v's value from memory
> again.
>
> So, come back our initial question,
>
> include/asm-v850/atomic.h:typedef struct { int counter; } atomic_t;
>
> Why is it right without volatile?
Because atomic_t doesn't promise a memory fetch every time. It merely
promises that any atomic_* operations will, in fact, be atomic. For
example, posted today:
http://lkml.org/lkml/2007/8/8/122
-- Chris
On Wed, 8 Aug 2007, Chris Snook wrote:
> Jerry Jiang wrote:
> > On Wed, 08 Aug 2007 02:47:53 -0400
> > Chris Snook <[email protected]> wrote:
> >
> > > [...]
> >
> > So for example, without volatile
> >
> > int a = read_atomic(v);
> > int b = read_atomic(v);
> >
> > the compiler will optimize it as b = a, But with volatile, it will be forced
> > to fetch v's value from memory
> > again.
> >
> > So, come back our initial question,
> > include/asm-v850/atomic.h:typedef struct { int counter; } atomic_t;
> >
> > Why is it right without volatile?
>
> Because atomic_t doesn't promise a memory fetch every time. It merely
> promises that any atomic_* operations will, in fact, be atomic. For example,
> posted today:
>
> http://lkml.org/lkml/2007/8/8/122
i'm sure that, when this is all done, i'll finally have an answer to
my original question, "why are some atomic_t's not volatile, while
most are?"
i'm almost scared to ask any more questions. :-)
rday
Robert P. J. Day wrote:
> On Wed, 8 Aug 2007, Chris Snook wrote:
>> [...]
>
> i'm sure that, when this is all done, i'll finally have an answer to
> my original question, "why are some atomic_t's not volatile, while
> most are?"
>
> i'm almost scared to ask any more questions. :-)
>
> rday
Momentarily I'll be posting a patchset that makes all atomic_t and atomic64_t
declarations non-volatile, and casts them to volatile inside of atomic[64]_read.
This will ensure consistent behavior across all architectures, and is in
keeping with the philosophy that memory reads should be enforced in running
code, not declarations.
I hope you don't mind that we're mooting the question by making the code more
sensible.
-- Chris
On Thu, 9 Aug 2007, Chris Snook wrote:
> Robert P. J. Day wrote:
> > i'm almost scared to ask any more questions. :-)
> >
> > rday
>
> Momentarily I'll be posting a patchset that makes all atomic_t and
> atomic64_t declarations non-volatile, and casts them to volatile
> inside of atomic[64]_read. This will ensure consistent behavior
> across all architectures, and is in keeping with the philosophy that
> memory reads should be enforced in running code, not declarations.
>
> I hope you don't mind that we're mooting the question by making the
> code more sensible.
not at all, but it does bring up the obvious next question -- once all
these definitions are made consistent, is there any reason some of
that content can't be centralized in a single atomic.h header file,
rather than duplicating it across a couple dozen architectures?
surely, after this process, there's going to be some content that's
identical across all arches, no?
rday
On Thu, 9 Aug 2007, Robert P. J. Day wrote:
> On Thu, 9 Aug 2007, Chris Snook wrote:
>
> > [...]
>
> not at all, but it does bring up the obvious next question -- once all
> these definitions are made consistent, is there any reason some of
> that content can't be centralized in a single atomic.h header file,
> rather than duplicating it across a couple dozen architectures?
>
> surely, after this process, there's going to be some content that's
> identical across all arches, no?
>
> rday
whoops, never mind, i just saw that earlier posting on this very
subject.
rday