Is there any plan to put a singly-linked list implementation into the kernel? I
mean, sure, it's simple, but we've got the doubly-linked one there...
It's been brought up periodically, but nothing seems to have come of it.
Would this be welcomed?
Chris
Hi, Chris Friesen wrote:
> Is there any plan to put a singly-linked list implementation into the kernel? I
> mean, sure, it's simple, but we've got the doubly-linked one there...
>
> It's been brought up periodically, but nothing seems to have come of it.
>
What would you use one for? Just putting stuff in the kernel because it's
not there yet is nonsense.
--
Matthias Urlichs | {M:U} IT Design @ m-u-it.de | [email protected]
Matthias Urlichs wrote:
> Hi, Chris Friesen wrote:
>>Is there any plan to put a singly-linked list implementation into the kernel? I
>>mean, sure, it's simple, but we've got the doubly-linked one there...
> What would you use one for? Just putting stuff in the kernel because it's
> not there yet is nonsense.
There are various places where there are open-coded singly-linked list
implementations. This would just unify them into a single implementation. On a
previous occasion, someone estimated 42 instances where slist_for_each() could
be used in net/core alone.
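For illustration only, a minimal sketch of what such a generic interface might
look like, loosely modelled on include/linux/list.h. The names slist_head,
SLIST_HEAD_INIT and slist_add are assumptions made up for this example; only
slist_for_each comes from the discussion above, and none of this is an
existing kernel API:

#include <stddef.h>	/* for NULL; in the kernel this would be linux/ headers */

struct slist_head {
	struct slist_head *next;
};

#define SLIST_HEAD_INIT { .next = NULL }

/* Insert 'new' right after 'head' (or after any other node). */
static inline void slist_add(struct slist_head *new, struct slist_head *head)
{
	new->next = head->next;
	head->next = new;
}

/* Walk the list; 'pos' is a struct slist_head * cursor. */
#define slist_for_each(pos, head) \
	for ((pos) = (head)->next; (pos) != NULL; (pos) = (pos)->next)

An entry would embed a struct slist_head and be recovered with the usual
container_of() trick, exactly as with the existing doubly-linked list.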
Chris
Hi, Chris Friesen wrote:
> There are various places where there are open-coded singly-linked list
> implementations. This would just unify them into a single implementation. On a
> previous occasion, someone estimated 42 instances where slist_for_each() could
> be used in net/core alone.
>
So, if that bothers you, you should write a generic SLL, and convert a
couple of existing singly-linked lists to it as a proof-of-concept.
I dunno, though -- open-coding a singly-linked list isn't that much of a
problem; compared to a doubly-linked one, there's simply fewer things that
can go horribly wrong. :-/
--
Matthias Urlichs | {M:U} IT Design @ m-u-it.de | [email protected]
Matthias Urlichs wrote:
> I dunno, though -- open-coding a singly-linked list isn't that much of a
> problem; compared to a doubly-linked one, there's simply fewer things that
> can go horribly wrong. :-/
True. This is likely why it hasn't yet been done.
I wonder how many places use the doubly-linked lists because they're there, not
because they actually need them. If it's significant, there could be some space
savings due to only needing one pointer rather than two.
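As a rough illustration of the space argument (assuming 64-bit pointers for
concreteness; struct slist_link here is hypothetical):

#include <stdio.h>

struct list_link {		/* same shape as the kernel's struct list_head */
	struct list_link *next, *prev;
};

struct slist_link {		/* hypothetical singly-linked equivalent */
	struct slist_link *next;
};

int main(void)
{
	/* On a 64-bit machine this prints "16 8": one pointer (8 bytes)
	 * saved for every structure that is chained on such a list. */
	printf("%zu %zu\n", sizeof(struct list_link), sizeof(struct slist_link));
	return 0;
}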
Chris
Salut,
On Wed, Oct 13, 2004 at 08:25:43PM +0200, Matthias Urlichs wrote:
> I dunno, though -- open-coding a singly-linked list isn't that much of a
> problem; compared to a doubly-linked one, there's simply fewer things that
> can go horribly wrong. :-/
The problem is that
1. you have to use circular lists
2. going forward is O(1), going backward is O(N). This doesn't sound
like a problem, but deleting from lists and the like requires you to
go back in the list (see the sketch below).
I guess that if you have lists that you edit a lot, doubly-linked
lists should be less overhead. However, if you only walk the lists a
lot, both models should perform equally well.
Insertion is faster, but that's the only good news...
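To make point 2 concrete, here is a small sketch in plain C (the node type and
slist_del helper are made up for this example, not kernel code): deleting a
node when you only hold a pointer to it means walking from the head to find
the link that points at it.

struct node {
	struct node *next;
	/* payload here */
};

/* Unlink 'victim' from the list rooted at '*head'.  The walk to find
 * the predecessor link is what makes this O(N); a doubly-linked list
 * does the same unlink in O(1) via its prev pointer. */
static void slist_del(struct node **head, struct node *victim)
{
	struct node **pp = head;

	while (*pp && *pp != victim)
		pp = &(*pp)->next;

	if (*pp)
		*pp = victim->next;
}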
I'm all against them, though. ;)
Tonnerre
Salut,
On Wed, Oct 13, 2004 at 12:55:10PM -0600, Chris Friesen wrote:
> I wonder how many places use the doubly-linked lists because they're there,
> not because they actually need them. If it's significant, there could be
> some space savings due to only needing one pointer rather than two.
Actually, linked lists are mostly used to chain sets of structures, which means
that one pointer more or less doesn't hurt too much, compared to an
O(N) overhead in certain edit actions on a singly-linked list (deleting
or moving a node when you only hold a pointer to that node).
Seems that you can only write small *or* fast code. But that's nothing
new.
Tonnerre
Tonnerre wrote:
> Salut,
>
> On Wed, Oct 13, 2004 at 08:25:43PM +0200, Matthias Urlichs wrote:
>> I dunno, though -- open-coding a singly-linked list isn't that much of a
>> problem; compared to a doubly-linked one, there's simply fewer things
>> that can go horribly wrong. :-/
>
> The problem is that
>
> 1. you have to use circular lists
>
> 2. going forward is O(1), going backward is O(N). This doesn't sound
> like a problem, but deleting from lists and the like requires you to
> go back in the list.
>
> I guess that if you have lists that you edit a lot, doubly-linked
> lists should be less overhead. However, if you only walk the lists a
> lot, both models should perform equally well.
>
> Insertion is faster, but that's the only good news...
>
> I'm all against them, though. ;)
>
> Tonnerre
And of course there's the ever-infamous 'two-half-linked list' (for want of
a better name). If you really need O(1) access in both directions but the
constant factor isn't terribly important, it *is* (surprisingly) possible
to achieve this without any size overhead in the list elements, though the
code and iterator sizes will be a little higher, and the number of "things
that can go horribly wrong" will swell impressively.
Read on only if you a) already know what I have in mind, but want to tell me
I described it wrong, or b) genuinely enjoy disgustingly clever trickery, or
c) the previous, plus have some specific need to make an uncorrupted
programmer's head explode. Do NOT read on if you are intending to implement
what I describe, unless you have a damned good reason not to use a sane
doubly-linked list :-)
What you do is take advantage of the fact that when iterating, it's usually
pretty straightforward to make your 'iterator' keep track of the
previous node as well as the current one. Then, instead of storing a 'next'
pointer or a 'previous' pointer in the node, you store the two XORed together
(or choose your favorite invertible mixing function; there are many viable
choices). Then,
when you are at a node, you can recover the pointer for whichever direction
you are travelling in by XORing the mix from the node with your stored
'previous' pointer, thus giving you whichever neighbor you didn't already
know. Then you can move to the next node (prev=current, current=neighbor)
and do it again. Or just swap prev and current if you want to go the other
direction.
So, you end up with:
 - O(1) forward and reverse iteration
 - only 1 pointer of overhead per node
but you pay for it with:
 - 2 pointers of size for a moveable iterator pointing to a node
 - an extra fetch (though probably still in a register if you're in a tight
   loop) and an XOR for every step along the chain.
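For the curious, a minimal self-contained sketch of the trick in ordinary C
(the node layout and helper names here are invented purely for illustration):

#include <stdint.h>
#include <stdio.h>

struct xnode {
	int value;
	uintptr_t link;		/* (uintptr_t)prev ^ (uintptr_t)next */
};

/* Given the node we arrived from and the current node, recover the
 * neighbour in the direction of travel. */
static struct xnode *xstep(struct xnode *prev, struct xnode *cur)
{
	return (struct xnode *)((uintptr_t)prev ^ cur->link);
}

int main(void)
{
	struct xnode a = { 1, 0 }, b = { 2, 0 }, c = { 3, 0 };
	struct xnode *prev = NULL, *cur = &a;

	/* Build a <-> b <-> c; the list ends where the recovered
	 * neighbour comes out as NULL (encoded as 0 in the XOR). */
	a.link = 0 ^ (uintptr_t)&b;
	b.link = (uintptr_t)&a ^ (uintptr_t)&c;
	c.link = (uintptr_t)&b ^ 0;

	/* Forward walk: the 'iterator' is the (prev, cur) pair. */
	while (cur) {
		struct xnode *next = xstep(prev, cur);

		printf("%d\n", cur->value);
		prev = cur;
		cur = next;
	}
	return 0;
}

Swapping prev and cur at any point turns the same loop into a walk in the
opposite direction, which is the O(1) reverse iteration claimed above.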
On Wed, 13 Oct 2004 23:18:10 -0500, Kevin Puetz <[email protected]> wrote:
>
> And of course there's the ever-infamous 'two-half-linked list' (for want of
> a better name). If you really need O(1) access in both directions but the
> constant factor isn't terribly important, it *is* (surprisingly) possible
> to achieve this without any size overhead in the list elements, [...]
>
Some links for the intrigued ones:
http://www.google.com/search?q=xor+linked+list
http://www.fact-index.com/x/xo/xor_linked_list.html
http://www.greatsnakes.com/savannah/xlist.asp
--
Greetz, Antonio Vargas