2001-03-09 07:39:35

by Manoj Sontakke

Subject: quicksort for linked list

Hi
Sorry, these questions do not belong here but I could not find any
better place.

1. Is quicksort on a doubly linked list implemented anywhere? I need it
for sk_buff queues.
2. Is Weighted Round Robin implemented in Linux anywhere?

Thanks in advance.
Manoj


2001-03-09 08:39:36

by Helge Hafting

Subject: Re: quicksort for linked list

Manoj Sontakke wrote:
>
> Hi
> Sorry, these questions do not belong here but I could not find any
> better place.
>
> 1. Is quicksort on a doubly linked list implemented anywhere? I need it
> for sk_buff queues.

I cannot see how the quicksort algorithm could work on a doubly
linked list, as it relies on being able to look
up elements directly as in an array.

You can probably find algorithms for sorting a linked list, but
it won't be quicksort.

You can however quicksort the list _if_ you have room enough for an
additional data structure:

1. Find out how many elements there are (count them if necessary).
2. Allocate a pointer array of this size.
3. Fill the pointer array with pointers to the list members.
4. Quicksort the pointer array.
5. Traverse the pointer array and set the links for each
list member to point to the next/previous element pointed
to by the array. Now you have a sorted linked list!

Steps 1, 2, 3 and 5 are all O(n), cheaper than the O(n lg n)
quicksort in step 4.
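
In code, a rough user-space sketch of the above (untested; it assumes a
node like struct node { struct node *next, *prev; int key; } and the C
library's malloc() and qsort() - in the kernel you would substitute
kmalloc() and your own compare function):

#include <stdlib.h>

struct node {
    struct node *next, *prev;
    int key;
};

static int cmp_nodes(const void *a, const void *b)
{
    const struct node *na = *(const struct node *const *) a;
    const struct node *nb = *(const struct node *const *) b;
    return (na->key > nb->key) - (na->key < nb->key);
}

struct node *sort_list(struct node *head)
{
    size_t n = 0, i;
    struct node *p, **v;

    for (p = head; p; p = p->next)          /* 1. count the elements */
        n++;
    if (n < 2)
        return head;

    v = malloc(n * sizeof(*v));             /* 2. allocate the pointer array */
    if (!v)
        return head;                        /* out of memory: leave list as is */

    for (p = head, i = 0; p; p = p->next)   /* 3. fill it */
        v[i++] = p;

    qsort(v, n, sizeof(*v), cmp_nodes);     /* 4. sort the pointers */

    for (i = 0; i < n; i++) {               /* 5. relink next/prev */
        v[i]->next = (i + 1 < n) ? v[i + 1] : NULL;
        v[i]->prev = (i > 0) ? v[i - 1] : NULL;
    }
    head = v[0];
    free(v);
    return head;
}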


Helge Hafting

2001-03-09 11:11:15

by James R Bruce

Subject: Re: quicksort for linked list


Quicksort works just fine on a linked list, as long as you broaden
your view beyond the common array-based implementations. See
"http://www.cs.cmu.edu/~jbruce/sort.cc" for an example, although I
would recommend using a radix sort for linked lists in most situations
(sorry for the C++, but it was handy...).

On 9-Mar-2001, Helge Hafting ([email protected]) wrote in "Re: quicksort for linked list":
> Manoj Sontakke wrote:
> >
> > Hi
> > Sorry, these questions do not belong here but I could not find any
> > better place.
> >
> > 1. Is quicksort on a doubly linked list implemented anywhere? I need it
> > for sk_buff queues.
>
> I cannot see how the quicksort algorithm could work on a doubly
> linked list, as it relies on being able to look
> up elements directly as in an array.
>
> You can probably find algorithms for sorting a linked list, but
> it won't be quicksort.

It's quicksort as long as you do the pivot/split; the array gymnastics
that most implementations do to avoid updating array elements aren't
really critical to its operation.

> 1. Find out how many elements there are (count them if necessary).
> 2. Allocate a pointer array of this size.
> 3. Fill the pointer array with pointers to the list members.
> 4. Quicksort the pointer array.
> 5. Traverse the pointer array and set the links for each
> list member to point to the next/previous element pointed
> to by the array. Now you have a sorted linked list!

I think a radix sort like the following would work better with about
the same (or less) storage, provided you're comparing ints (this is
for a kernel modification after all, right?). You just need to
determine the number of passes to cover all the bits in the numbers
you want to sort.

- Jim Bruce


#define RADIX_BITS 6
#define RADIX      (1 << RADIX_BITS)
#define RADIX_MASK (RADIX - 1)

struct item *radix_sort(struct item *list, int passes)
// Sort list, largest first
{
    struct item *tbl[RADIX], *p, *pn;
    int slot, shift;
    int i, j;

    // Handle trivial cases
    if (!list || !list->next) return list;

    // Initialize table
    for (j = 0; j < RADIX; j++) tbl[j] = NULL;

    for (i = 0; i < passes; i++) {
        // split list into buckets
        shift = RADIX_BITS * i;
        p = list;

        while (p) {
            pn = p->next;
            slot = ((p->key) >> shift) & RADIX_MASK;
            p->next = tbl[slot];
            tbl[slot] = p;
            p = pn;
        }

        // integrate back into partially ordered list
        list = NULL;
        for (j = 0; j < RADIX; j++) {
            p = tbl[j];
            tbl[j] = NULL;  // clear out table for next pass
            while (p) {
                pn = p->next;
                p->next = list;
                list = p;
                p = pn;
            }
        }
    }

    // fix prev pointers in list
    list->prev = NULL;
    p = list;
    while ((pn = p->next) != NULL) {
        pn->prev = p;
        p = pn;
    }

    return list;
}

2001-03-09 11:48:11

by Alan

Subject: Re: quicksort for linked list

> Quicksort works just fine on a linked list, as long as you broaden
> your view beyond the common array-based implementations. See
> "http://www.cs.cmu.edu/~jbruce/sort.cc" for an example, although I
> would recommend using a radix sort for linked lists in most situations
> (sorry for the C++, but it was handy...).

In a network environment, however, it's not so good. Quicksort has an N^2
worst case and the input is controlled by a potential enemy.

I'm dubious about anyone doing more than simple bucket sorting for packets.

2001-03-09 11:53:32

by Rogier Wolff

Subject: Re: quicksort for linked list

Helge Hafting wrote:
> Manoj Sontakke wrote:
> >
> > Hi
> > Sorry, these questions do not belong here but I could not find any
> > better place.
> >
> > 1. Is quicksort on a doubly linked list implemented anywhere? I need it
> > for sk_buff queues.
>
> I cannot see how the quicksort algorithm could work on a doubly
> linked list, as it relies on being able to look
> up elements directly as in an array.

It took me a few moments to realize, but quicksort is one algorithm
that DOES NOT rely on directly accessing array elements.

qsort (items)
{
    if (numberof (items) <= 1) return;
    pivot = choose_pivot (items);
    for (all items)
        if (curitem < pivot) put on the left of pivot
        else put on the right of pivot
    qsort (items on the left of pivot);
    qsort (items on the right of pivot);
}

All of these operations are easily done on lists, not only on arrays.
Actually, the array implementation has a few tricks to avoid having to
move half the array whenever a single item is scanned. With a list that
is not an issue.

If you know how you choose your pivot, one of the "puts" can be a
no-op. (For example, choose the pivot as the leftmost item. All other
items are already on the right. So "put on the left of pivot" is
"unlink (curitem); relink_to_the_left (pivot, curitem)", but "put on
the right" is "/* nothing to be done */".)
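
In C, an untested sketch of that idea for a NULL-terminated singly
linked list, assuming a hypothetical struct node { struct node *next;
int key; } (the names are made up, this is not existing kernel code):

/* Leftmost item is the pivot; everything else gets relinked into a
 * "less" or "greater-or-equal" list, which are then sorted recursively
 * and concatenated around the pivot. */
struct node *qsort_list(struct node *head)
{
    struct node *pivot, *p, *next, *less = NULL, *geq = NULL, **tail;

    if (!head || !head->next)
        return head;

    pivot = head;
    for (p = head->next; p; p = next) {
        next = p->next;
        if (p->key < pivot->key) {      /* "put on the left of pivot" */
            p->next = less;
            less = p;
        } else {                        /* "put on the right of pivot" */
            p->next = geq;
            geq = p;
        }
    }

    less = qsort_list(less);
    geq = qsort_list(geq);

    pivot->next = geq;                  /* sorted(less), pivot, sorted(geq) */
    if (!less)
        return pivot;
    for (tail = &less; *tail; tail = &(*tail)->next)
        ;
    *tail = pivot;
    return less;
}

(Taking the leftmost item as the pivot is also exactly what makes
already-sorted input hit the quadratic worst case mentioned below.)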

Quicksort however is an algorithm that is recursive. This means that
it can use unbounded amounts of stack -> This is not for the kernel.

Quicksort, however, is an algorithm that is good for large numbers of
elements: for a small set of items the overhead is relatively large.
Is the "normal" case indeed "large sets"?

Quicksort has a very bad "worst case": quadratic sort-time. Are you
sure this won't happen?

Isn't it easier to do "insertion sort": Keep the lists sorted, and
insert the item at the right place when you get the new item.
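
For example (same hypothetical struct node as in the sketch above):

/* O(length) per insertion; building the whole list this way from
 * already-sorted input degenerates to O(N^2) overall. */
void insert_sorted(struct node **headp, struct node *item)
{
    struct node **pp = headp;

    while (*pp && (*pp)->key <= item->key)
        pp = &(*pp)->next;
    item->next = *pp;
    *pp = item;
}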

Roger.

--
** [email protected] ** http://www.BitWizard.nl/ ** +31-15-2137555 **
*-- BitWizard writes Linux device drivers for any device you may have! --*
* There are old pilots, and there are bold pilots.
* There are also old, bald pilots.

2001-03-09 12:09:54

by Thomas Pornin

Subject: Re: quicksort for linked list

In article <[email protected]> you write:
> Quicksort however is an algorithm that is recursive. This means that
> it can use unbounded amounts of stack -> This is not for the kernel.

Maybe a heapsort, then. It is guaranteed O(n*log n), even in the worst
case, and non-recursive. Yet it implies a significantly larger number of
comparisons than quicksort (about twice as many, I think).

Insertion sort will be better anyway for small sets of data (5 or
fewer elements).


--Thomas Pornin

2001-03-09 13:45:35

by James Lewis Nance

Subject: Re: quicksort for linked list

On Fri, Mar 09, 2001 at 01:08:57PM +0530, Manoj Sontakke wrote:
> Hi
> Sorry, these questions do not belong here but I could not find any
> better place.
>
> 1. Is quicksort on a doubly linked list implemented anywhere? I need it
> for sk_buff queues.

I would suggest that you use merge sort. It is ideally suited to sorting
linked lists, and it always has N log N running time. I don't know of an
existing implementation in the kernel sources, but it should be easy to
write one. I did a Google search on "merge sort" "linked list" and it
came up with lots of links. Here is a good one:

http://www.ddj.com/articles/1998/9805/9805p/9805p.htm?topic=java
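
For reference, an untested top-down sketch, assuming a hypothetical
NULL-terminated struct node { struct node *next; int key; }:

/* Split with the slow/fast pointer trick, sort each half, then merge. */
struct node *merge_sort(struct node *head)
{
    struct node *slow, *fast, *second, *merged, **tail;

    if (!head || !head->next)
        return head;

    /* split the list into two halves */
    slow = head;
    fast = head->next;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
    }
    second = slow->next;
    slow->next = NULL;

    head = merge_sort(head);
    second = merge_sort(second);

    /* merge the two sorted halves (<= keeps the sort stable) */
    merged = NULL;
    tail = &merged;
    while (head && second) {
        if (head->key <= second->key) {
            *tail = head;
            head = head->next;
        } else {
            *tail = second;
            second = second->next;
        }
        tail = &(*tail)->next;
    }
    *tail = head ? head : second;
    return merged;
}

This version still recurses, to a depth of about log2(n); a bottom-up
variant can avoid the recursion entirely.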

Hope this helps,

Jim

2001-03-09 18:25:19

by Oliver Xymoron

Subject: Re: quicksort for linked list

On Fri, 9 Mar 2001, Helge Hafting wrote:

> Manoj Sontakke wrote:
> >
> > 1. Is quicksort on a doubly linked list implemented anywhere? I need it
> > for sk_buff queues.
>
> I cannot see how the quicksort algorithm could work on a doubly
> linked list, as it relies on being able to look
> up elements directly as in an array.
>
> You can probably find algorithms for sorting a linked list, but
> it won't be quicksort.

Here ya go (wrote this a few years ago):

// This function is so cool.
template<class T>
void list<T>::qsort(iter l, iter r, cmpfunc *cmp, void *data)
{
    if (l == r) return;

    iter i(l), p(l);

    for (i++; i != r; i++)
        if (cmp(*i, *l, data) < 0)
            i.swap(++p);

    l.swap(p);
    qsort(l, p, cmp, data);
    qsort(++p, r, cmp, data);
}

Iters are essentially list pointers with increment operations. This is a
fairly direct adaptation of the quicksort in K&R, actually.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-03-09 18:40:40

by Oliver Xymoron

Subject: Re: quicksort for linked list

On Fri, 9 Mar 2001, Alan Cox wrote:

> > Quicksort works just fine on a linked list, as long as you broaden
> > your view beyond the common array-based implementations. See
> > "http://www.cs.cmu.edu/~jbruce/sort.cc" for an example, although I
> > would recommend using a radix sort for linked lists in most situations
> > (sorry for the C++, but it was handy...).
>
> In a network environment, however, it's not so good. Quicksort has an N^2
> worst case and the input is controlled by a potential enemy.

It's not too hard to patch that up, e.g. quickersort. N^2 isn't too bad for
short queues anyway, especially considering the complexity of the
alternatives.

> I'm dubious about anyone doing more than simple bucket sorting for packets.

I assume you mean sorting into hash buckets, as opposed to "count the
number of occurrences of each type of element in a narrow range,
discarding the actual element". Most hashes are subvertible too and
probably don't fare any better than N^2.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-03-09 18:54:00

by Oliver Xymoron

Subject: Re: quicksort for linked list

On Fri, 9 Mar 2001, Rogier Wolff wrote:

> Quicksort however is an algorithm that is recursive. This means that
> it can use unbounded amounts of stack -> This is not for the kernel.

It is of course bounded by the input size, but yes, it can use O(n)
additional memory in the worst case. There's no particular reason this
memory has to be on the stack - it's just convenient.

> Isn't it easier to do "insertion sort": Keep the lists sorted, and
> insert the item at the right place when you get the new item.

Assuming you get your items in sorted order, this is also O(N^2).

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-03-09 22:30:02

by Michal Jaegermann

Subject: Re: quicksort for linked list

On Fri, Mar 09, 2001 at 12:52:22PM +0100, Rogier Wolff wrote:
>
> Quicksort however is an algorithm that is recursive. This means that
> it can use unbounded amounts of stack -> This is not for the kernel.

Well, not really in this situation, after a simple modification. It is
trivial to show that using the "shorter interval sorted first" approach one
can bound the amount of extra memory, on the stack or otherwise, by a
rather small number. This assumes that one knows what one is sorting -
which is obviously the case here.

Also, my copy of Reingold, Nievergelt, Deo from 1977 presents a
"non-recursive" variant of quicksort as a kind of "old hat" solution.
One would think that this piece of information would have spread during
those years. :-) It is a simple exercise anyway.

> Quicksort has a very bad "worst case": quadratic sort-time. Are you
> sure this won't happen?

This is a much more serious objection. You can nearly guarantee in the
intended application that somebody will find a way to feed you packets
which will ensure the worst-case behaviour. The same gotcha will
probably kill quite a few other ways of sorting here.

Michal

2001-03-10 16:16:48

by Jerome Vouillon

Subject: Re: quicksort for linked list

Oliver Xymoron <[email protected]> writes:

> On Fri, 9 Mar 2001, Rogier Wolff wrote:
>
> > Quicksort however is an algorithm that is recursive. This means that
> > it can use unbounded amounts of stack -> This is not for the kernel.
>
> It is of course bounded by the input size, but yes, it can use O(n)
> additional memory in the worst case. There's no particular reason this
> memory has to be on the stack - it's just convenient.

You only need O(log n) additional memory if you sort the shorter
sublist before the longer one (and turn the second recursive call
into a loop).
As log n is certainly less than 64, one can even consider that
Quicksort only uses a bounded amount of memory.
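
Concretely, the trick is just to recurse into the smaller part and loop
on the larger one. An illustrative array version (hypothetical code, not
taken from the kernel):

/* Sorts a[lo..hi] inclusive.  Recursing only into the smaller partition
 * keeps the recursion depth bounded by about log2(n). */
void quicksort(int *a, int lo, int hi)
{
    while (lo < hi) {
        int i = lo, j = hi;
        int pivot = a[lo + (hi - lo) / 2];

        while (i <= j) {                  /* Hoare-style partition */
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++; j--;
            }
        }

        if (j - lo < hi - i) {
            quicksort(a, lo, j);          /* smaller part: recurse */
            lo = i;                       /* larger part: loop */
        } else {
            quicksort(a, i, hi);
            hi = j;
        }
    }
}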

-- Jerome

2001-03-10 18:51:35

by Martin Mares

Subject: Re: quicksort for linked list

Hello!

> Well, not really in this situation, after a simple modification. It is
> trivial to show that using the "shorter interval sorted first" approach one
> can bound the amount of extra memory, on the stack or otherwise, by a
> rather small number.

By O(log N) which is in reality a small number :)

Have a nice fortnight
--
Martin `MJ' Mares <[email protected]> <[email protected]> http://atrey.karlin.mff.cuni.cz/~mj/
"Dijkstra probably hates me." -- /usr/src/linux/kernel/sched.c

2001-03-10 19:11:15

by David Wragg

Subject: Re: quicksort for linked list

[email protected] (Rogier Wolff) writes:
> Quicksort however is an algorithm that is recursive. This means that
> it can use unbounded amounts of stack -> This is not for the kernel.

Quicksort for arrays demands a recursive implementation, but for
doubly-linked lists there is a trick that leads to an iterative one.
You can implement Quicksort recursively for singly linked lists, so in
a doubly-linked list you have a spare link in each node while you are
doing the sort. You can hide the stack in those links, so the
implementation doesn't need to be explicitly recursive. At the end of
the sort, the "next" links are correct, and you just have to go
through and fix up the "prev" links.

> Quicksort, however, is an algorithm that is good for large numbers of
> elements: for a small set of items the overhead is relatively large.
> Is the "normal" case indeed "large sets"?

Good implementations of Quicksort actually give up on Quicksort when
the list is short, and use an algorithm that is faster for that case
(measurements are required to find out where the boundary between a
short list and a long list lies). If the full list to be sorted is
short, Quicksort will never be involved. If that happens to be the
common case, then fine.

> Quicksort has a very bad "worst case": quadratic sort-time. Are you
> sure this won't happen?

Introsort avoids this by modifying quicksort to fall back to heapsort
when the recursion gets too deep.

For modern machines, I'm not sure that quicksort on a linked list is
typically much cheaper than mergesort on a linked list. The majority
of the potential cost is likely to be in the pointer chasing involved
in bringing the lists into cache, and that will be the same for both.
Once the list is in cache, how much pointer fiddling you do isn't so
important. For lists that don't fit into cache, the advantages of
mergesort should become even greater if the literature on tape and
disk sorts applies (though multiway merges rather than simple binary
merges would be needed to minimize the impact of memory latency).

Given this, mergesort might be generally preferable to quicksort for
linked lists. But I haven't investigated this idea thoroughly. (The
trick described above for avoiding an explicit stack also works for
mergesort.)

> Isn't it easier to do "insertion sort": Keep the lists sorted, and
> insert the item at the right place when you get the new item.

Easier? Yes. Slower? Yes. Does its being slow matter? Depends on
the context.


David Wragg

2001-03-10 23:56:09

by Michal Jaegermann

Subject: Re: quicksort for linked list

On Sat, Mar 10, 2001 at 07:50:06PM +0100, Martin Mares wrote:
> Hello!
>
> > Well, not really in this situation, after a simple modification. It is
> > trivial to show that using the "shorter interval sorted first" approach one
> > can bound the amount of extra memory, on the stack or otherwise, by a
> > rather small number.
>
> By O(log N) which is in reality a small number :)

Assuming that we sort the full range of 32-bit numbers (pointers on a
32-bit CPU, for example, are numbers of that kind, but usually the range
can be narrowed down quite substantially), then with a bit of careful
programming you need, I think, something like 16 extra 4-byte words, or
maybe even a bit less. I do not remember precisely, as I did this
exercise a long time ago, but even if it is 32, and you need a carefully
constructed example to use them all, I still think that this is not a
huge amount of memory, especially when every element of the list you
are sorting is likely quite a bit bigger.

Exponents are something which grows these numbers pretty fast. :-)

Michal

2001-03-12 19:21:32

by Jamie Lokier

Subject: Re: quicksort for linked list

David Wragg wrote:
> For modern machines, I'm not sure that quicksort on a linked list is
> typically much cheaper than mergesort on a linked list.
...
> For lists that don't fit into cache, the advantages of mergesort
> should become even greater if the literature on tape and disk sorts
> applies (though multiway merges rather than simple binary merges would
> be needed to minimize the impact of memory latency).
...
> Given this, mergesort might be generally preferable to quicksort for
> linked lists. But I haven't investigated this idea thoroughly. (The
> trick described above for avoiding an explicit stack also works for
> mergesort.)

Fwiw, below is a pretty good list mergesort. It takes linear time on
"nearly sorted" lists (i.e. better than classic mergesort), and degrades
to O(n log n) worst case (i.e. better than quicksort). It's
non-recursive and uses a small bounded stack.

enjoy ;-)

-- Jamie

/* A macro to sort linked lists efficiently.
O(n log n) worst case, O(n) on nearly sorted lists.

Copyright (C) 1995, 1996, 1999 Jamie Lokier.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */

#ifndef __fast_merge_sort_h
#define __fast_merge_sort_h

/* {{{ Description */

/* This macro sorts singly-linked lists efficiently.

To sort a doubly-linked list: sort, then traverse the forward
links to regenerate the reverse links.

Written by Jamie Lokier <[email protected]>, 1995-1999.
Version 2.0.

Properties
==========

1. Takes O(n) time to sort a nearly-sorted list; very fast indeed.

2. Takes O(n log n) time for worst case and for random-order average.
Worst case is a reversed list. Sorting is still fast.
NB: This is much faster than unmodified quicksort, which is O(n^2).

3. Requires no extra memory: sorts in place like heapsort and
quicksort. Uses a small array (typically 32 pointers) on the
stack.

4. Stable: equal elements are kept in the same relative order.

5. Macro so the comparisons and structure modifications are in line.
You typically have a C function which calls the macro and does
very little else. The sorting code is not small enough to be worth
inlining in its caller; however, the comparisons and structure
modifications generally _are_ worth inlining into the sorting code.

Requirements
============

Any singly-linked list structure. You provide the structure type,
and the name of the structure member to reach the next element, and
the address of the first element. The last element is identified by
having a null next pointer.

How to sort a list
==================

Call as `FAST_MERGE_SORT (LIST, TYPE, NEXT, LESS_THAN_OR_EQUAL_P)'.

`LIST' points to the first node in the list, or can be null.
Afterwards it is updated to point to the beginning of the sorted list.

`TYPE' is the type of each node in the linked list.

`NEXT' is the name of the structure member containing the links.
In C++, this can be a call to a member function which returns a
non-const reference.

`LESS_THAN_OR_EQUAL_P' is the name of a predicate which, given two
pointers to objects of type `TYPE', determines if the first is
less than or equal to the second. The equality test is important
to keep the sort stable (see earlier).

The total number of items must fit in `unsigned long'.

How to update a sorted list
===========================

A call is provided to sort a list, and combine that with another
which is already sorted. This is useful when you've sorted a list,
but later want to add some new elements. The code is optimised by
assuming that the already sorted list is likely to be the larger.

The already sorted list comes earlier in the input, for the purpose
of stable sorting. That is, if an element in the already sorted list
compares equal to one in the list to sort, the element from the
already sorted list will come first in the output.

Call as `FAST_MERGE_SORT_APPEND (ALREADY_SORTED, LIST, TYPE, NEXT,
LESS_THAN_OR_EQUAL_P)'.

`ALREADY_SORTED' points to the first node of an already sorted
list, or can be null. If the list isn't sorted already, the
result is undefined.

`LIST' points to the first node in the list to be sorted, or can
be null. Afterwards it is updated to point to the beginning of
the combined, sorted list.

Algorithm
=========

It identifies non-strictly ascending runs (those where Elt(n) <=
Elt(n+1)), and combines ascending runs in a hybrid of bottom-up and
non-recursive, top-down mergesort.

Robert Sedgewick, ``Algorithms in C'', says that merging runs incurs
more overhead in practice than it saves, because of the extra
processing to identify the runs, except when the list is very nearly
sorted. He says that is because the extra processing occurs in the
inner loop, which suggests that he means to repeatedly identify pairs
of runs and merge them, in a bottom-up process. (That is consistent
with the adjacent text). The implementation here only identifies the
runs once, so Robert's argument doesn't apply.

Five optimisations are implemented:

1. Runs of ascending elements are identified just once.

2. A stack of runs is maintained. This is analogous to the
explicit stack used in a non-recursive, top-down implementation.
However, the pure top-down algorithm could not identify runs of
ascending elements once, without requiring additional storage
for the runs' head nodes.

3. A top-down implementation must divide each list into two smaller
lists. This implementation does not do that.

4. A decision tree is used to sort up to three elements per run, so
the initial runs are all at least three elements long (except at
the end of the list).

5. Unnecessary memory writes are avoided by using two loops for
merging. Each loop identifies contiguous elements from one
input list. This also biases the code path to the contiguous
cases.

As well as being fast, identifying ascending runs allows the sort to
be "stable", meaning that objects that compare equal are kept in the
same relative order.

With some extra complexity, and overhead, it is possible to identify
alternating ascending and descending runs. That is not implemented
here because it wouldn't be useful for most applications.

Notes
=====

This loop forces initial runs to be at least 3 elements long, if
there are enough elements. I haven't properly tested if the extra
code for this is worthwhile. It seems to win most times, but not
all. It may be that even the code to force 2 element runs is
unnecessary. In one case, forcing at least 2 elements was about 5%
worse than forcing at least 3 elements _or_ accepting 1 element; the
latter two cases had very similar numbers of comparisons.

Need to count (a) time; (b) comparisons.

An earlier version of this thing was measured on serious real time
code, and was pretty fast. */

/* }}} */


#if !defined(__GNUC__) || defined(__STRICT_ANSI__)
#define FAST_MERGE_SORT(__LIST, __TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P) \
__FAST_MERGE_SORT(1, 0, __LIST, __TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P)
#define FAST_MERGE_SORT_APPEND(__ALREADY_SORTED, __LIST, \
__TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P) \
__FAST_MERGE_SORT(0, __ALREADY_SORTED, __LIST, \
__TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P)
#define __FAST_MERGE_SORT_LABEL(name) \
__FAST_MERGE_SORT_LABEL2 (name,__LINE__)
#define __FAST_MERGE_SORT_LABEL2(name,line) \
__FAST_MERGE_SORT_LABEL3 (name,line)
#ifdef __STDC__
#define __FAST_MERGE_SORT_LABEL3(name,line) \
__FAST_MERGE_SORT_ ## name ## _ ## line
#else
#define __FAST_MERGE_SORT_LABEL3(name,line) \
__FAST_MERGE_SORT_/**/name/**/_/**/line
#endif
#else
#define FAST_MERGE_SORT(__LIST, __TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P) \
do ({ \
__label__ __merge_1, __merge_2, __merge_done, __final_merge, __empty_list;\
__FAST_MERGE_SORT(1, 0, __LIST, __TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P); \
}); while (0)
#define FAST_MERGE_SORT_APPEND(__ALREADY_SORTED, __LIST, \
__TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P) \
do ({ \
__label__ __merge_1, __merge_2, __merge_done, __final_merge, __empty_list;\
__FAST_MERGE_SORT(0, __ALREADY_SORTED, __LIST, \
__TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P); \
}); while (0)
#define __FAST_MERGE_SORT_LABEL(name) __ ## name
#endif

#define __FAST_MERGE_SORT\
(__ASG, __ALREADY_SORTED, __LIST, __NODE_TYPE, __NEXT, __LESS_THAN_OR_EQUAL_P)\
do \
{ \
__NODE_TYPE * _stack [8 * sizeof (unsigned long)]; \
__NODE_TYPE ** _stack_ptr = _stack; \
unsigned long _run_number = 0UL - 1; \
register __NODE_TYPE * _list = (__LIST); \
register __NODE_TYPE * _current_run = (__ALREADY_SORTED); \
\
/* Handle zero length separately. */ \
if (__ASG && (_list == 0 || _list->__NEXT == 0)) \
break; \
if (!__ASG && _list == 0) \
goto __FAST_MERGE_SORT_LABEL (empty_list); \
\
while (_list != 0) \
{ \
/* Identify a run. Ensure that the run is at least three \
elements long, if there are three elements. Rearrange them \
to be in order, if necessary. */ \
*_stack_ptr++ = _current_run; \
_current_run = _list; \
\
/* This conditional makes the runs at least 2 long. */ \
if (_list->__NEXT != 0) \
{ \
if (__LESS_THAN_OR_EQUAL_P (_list, (_list->__NEXT))) \
_list = _list->__NEXT; \
else \
{ \
/* Exchange the first two elements. */ \
_current_run = _list->__NEXT; \
_list->__NEXT = _current_run->__NEXT; \
_current_run->__NEXT = _list; \
} \
\
/* This conditional makes the runs at least 3 long. */ \
if (_list->__NEXT != 0) \
{ \
if (__LESS_THAN_OR_EQUAL_P (_list, _list->__NEXT)) \
_list = _list->__NEXT; \
else \
{ \
/* Move the third element back to the right place. */ \
__NODE_TYPE * _tmp = _list->__NEXT; \
_list->__NEXT = _tmp->__NEXT; \
\
if (__LESS_THAN_OR_EQUAL_P (_current_run, _tmp)) \
{ \
_tmp->__NEXT = _list; \
_current_run->__NEXT = _tmp; \
} \
else \
{ \
_tmp->__NEXT = _current_run; \
_current_run->__NEXT = _list; \
_current_run = _tmp; \
} \
} \
\
/* Find more ascending elements. */ \
while (_list->__NEXT != 0 \
&& __LESS_THAN_OR_EQUAL_P (_list, \
(_list->__NEXT))) \
_list = _list->__NEXT; \
} \
} \
\
{ \
__NODE_TYPE * _tmp = _list->__NEXT; \
_list->__NEXT = 0; \
_list = _tmp; \
} \
\
/* Half the runs are pushed onto the stack without being \
merged until another run has been found. Test for that \
here to keep this bit of the loop fast. */ \
\
if (!(++_run_number & 1)) \
continue; \
\
/* Now merge the appropriate number of times. The idea is to \
merge pairs of runs, then pairs of those merged pairs, and \
so on. One strategy is to store a "merge depth" with each \
stack entry, indicating the number of merge operations done \
to produce that entry, and only merge the current run with \
the top one if they have the same merge depth. Another is \
to count the number of runs identified so far, and work out \
what to do from that count. The latter strategy is \
implemented here. \
\
The sequence 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, \
4, etc. is the number of merge operations to do after each \
run has been identified. That number is the same as the \
number of consecutive `1' bits at the bottom of \
`_run_number' here. */ \
\
__FAST_MERGE_SORT_LABEL (final_merge): \
{ \
unsigned long _tmp_run_number = _run_number; \
\
do \
{ \
/* Here, merge `_current_run' with the one on top of the \
stack. The new run is stored in `_current_run'. The \
order of arguments to the comparison function is \
important, in order for the sort to be stable. The \
run on the stack was earlier in the original list \
than `_current_run'. \
\
Two loops are used mainly to reduce the number of \
memory writes. They also bias the loops to run \
quickest where contiguous elements are taken from the \
same input list. */ \
\
register __NODE_TYPE * _other_run = *--_stack_ptr; \
__NODE_TYPE * _output_run = _other_run; \
register __NODE_TYPE ** _output_ptr = &_output_run; \
\
for (;;) \
{ \
if (__LESS_THAN_OR_EQUAL_P (_other_run, _current_run)) \
{ \
__FAST_MERGE_SORT_LABEL (merge_1): \
_output_ptr = &_other_run->__NEXT; \
_other_run = *_output_ptr; \
if (_other_run != 0) \
continue; \
*_output_ptr = _current_run; \
goto __FAST_MERGE_SORT_LABEL (merge_done); \
} \
*_output_ptr = _current_run; \
goto __FAST_MERGE_SORT_LABEL (merge_2); \
} \
\
/* The body of this loop is only reached by jumping \
into it. */ \
\
for (;;) \
{ \
if (!__LESS_THAN_OR_EQUAL_P (_other_run, _current_run)) \
{ \
__FAST_MERGE_SORT_LABEL (merge_2): \
_output_ptr = &_current_run->__NEXT; \
_current_run = *_output_ptr; \
if (_current_run != 0) \
continue; \
*_output_ptr = _other_run; \
goto __FAST_MERGE_SORT_LABEL (merge_done); \
} \
*_output_ptr = _other_run; \
goto __FAST_MERGE_SORT_LABEL (merge_1); \
} \
\
__FAST_MERGE_SORT_LABEL (merge_done): \
_current_run = _output_run; \
} \
while ((_tmp_run_number >>= 1) & 1); \
} \
} \
\
/* There are no more runs in the input list now. Just merge runs \
on the stack together until there is only one. Keep the code \
small by using the merge code in the main loop. These tests \
are outside the main loop to keep the main loop as fast and \
small as possible. This jumps to a point which shouldn't \
disrupt the quality of the main loop's compiled code too \
much. */ \
\
_run_number = (1UL << (_stack_ptr \
- (_stack + (__ASG || _stack [0] == 0)))) - 1; \
if (_run_number) \
goto __FAST_MERGE_SORT_LABEL (final_merge); \
\
__FAST_MERGE_SORT_LABEL (empty_list): \
(__LIST) = _current_run; \
} \
while (0)

#endif /* __fast_merge_sort_h */
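
A minimal, hypothetical usage sketch for the macro above (the struct,
field and predicate names are made up, and the header file name is only
guessed from the include guard):

#include "fast_merge_sort.h"    /* wherever the header above is saved */

struct pkt {
    struct pkt *next;
    unsigned long seq;
};

/* Must be a "less than or equal" test so that the sort stays stable. */
#define PKT_LEQ(a, b)   ((a)->seq <= (b)->seq)

struct pkt *sort_pkts(struct pkt *head)
{
    /* Sorts the NULL-terminated singly linked list in place and
       updates `head' to point at the new first element. */
    FAST_MERGE_SORT (head, struct pkt, next, PKT_LEQ);
    return head;
}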

2001-03-13 07:02:02

by James R Bruce

Subject: Re: quicksort for linked list


Hi again. The latter half of my email seems to have been forgotten in
the ensuing discussion, so I'll repost. For a linked list of any
non-floating point data, radix sort is almost impossible to beat; it's
iterative, fast (linear time for fixed size integers, worst case), can
be stopped early for partial sorting, and has a pretty simple
implementation.

I've been using essentially the same radix sort implementation I
posted before to sort 1000 item lists 60 times a second in a numerical
application, and it barely shows up in the total time used when
profiling. The other sorts I tried did not fare so well. I would
much rather see this in a kernel modification than any
merge/quick/heap sort implementations I've seen so far for linked
lists. OTOH, this conversation seems to have wandered out of
kernel-space anyway...

- Jim Bruce

(Examples at: http://www.cs.cmu.edu/~jbruce/sort.cc)

On 10-Mar-2001, David Wragg ([email protected]) wrote in "Re: quicksort for linked list":
> For modern machines, I'm not sure that quicksort on a linked list is
> typically much cheaper than mergesort on a linked list. The
> majority of the potential cost is likely to be in the pointer
> chasing involved in bringing the lists into cache, and that will be
> the same for both. Once the list is in cache, how much pointer
> fiddling you do isn't so important. For lists that don't fit into
> cache, the advantages of mergesort should become even greater if the
> literature on tape and disk sorts applies (though multiway merges
> rather than simple binary merges would be needed to minimize the
> impact of memory latency).
>
> Given this, mergesort might be generally preferable to quicksort for
> linked lists. But I haven't investigated this idea thoroughly.
> (The trick described above for avoiding an explicit stack also works
> for mergesort.)