this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
SLOB allocator exclusively, so it must work on all platforms)
Signed-off-by: Ingo Molnar <[email protected]>
Index: linux/mm/slob.c
===================================================================
--- linux.orig/mm/slob.c
+++ linux/mm/slob.c
@@ -198,7 +198,7 @@ void kfree(const void *block)
 	if (!block)
 		return;
 
-	if (!((unsigned int)block & (PAGE_SIZE-1))) {
+	if (!((unsigned long)block & (PAGE_SIZE-1))) {
 		/* might be on the big block list */
 		spin_lock_irqsave(&block_lock, flags);
 		for (bb = bigblocks; bb; last = &bb->next, bb = bb->next) {
@@ -227,7 +227,7 @@ unsigned int ksize(const void *block)
 	if (!block)
 		return 0;
 
-	if (!((unsigned int)block & (PAGE_SIZE-1))) {
+	if (!((unsigned long)block & (PAGE_SIZE-1))) {
 		spin_lock_irqsave(&block_lock, flags);
 		for (bb = bigblocks; bb; bb = bb->next)
 			if (bb->pages == block) {
@@ -326,7 +326,7 @@ void kmem_cache_init(void)
 	void *p = slob_alloc(PAGE_SIZE, 0, PAGE_SIZE-1);
 
 	if (p)
-		free_page((unsigned int)p);
+		free_page((unsigned long)p);
 
 	mod_timer(&slob_timer, jiffies + HZ);
 }
On Sunday 11 December 2005 09:12, Ingo Molnar wrote:
> this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> SLOB allocator exclusively, so it must work on all platforms)
It's a good idea to get this working everywhere. Why have you switched to
using SLOB exclusively?
Thanks
Ed Tomlinson
On Sun, Dec 11, 2005 at 03:12:17PM +0100, Ingo Molnar wrote:
>
> this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> SLOB allocator exclusively, so it must work on all platforms)
The patch looks fine, but what's this about using SLOB exclusively?
Fragmentation performance of SLOB is miserable on anything like a
modern desktop; I think SLOB only makes sense for small machines. The
locking also suggests dual core at most.
Anyway,
> Signed-off-by: Ingo Molnar <[email protected]>
Acked-by: Matt Mackall <[email protected]>
>
> Index: linux/mm/slob.c
> ===================================================================
> --- linux.orig/mm/slob.c
> +++ linux/mm/slob.c
> @@ -198,7 +198,7 @@ void kfree(const void *block)
> if (!block)
> return;
>
> - if (!((unsigned int)block & (PAGE_SIZE-1))) {
> + if (!((unsigned long)block & (PAGE_SIZE-1))) {
> /* might be on the big block list */
> spin_lock_irqsave(&block_lock, flags);
> for (bb = bigblocks; bb; last = &bb->next, bb = bb->next) {
> @@ -227,7 +227,7 @@ unsigned int ksize(const void *block)
> if (!block)
> return 0;
>
> - if (!((unsigned int)block & (PAGE_SIZE-1))) {
> + if (!((unsigned long)block & (PAGE_SIZE-1))) {
> spin_lock_irqsave(&block_lock, flags);
> for (bb = bigblocks; bb; bb = bb->next)
> if (bb->pages == block) {
> @@ -326,7 +326,7 @@ void kmem_cache_init(void)
> void *p = slob_alloc(PAGE_SIZE, 0, PAGE_SIZE-1);
>
> if (p)
> - free_page((unsigned int)p);
> + free_page((unsigned long)p);
>
> mod_timer(&slob_timer, jiffies + HZ);
> }
--
Mathematics is the supreme nostalgia of our time.
* Matt Mackall <[email protected]> wrote:
> On Sun, Dec 11, 2005 at 03:12:17PM +0100, Ingo Molnar wrote:
> >
> > this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> > with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> > SLOB allocator exclusively, so it must work on all platforms)
>
> The patch looks fine, but what's this about using SLOB exclusively?
> Fragmentation performance of SLOB is miserable on anything like a
> modern desktop, I think SLOB only makes sense for small machines. The
> locking also suggests dual core at most.
well, this is only an -rt artifact: the SLOB needs zero modifications to
work on PREEMPT_RT, while SLAB needed a risky 66K monster patch. Until
someone simplifies the SLAB conversion to PREEMPT_RT, i'll use the SLOB.
i haven't noticed any significant slowdown due to the SLOB. In any case,
we'll give it some workout which should further speed up its upstream
integration - it's looking good so far.
Ingo
* Ed Tomlinson <[email protected]> wrote:
> On Sunday 11 December 2005 09:12, Ingo Molnar wrote:
> > this patch fixes 32-bitness bugs in mm/slob.c. Successfully booted x64
> > with SLOB enabled. (i have switched the PREEMPT_RT feature to use the
> > SLOB allocator exclusively, so it must work on all platforms)
>
> Its a good idea to get this working everywhere. Why have you switched
> to use SLOB exclusively?
because the SLAB hacks were getting ugly, and i gave up on it during the
2.6.15-rc5 merge. (The SLAB code has lots of irqs-off / per-cpu and
non-preempt assumptions integrated, which were a pain to sort out.)
We'll eventually do a cleaner conversion of SLAB to PREEMPT_RT, but for
now the SLOB is turned on exclusively if PREEMPT_RT. (in other
preemption modes it's optionally selectable if EMBEDDED is enabled)
Ingo