2008-06-02 07:29:48

by Paul Mundt

Subject: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.

kobjsize() has been abusing page->index as a method for sorting out
compound order, which blows up both for page cache pages and for SLOB,
which reuses the index in struct slob_page.

Presently we are not able to accurately size arbitrary pointers that
don't come from kmalloc(), so the best we can do is sort out the
compound order from the head page if it's a compound page, or default
to 0-order if it's impossible to ksize() the object.

Obviously this leaves quite a bit to be desired in terms of object
sizing accuracy, but the behaviour is unchanged from the existing
implementation, while fixing the page->index oopses originally reported
here:

http://marc.info/?l=linux-mm&m=121127773325245&w=2

Accuracy could also be improved by having SLUB and SLOB both set PG_slab
on ksizeable pages, rather than just handling the __GFP_COMP cases
regardless of the PG_slab setting, as made possible by Pekka's
patches:

http://marc.info/?l=linux-kernel&m=121139439900534&w=2
http://marc.info/?l=linux-kernel&m=121139440000537&w=2
http://marc.info/?l=linux-kernel&m=121139440000540&w=2

This is primarily a bugfix for nommu systems for 2.6.26, with the aim
being to gradually kill off kobjsize() and its particular brand of
object abuse entirely.

Reviewed-by: Pekka Enberg <[email protected]>
Signed-off-by: Paul Mundt <[email protected]>

---

mm/nommu.c | 30 ++++++++++++++++++++++++++----
1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/mm/nommu.c b/mm/nommu.c
index dca93fc..3abd084 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -104,21 +104,43 @@ EXPORT_SYMBOL(vmtruncate);
 unsigned int kobjsize(const void *objp)
 {
 	struct page *page;
+	int order = 0;
 
 	/*
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp || (unsigned long)objp >= memory_end || !((page = virt_to_page(objp))))
+	if (!objp)
 		return 0;
 
+	if ((unsigned long)objp >= memory_end)
+		return 0;
+
+	page = virt_to_head_page(objp);
+	if (!page)
+		return 0;
+
+	/*
+	 * If the allocator sets PageSlab, we know the pointer came from
+	 * kmalloc().
+	 */
 	if (PageSlab(page))
 		return ksize(objp);
 
-	BUG_ON(page->index < 0);
-	BUG_ON(page->index >= MAX_ORDER);
+	/*
+	 * The ksize() function is only guaranteed to work for pointers
+	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
+	 * always to be compound pages, here.
+	 */
+	if (PageCompound(page))
+		order = compound_order(page);
 
-	return (PAGE_SIZE << page->index);
+	/*
+	 * Finally, handle arbitrary pointers that don't set PageSlab.
+	 * Default to 0-order in the case when we're unable to ksize()
+	 * the object.
+	 */
+	return PAGE_SIZE << order;
 }
 
 /*


2008-06-05 16:38:32

by David Howells

Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.


Paul Mundt <[email protected]> wrote:

> kobjsize() has been abusing page->index as a method for sorting out
> compound order, which blows up both for page cache pages and for SLOB,
> which reuses the index in struct slob_page.
>
> Presently we are not able to accurately size arbitrary pointers that
> don't come from kmalloc(), so the best we can do is sort out the
> compound order from the head page if it's a compound page, or default
> to 0-order if it's impossible to ksize() the object.
>
> Obviously this leaves quite a bit to be desired in terms of object
> sizing accuracy, but the behaviour is unchanged from the existing
> implementation, while fixing the page->index oopses originally reported
> here:
>
> http://marc.info/?l=linux-mm&m=121127773325245&w=2
>
> Accuracy could also be improved by having SLUB and SLOB both set PG_slab
> on ksizeable pages, rather than just handling the __GFP_COMP cases
> regardless of the PG_slab setting, as made possible by Pekka's
> patches:
>
> http://marc.info/?l=linux-kernel&m=121139439900534&w=2
> http://marc.info/?l=linux-kernel&m=121139440000537&w=2
> http://marc.info/?l=linux-kernel&m=121139440000540&w=2
>
> This is primarily a bugfix for nommu systems for 2.6.26, with the aim
> being to gradually kill off kobjsize() and its particular brand of
> object abuse entirely.
>
> Reviewed-by: Pekka Enberg <[email protected]>
> Signed-off-by: Paul Mundt <[email protected]>

With this patch, SLOB now works on my FRV board.

Acked-by: David Howells <[email protected]>

2008-06-10 17:30:52

by Christoph Lameter

Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.

On Mon, 2 Jun 2008, Paul Mundt wrote:

> +	page = virt_to_head_page(objp);
> +	if (!page)
> +		return 0;

virt_to_head_page cannot return NULL. virt_to_page also does not return
NULL. pfn_valid() needs to be used to figure out if a page is valid.
Otherwise the page struct reference that was returned may have
PageReserved() set to indicate that it is not a valid page.
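
A minimal sketch of the kind of check being described, with the pfn
lookup written out by hand rather than copied from any particular arch
header; not meant as a drop-in replacement:

	/* Validate the address before trusting the struct page behind it;
	 * an unchecked address may map to a reserved, unusable page. */
	if (!pfn_valid(__pa(objp) >> PAGE_SHIFT))
		return 0;

	page = virt_to_head_page(objp);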

> +	 * The ksize() function is only guaranteed to work for pointers
> +	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
> +	 * always to be compound pages, here.
> +	 */
> +	if (PageCompound(page))
> +		order = compound_order(page);

compound_order() returns 0 if you use it on a non-compound page.
No need for the PageCompound() test.
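
In other words, the tail of kobjsize() can collapse to something like
the following (sketch only; the order simply stays 0 for ordinary,
non-compound pages):

	return PAGE_SIZE << compound_order(page);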

2008-06-10 18:18:41

by Vegard Nossum

Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.

On 6/10/08, Christoph Lameter <[email protected]> wrote:
> On Mon, 2 Jun 2008, Paul Mundt wrote:
>
> > +	page = virt_to_head_page(objp);
> > +	if (!page)
> > +		return 0;
>
>
> virt_to_head_page cannot return NULL. virt_to_page also does not return
> NULL. pfn_valid() needs to be used to figure out if a page is valid.
> Otherwise the page struct reference that was returned may have
> PageReserved() set to indicate that it is not a valid page.
>

Are you sure it shouldn't be virt_addr_valid()? If yes, can you please
explain the difference? Thanks :-)


Vegard

--
"The animistic metaphor of the bug that maliciously sneaked in while
the programmer was not looking is intellectually dishonest as it
disguises that the error is the programmer's own creation."
-- E. W. Dijkstra, EWD1036

2008-06-10 18:30:29

by Christoph Lameter

Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.

On Tue, 10 Jun 2008, Vegard Nossum wrote:

> Are you sure it shouldn't be virt_addr_valid()? If yes, can you please
> explain the difference? Thanks :-)

from include/asm-x86/page.h

#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)


Yes, virt_addr_valid() is better to use. I was thinking too much in
terms of the basic ops.
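
So for a kernel virtual address such as the one kobjsize() is handed,
the two spellings below are the same check; a sketch using the x86
definitions quoted above:

	if (!pfn_valid(__pa(objp) >> PAGE_SHIFT))	/* open-coded */
		return 0;

	if (!virt_addr_valid(objp))			/* equivalent wrapper */
		return 0;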

2008-06-11 07:36:18

by Paul Mundt

Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.

On Tue, Jun 10, 2008 at 10:30:36AM -0700, Christoph Lameter wrote:
> On Mon, 2 Jun 2008, Paul Mundt wrote:
>
> > +	page = virt_to_head_page(objp);
> > +	if (!page)
> > +		return 0;
>
> virt_to_head_page cannot return NULL. virt_to_page also does not return
> NULL. pfn_valid() needs to be used to figure out if a page is valid.
> Otherwise the page struct reference that was returned may have
> PageReserved() set to indicate that it is not a valid page.
>
> > +	 * The ksize() function is only guaranteed to work for pointers
> > +	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
> > +	 * always to be compound pages, here.
> > +	 */
> > +	if (PageCompound(page))
> > +		order = compound_order(page);
>
> compound_order() returns 0 if you use it on a non-compound page.
> No need for the PageCompound() test.
>

Thanks for the review. Using virt_addr_valid() we can also get rid of
blackfin's idiotic > memory_end test for its DMA addresses, as the same
logic is encapsulated there. How does this look?

---

mm/nommu.c | 21 +++------------------
1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/mm/nommu.c b/mm/nommu.c
index 3abd084..4462b6a 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -104,21 +104,15 @@ EXPORT_SYMBOL(vmtruncate);
 unsigned int kobjsize(const void *objp)
 {
 	struct page *page;
-	int order = 0;
 
 	/*
 	 * If the object we have should not have ksize performed on it,
 	 * return size of 0
 	 */
-	if (!objp)
-		return 0;
-
-	if ((unsigned long)objp >= memory_end)
+	if (!objp || !virt_addr_valid(objp))
 		return 0;
 
 	page = virt_to_head_page(objp);
-	if (!page)
-		return 0;
 
 	/*
 	 * If the allocator sets PageSlab, we know the pointer came from
@@ -129,18 +123,9 @@ unsigned int kobjsize(const void *objp)
 
 	/*
 	 * The ksize() function is only guaranteed to work for pointers
-	 * returned by kmalloc(). So handle arbitrary pointers, that we expect
-	 * always to be compound pages, here.
-	 */
-	if (PageCompound(page))
-		order = compound_order(page);
-
-	/*
-	 * Finally, handle arbitrary pointers that don't set PageSlab.
-	 * Default to 0-order in the case when we're unable to ksize()
-	 * the object.
+	 * returned by kmalloc(). So handle arbitrary pointers here.
 	 */
-	return PAGE_SIZE << order;
+	return PAGE_SIZE << compound_order(page);
 }
 
 /*

2008-06-11 23:22:47

by Christoph Lameter

Subject: Re: [PATCH] nommu: fix kobjsize() for SLOB and SLUB, v2.

On Wed, 11 Jun 2008, Paul Mundt wrote:

> Thanks for the review. Using virt_addr_valid() we can also get rid of
> blackfin's idiotic > memory_end test for its DMA addresses, as the same
> logic is encapsulated there. How does this look?

Reviewed-by: Christoph Lameter <[email protected]>