Hi,
I've found that this:
[mm/page_alloc.c]
static inline void set_page_refs(struct page *page, int order)
{
#ifdef CONFIG_MMU
	set_page_count(page, 1);
#else
	int i;

	/*
	 * We need to reference all the pages for this order, otherwise if
	 * anyone accesses one of the pages with (get/put) it will be freed.
	 */
	for (i = 0; i < (1 << order); i++)
		set_page_count(page+i, 1);
#endif /* CONFIG_MMU */
}
causes problems if !CONFIG_MMU, because __free_pages_ok()/free_pages_check()
reports a bad page on the second page when it comes time to free it:
Bad page state at __free_pages_ok (in process 'events/0', page c08132e0)
flags:0x20000000 mapping:00000000 mapcount:0 count:1
Why is doing this necessary at all? No one should be touching the individual
pages of a block allocation. The kernel should defend itself against
userspace trying to munmap part of a multipage mmap.
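For illustration, a typical caller only ever handles the block as a unit (a
minimal sketch; grab_block()/drop_block() are made-up names, but
alloc_pages()/__free_pages() are the real interfaces):

/* Allocate a 2^order block and hand back its kernel address. */
static void *grab_block(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	return page ? page_address(page) : NULL;
}

/* Free the whole block with the same order; a caller like this never
 * gets or puts the tail pages individually. */
static void drop_block(void *addr, unsigned int order)
{
	__free_pages(virt_to_page(addr), order);
}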
I think this should be:
static inline void set_page_refs(struct page *page, int order)
{
	set_page_count(page, 1);
}
It seems to work for me. If no one disagrees, I'll give akpm a patch for this.
David
Hi David,
David Howells wrote:
> I've found that this:
>
> [mm/page_alloc.c]
> static inline void set_page_refs(struct page *page, int order)
> {
> #ifdef CONFIG_MMU
> 	set_page_count(page, 1);
> #else
> 	int i;
>
> 	/*
> 	 * We need to reference all the pages for this order, otherwise if
> 	 * anyone accesses one of the pages with (get/put) it will be freed.
> 	 */
> 	for (i = 0; i < (1 << order); i++)
> 		set_page_count(page+i, 1);
> #endif /* CONFIG_MMU */
> }
>
> causes problems if !CONFIG_MMU, because __free_pages_ok()/free_pages_check()
> reports a bad page on the second page when it comes time to free it:
>
> Bad page state at __free_pages_ok (in process 'events/0', page c08132e0)
> flags:0x20000000 mapping:00000000 mapcount:0 count:1
That itself can be fixed by dropping the page_count check in
__free_pages_ok()/free_pages_check():
@@ -219,7 +219,9 @@
 {
 	if ( page_mapped(page) ||
 		page->mapping != NULL ||
+#ifdef CONFIG_MMU
 		page_count(page) != 0 ||
+#endif
 		(page->flags & (
 			1 << PG_lru |
 			1 << PG_private |
> Why is doing this necessary at all? No one should be touching the individual
> pages of a block allocation. The kernel should defend itself against
> userspace trying to munmap part of a multipage mmap.
I don't recall right now why we had to do this originally.
It was absolutely necessary once, but I don't think we need
this anymore. At least it runs fine on m68knommu targets now
without this.
Regards
Greg
------------------------------------------------------------------------
Greg Ungerer -- Chief Software Dude EMAIL: [email protected]
SnapGear -- a CyberGuard Company PHONE: +61 7 3435 2888
825 Stanley St, FAX: +61 7 3891 3630
Woolloongabba, QLD, 4102, Australia WEB: http://www.SnapGear.com
> > Why is doing this necessary at all? No one should be touching the
> > individual pages of a block allocation. The kernel should defend itself
> > against userspace trying to munmap part of a multipage mmap.
>
> I don't recall right now why we had to do this originally.
> It was absolutely necessary once, but I don't think we need
> this anymore. At least it runs fine on m68knommu targets now
> without this.
I've just found out what it's necessary for: access_process_vm(), as called
from, say, /proc/<pid>/cmdline.
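Roughly speaking (a sketch only, assuming the accessor pins and releases each
page it copies through, the way get_page()/put_page() users do;
copy_one_page() is a made-up name):

/* Touching a tail page whose count was never initialised to 1: */
static void copy_one_page(struct page *page, void *buf)
{
	get_page(page);		/* head page: 1 -> 2; tail page: 0 -> 1 */
	memcpy(buf, page_address(page), PAGE_SIZE);
	put_page(page);		/* a tail page drops back to 0 here and gets
				 * freed out from under the rest of the block */
}

So the per-page counts are what keep accesses like that from freeing the
middle of the block.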
David