From: Yinghai Lu <[email protected]>
The bootmem allocator is no longer available for page_cgroup_init() because we
set up the kernel slab allocator much earlier now.
Cc: Ingo Molnar <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Linus Torvalds <[email protected]>
Signed-off-by: Yinghai Lu <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
---
mm/page_cgroup.c | 12 ++++++++----
1 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index 791905c..3dd4a90 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -47,6 +47,8 @@ static int __init alloc_node_page_cgroup(int nid)
struct page_cgroup *base, *pc;
unsigned long table_size;
unsigned long start_pfn, nr_pages, index;
+ struct page *page;
+ unsigned int order;
start_pfn = NODE_DATA(nid)->node_start_pfn;
nr_pages = NODE_DATA(nid)->node_spanned_pages;
@@ -55,11 +57,13 @@ static int __init alloc_node_page_cgroup(int nid)
return 0;
table_size = sizeof(struct page_cgroup) * nr_pages;
-
- base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
- table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
- if (!base)
+ order = get_order(table_size);
+ page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
+ if (!page)
+ page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
+ if (!page)
return -ENOMEM;
+ base = page_address(page);
for (index = 0; index < nr_pages; index++) {
pc = base + index;
__init_page_cgroup(pc, start_pfn + index);
--
1.6.0.4
(This patch should have CCed the memcg maintainers)
My box failed to boot due to an initialization failure of page_cgroup,
and it's caused by this patch:
+ page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
I added a printk, and found that order == 11 == MAX_ORDER.
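For scale, here is a minimal userspace sketch of the same arithmetic (the
4GB node span and the 40-byte struct page_cgroup are assumptions for
illustration; the real values depend on config and architecture). The point
is that alloc_pages_node() cannot satisfy any request of order >= MAX_ORDER,
and a whole-node table easily reaches that:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define MAX_ORDER	11

/* Mirrors the kernel's get_order(): smallest order such that
 * (PAGE_SIZE << order) >= size. */
static unsigned int get_order(unsigned long size)
{
	unsigned int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long nr_pages = (4UL << 30) >> PAGE_SHIFT;	/* ~1M pages */
	unsigned long table_size = 40UL * nr_pages;		/* ~40MB */

	/* Prints order 14 here; any order >= MAX_ORDER (11) fails. */
	printf("table_size=%lu -> order %u\n", table_size, get_order(table_size));
	return 0;
}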
Pekka J Enberg wrote:
> From: Yinghai Lu <[email protected]>
>
> The bootmem allocator is no longer available for page_cgroup_init() because we
> set up the kernel slab allocator much earlier now.
>
> Cc: Ingo Molnar <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Signed-off-by: Yinghai Lu <[email protected]>
> Signed-off-by: Pekka Enberg <[email protected]>
> ---
> mm/page_cgroup.c | 12 ++++++++----
> 1 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
> index 791905c..3dd4a90 100644
> --- a/mm/page_cgroup.c
> +++ b/mm/page_cgroup.c
> @@ -47,6 +47,8 @@ static int __init alloc_node_page_cgroup(int nid)
> struct page_cgroup *base, *pc;
> unsigned long table_size;
> unsigned long start_pfn, nr_pages, index;
> + struct page *page;
> + unsigned int order;
>
> start_pfn = NODE_DATA(nid)->node_start_pfn;
> nr_pages = NODE_DATA(nid)->node_spanned_pages;
> @@ -55,11 +57,13 @@ static int __init alloc_node_page_cgroup(int nid)
> return 0;
>
> table_size = sizeof(struct page_cgroup) * nr_pages;
> -
> - base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
> - table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
> - if (!base)
> + order = get_order(table_size);
> + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
> + if (!page)
> + page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
> + if (!page)
> return -ENOMEM;
> + base = page_address(page);
> for (index = 0; index < nr_pages; index++) {
> pc = base + index;
> __init_page_cgroup(pc, start_pfn + index);
On Fri, 12 Jun 2009 10:50:00 +0800
Li Zefan <[email protected]> wrote:
> (This patch should have CCed the memcg maintainers)
>
> My box failed to boot due to an initialization failure of page_cgroup,
> and it's caused by this patch:
>
> + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
>
Oh, I didn't know about this patch ;(
> I added a printk, and found that order == 11 == MAX_ORDER.
>
Maybe possible, because this allocates continuous pages of perhaps 60% of
the length of the memmap.
If __alloc_bootmem_node_nopanic() is not available any more, memcg should
only be used under CONFIG_SPARSEMEM.
Is that a request from the bootmem maintainer?
Thanks,
-Kame
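For contrast, the same back-of-the-envelope for SPARSEMEM (the 128MB
section size and the 40-byte struct page_cgroup are again only illustrative
assumptions): page_cgroup is allocated per memory section there, so each
chunk stays comfortably below MAX_ORDER.

#include <stdio.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27	/* 128MB per section, x86-64-style */
#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))

int main(void)
{
	/* ~1.25MB per section -> order 9, safely below MAX_ORDER (11). */
	unsigned long table_size = 40UL * PAGES_PER_SECTION;

	printf("per-section table: %lu bytes\n", table_size);
	return 0;
}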
> Pekka J Enberg wrote:
> > From: Yinghai Lu <[email protected]>
> >
> > The bootmem allocator is no longer available for page_cgroup_init() because we
> > set up the kernel slab allocator much earlier now.
> >
> > Cc: Ingo Molnar <[email protected]>
> > Cc: Johannes Weiner <[email protected]>
> > Cc: Linus Torvalds <[email protected]>
> > Signed-off-by: Yinghai Lu <[email protected]>
> > Signed-off-by: Pekka Enberg <[email protected]>
> > ---
> > mm/page_cgroup.c | 12 ++++++++----
> > 1 files changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
> > index 791905c..3dd4a90 100644
> > --- a/mm/page_cgroup.c
> > +++ b/mm/page_cgroup.c
> > @@ -47,6 +47,8 @@ static int __init alloc_node_page_cgroup(int nid)
> > struct page_cgroup *base, *pc;
> > unsigned long table_size;
> > unsigned long start_pfn, nr_pages, index;
> > + struct page *page;
> > + unsigned int order;
> >
> > start_pfn = NODE_DATA(nid)->node_start_pfn;
> > nr_pages = NODE_DATA(nid)->node_spanned_pages;
> > @@ -55,11 +57,13 @@ static int __init alloc_node_page_cgroup(int nid)
> > return 0;
> >
> > table_size = sizeof(struct page_cgroup) * nr_pages;
> > -
> > - base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
> > - table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
> > - if (!base)
> > + order = get_order(table_size);
> > + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
> > + if (!page)
> > + page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
> > + if (!page)
> > return -ENOMEM;
> > + base = page_address(page);
> > for (index = 0; index < nr_pages; index++) {
> > pc = base + index;
> > __init_page_cgroup(pc, start_pfn + index);
>
>
On Fri, 12 Jun 2009 11:55:01 +0900
KAMEZAWA Hiroyuki <[email protected]> wrote:
> On Fri, 12 Jun 2009 10:50:00 +0800
> Li Zefan <[email protected]> wrote:
>
> > (This patch should have CCed the memcg maintainers)
> >
> > My box failed to boot due to an initialization failure of page_cgroup,
> > and it's caused by this patch:
> >
> > + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
> >
>
> Oh, I didn't know about this patch ;(
>
> > I added a printk, and found that order == 11 == MAX_ORDER.
> >
> Maybe possible, because this allocates continuous pages of perhaps 60% of
> the length of the memmap.
> If __alloc_bootmem_node_nopanic() is not available any more, memcg should
> only be used under CONFIG_SPARSEMEM.
>
> Is that a request from the bootmem maintainer?
>
In other words,
- Is there any replacement function for allocating continuous pages bigger
than MAX_ORDER?
- If not, memcg (and the io-controller under development) shouldn't support
memory models other than SPARSEMEM.
IIUC, page_cgroup_init() is called before mem_init() and we could use
alloc_bootmem() here.
Could someone point me to the thread I should read to learn why
alloc_bootmem() is gone?
Thanks,
-Kame
> Thanks,
> -Kame
>
>
> > Pekka J Enberg wrote:
> > > From: Yinghai Lu <[email protected]>
> > >
> > > The bootmem allocator is no longer available for page_cgroup_init() because we
> > > set up the kernel slab allocator much earlier now.
> > >
> > > Cc: Ingo Molnar <[email protected]>
> > > Cc: Johannes Weiner <[email protected]>
> > > Cc: Linus Torvalds <[email protected]>
> > > Signed-off-by: Yinghai Lu <[email protected]>
> > > Signed-off-by: Pekka Enberg <[email protected]>
> > > ---
> > > mm/page_cgroup.c | 12 ++++++++----
> > > 1 files changed, 8 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
> > > index 791905c..3dd4a90 100644
> > > --- a/mm/page_cgroup.c
> > > +++ b/mm/page_cgroup.c
> > > @@ -47,6 +47,8 @@ static int __init alloc_node_page_cgroup(int nid)
> > > struct page_cgroup *base, *pc;
> > > unsigned long table_size;
> > > unsigned long start_pfn, nr_pages, index;
> > > + struct page *page;
> > > + unsigned int order;
> > >
> > > start_pfn = NODE_DATA(nid)->node_start_pfn;
> > > nr_pages = NODE_DATA(nid)->node_spanned_pages;
> > > @@ -55,11 +57,13 @@ static int __init alloc_node_page_cgroup(int nid)
> > > return 0;
> > >
> > > table_size = sizeof(struct page_cgroup) * nr_pages;
> > > -
> > > - base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
> > > - table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
> > > - if (!base)
> > > + order = get_order(table_size);
> > > + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
> > > + if (!page)
> > > + page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
> > > + if (!page)
> > > return -ENOMEM;
> > > + base = page_address(page);
> > > for (index = 0; index < nr_pages; index++) {
> > > pc = base + index;
> > > __init_page_cgroup(pc, start_pfn + index);
> >
> >
>
KAMEZAWA Hiroyuki wrote:
> On Fri, 12 Jun 2009 11:55:01 +0900
> KAMEZAWA Hiroyuki <[email protected]> wrote:
>
>> On Fri, 12 Jun 2009 10:50:00 +0800
>> Li Zefan <[email protected]> wrote:
>>
>>> (This patch should have CCed the memcg maintainers)
>>>
>>> My box failed to boot due to an initialization failure of page_cgroup,
>>> and it's caused by this patch:
>>>
>>> + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
>>>
>> Oh, I didn't know about this patch ;(
>>
>>> I added a printk, and found that order == 11 == MAX_ORDER.
>>>
>> Maybe possible, because this allocates continuous pages of perhaps 60% of
>> the length of the memmap.
>> If __alloc_bootmem_node_nopanic() is not available any more, memcg should
>> only be used under CONFIG_SPARSEMEM.
>>
>> Is that a request from the bootmem maintainer?
>>
> In other words,
> - Is there any replacement function for allocating continuous pages bigger
> than MAX_ORDER?
> - If not, memcg (and the io-controller under development) shouldn't support
> memory models other than SPARSEMEM.
>
> IIUC, page_cgroup_init() is called before mem_init() and we could use
> alloc_bootmem() here.
>
> Could someone point me to the thread I should read to learn why
> alloc_bootmem() is gone?
>
alloc_bootmem() is not gone, but the slab allocator is set up much earlier now.
See this commit:
commit 83b519e8b9572c319c8e0c615ee5dd7272856090
Author: Pekka Enberg <[email protected]>
Date: Wed Jun 10 19:40:04 2009 +0300
slab: setup allocators earlier in the boot sequence
Now page_cgroup_init() is called after mem_init().
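For reference, the resulting ordering looks roughly like this (a simplified
sketch of that era's init/main.c, with unrelated calls elided; not the
literal code):

static void __init mm_init(void)
{
	mem_init();		/* hands all bootmem pages to the buddy allocator */
	kmem_cache_init();	/* slab is usable from this point on */
	vmalloc_init();
}

asmlinkage void __init start_kernel(void)
{
	/* ... */
	mm_init();		/* bootmem is gone after this */
	/* ... */
	page_cgroup_init();	/* hence alloc_bootmem() can no longer be used here */
	/* ... */
}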
On Fri, 12 Jun 2009 12:01:42 +0800
Li Zefan <[email protected]> wrote:
> alloc_bootmem() is not gone, but the slab allocator is set up much earlier now.
> See this commit:
>
> commit 83b519e8b9572c319c8e0c615ee5dd7272856090
> Author: Pekka Enberg <[email protected]>
> Date: Wed Jun 10 19:40:04 2009 +0300
>
> slab: setup allocators earlier in the boot sequence
>
> Now page_cgroup_init() is called after mem_init().
OK, Li-san, could you test this on a !SPARSEMEM config?
x86-64 doesn't allow memory models other than SPARSEMEM.
This works well on SPARSEMEM.
I think FLATMEM should go away in the future... but maybe never ;(
Thanks,
-Kame
==
From: KAMEZAWA Hiroyuki <[email protected]>
Now, SLAB is configured at a very early stage and can be used in
init routines.
But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup
initialization breaks the allocation now.
(It works well in the SPARSEMEM case... SPARSEMEM supports MEMORY_HOTPLUG,
and the size of page_cgroup stays reasonable (< 1 << MAX_ORDER).)
This patch revives FLATMEM+memory cgroup by using alloc_bootmem.
In the future, we will stop supporting FLATMEM (if there are no users) or
rewrite the code for flatmem completely. But that would add more messy
code and (big) overheads.
Reported-by: Li Zefan <[email protected]>
Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
---
Index: linux-2.6.30.org/init/main.c
===================================================================
--- linux-2.6.30.org.orig/init/main.c
+++ linux-2.6.30.org/init/main.c
@@ -539,6 +539,11 @@ void __init __weak thread_info_cache_ini
*/
static void __init mm_init(void)
{
+ /*
+ * page_cgroup requires countinous pages as memmap
+ * and it's bigger than MAX_ORDER unless SPARSEMEM.
+ */
+ page_cgroup_init_flatmem();
mem_init();
kmem_cache_init();
vmalloc_init();
Index: linux-2.6.30.org/mm/page_cgroup.c
===================================================================
--- linux-2.6.30.org.orig/mm/page_cgroup.c
+++ linux-2.6.30.org/mm/page_cgroup.c
@@ -47,8 +47,6 @@ static int __init alloc_node_page_cgroup
struct page_cgroup *base, *pc;
unsigned long table_size;
unsigned long start_pfn, nr_pages, index;
- struct page *page;
- unsigned int order;
start_pfn = NODE_DATA(nid)->node_start_pfn;
nr_pages = NODE_DATA(nid)->node_spanned_pages;
@@ -57,13 +55,11 @@ static int __init alloc_node_page_cgroup
return 0;
table_size = sizeof(struct page_cgroup) * nr_pages;
- order = get_order(table_size);
- page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
- if (!page)
- page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
- if (!page)
+
+ base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
+ table_size, PAG_SIZE, __pa(MAX_DMA_ADDRESS));
+ if (!base)
return -ENOMEM;
- base = page_address(page);
for (index = 0; index < nr_pages; index++) {
pc = base + index;
__init_page_cgroup(pc, start_pfn + index);
@@ -73,7 +69,7 @@ static int __init alloc_node_page_cgroup
return 0;
}
-void __init page_cgroup_init(void)
+void __init page_cgroup_init_flatmem(void)
{
int nid, fail;
@@ -117,16 +113,11 @@ static int __init_refok init_section_pag
if (!section->page_cgroup) {
nid = page_to_nid(pfn_to_page(pfn));
table_size = sizeof(struct page_cgroup) * PAGES_PER_SECTION;
- if (slab_is_available()) {
- base = kmalloc_node(table_size,
- GFP_KERNEL | __GFP_NOWARN, nid);
- if (!base)
- base = vmalloc_node(table_size, nid);
- } else {
- base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
- table_size,
- PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
- }
+ VM_BUG_ON(!slab_is_available());
+ base = kmalloc_node(table_size,
+ GFP_KERNEL | __GFP_NOWARN, nid);
+ if (!base)
+ base = vmalloc_node(table_size, nid);
} else {
/*
* We don't have to allocate page_cgroup again, but
Index: linux-2.6.30.org/include/linux/page_cgroup.h
===================================================================
--- linux-2.6.30.org.orig/include/linux/page_cgroup.h
+++ linux-2.6.30.org/include/linux/page_cgroup.h
@@ -18,7 +18,19 @@ struct page_cgroup {
};
void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat);
-void __init page_cgroup_init(void);
+
+#ifdef CONFIG_SPARSEMEM
+static inline void __init page_cgroup_init_flatmem(void)
+{
+}
+extern void __init page_cgroup_init(void);
+#else
+void __init page_cgroup_init_flatmem(void)
+static inline void __init page_cgroup_init(void)
+{
+}
+#endif
+
struct page_cgroup *lookup_page_cgroup(struct page *page);
enum {
@@ -87,6 +99,10 @@ static inline void page_cgroup_init(void
{
}
+static inline void __init page_cgroup_init_flatmem(void)
+{
+}
+
#endif
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
KAMEZAWA Hiroyuki wrote:
> On Fri, 12 Jun 2009 12:01:42 +0800
> Li Zefan <[email protected]> wrote:
>
>> alloc_bootmem() is not gone, but the slab allocator is set up much earlier now.
>> See this commit:
>>
>> commit 83b519e8b9572c319c8e0c615ee5dd7272856090
>> Author: Pekka Enberg <[email protected]>
>> Date: Wed Jun 10 19:40:04 2009 +0300
>>
>> slab: setup allocators earlier in the boot sequence
>>
>> Now page_cgroup_init() is called after mem_init().
>
> OK, Li-san, could you test this on a !SPARSEMEM config?
>
Yeah, the patch works. :)
Tested-by: Li Zefan <[email protected]>
Some comments below.
> x86-64 doesn't allow memory models other than SPARSEMEM.
> This works well on SPARSEMEM.
>
> I think FLATMEM should go away in the future... but maybe never ;(
>
> Thanks,
> -Kame
> ==
> From: KAMEZAWA Hiroyuki <[email protected]>
>
> Now, SLAB is configured at a very early stage and can be used in
> init routines.
>
> But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup
> initialization breaks the allocation now.
> (It works well in the SPARSEMEM case... SPARSEMEM supports MEMORY_HOTPLUG,
> and the size of page_cgroup stays reasonable (< 1 << MAX_ORDER).)
>
> This patch revives FLATMEM+memory cgroup by using alloc_bootmem.
>
> In the future, we will stop supporting FLATMEM (if there are no users) or
> rewrite the code for flatmem completely. But that would add more messy
> code and (big) overheads.
>
> Reported-by: Li Zefan <[email protected]>
> Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
> ---
> Index: linux-2.6.30.org/init/main.c
> ===================================================================
> --- linux-2.6.30.org.orig/init/main.c
> +++ linux-2.6.30.org/init/main.c
> @@ -539,6 +539,11 @@ void __init __weak thread_info_cache_ini
> */
> static void __init mm_init(void)
> {
> + /*
> + * page_cgroup requires countinous pages as memmap
> + * and it's bigger than MAX_ORDER unless SPARSEMEM.
checkpatch.pl complains:
ERROR: code indent should use tabs where possible
#107: FILE: init/main.c:543:
+ ^I * page_cgroup requires countinous pages as memmap$
ERROR: code indent should use tabs where possible
#108: FILE: init/main.c:544:
+ ^I * and it's bigger than MAX_ORDER unless SPARSEMEM.$
> + */
> + page_cgroup_init_flatmem();
> mem_init();
> kmem_cache_init();
> vmalloc_init();
> Index: linux-2.6.30.org/mm/page_cgroup.c
> ===================================================================
> --- linux-2.6.30.org.orig/mm/page_cgroup.c
> +++ linux-2.6.30.org/mm/page_cgroup.c
> @@ -47,8 +47,6 @@ static int __init alloc_node_page_cgroup
> struct page_cgroup *base, *pc;
> unsigned long table_size;
> unsigned long start_pfn, nr_pages, index;
> - struct page *page;
> - unsigned int order;
>
> start_pfn = NODE_DATA(nid)->node_start_pfn;
> nr_pages = NODE_DATA(nid)->node_spanned_pages;
> @@ -57,13 +55,11 @@ static int __init alloc_node_page_cgroup
> return 0;
>
> table_size = sizeof(struct page_cgroup) * nr_pages;
> - order = get_order(table_size);
> - page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
> - if (!page)
> - page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
> - if (!page)
> +
> + base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
> + table_size, PAG_SIZE, __pa(MAX_DMA_ADDRESS));
s/PAG_SIZE/PAGE_SIZE
> + if (!base)
> return -ENOMEM;
> - base = page_address(page);
> for (index = 0; index < nr_pages; index++) {
> pc = base + index;
> __init_page_cgroup(pc, start_pfn + index);
> @@ -73,7 +69,7 @@ static int __init alloc_node_page_cgroup
> return 0;
> }
>
> -void __init page_cgroup_init(void)
> +void __init page_cgroup_init_flatmem(void)
> {
>
> int nid, fail;
> @@ -117,16 +113,11 @@ static int __init_refok init_section_pag
> if (!section->page_cgroup) {
> nid = page_to_nid(pfn_to_page(pfn));
> table_size = sizeof(struct page_cgroup) * PAGES_PER_SECTION;
> - if (slab_is_available()) {
> - base = kmalloc_node(table_size,
> - GFP_KERNEL | __GFP_NOWARN, nid);
> - if (!base)
> - base = vmalloc_node(table_size, nid);
> - } else {
> - base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
> - table_size,
> - PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
> - }
> + VM_BUG_ON(!slab_is_available());
> + base = kmalloc_node(table_size,
> + GFP_KERNEL | __GFP_NOWARN, nid);
> + if (!base)
> + base = vmalloc_node(table_size, nid);
> } else {
> /*
> * We don't have to allocate page_cgroup again, but
> Index: linux-2.6.30.org/include/linux/page_cgroup.h
> ===================================================================
> --- linux-2.6.30.org.orig/include/linux/page_cgroup.h
> +++ linux-2.6.30.org/include/linux/page_cgroup.h
> @@ -18,7 +18,19 @@ struct page_cgroup {
> };
>
> void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat);
> -void __init page_cgroup_init(void);
> +
> +#ifdef CONFIG_SPARSEMEM
> +static inline void __init page_cgroup_init_flatmem(void)
> +{
> +}
> +extern void __init page_cgroup_init(void);
> +#else
> +void __init page_cgroup_init_flatmem(void)
trailing ';' is missing.
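That is, the declaration should presumably read:

void __init page_cgroup_init_flatmem(void);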
> +static inline void __init page_cgroup_init(void)
> +{
> +}
> +#endif
> +
> struct page_cgroup *lookup_page_cgroup(struct page *page);
>
> enum {
> @@ -87,6 +99,10 @@ static inline void page_cgroup_init(void
> {
> }
>
> +static inline void __init page_cgroup_init_flatmem(void)
> +{
> +}
> +
> #endif
>
> #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
>
On Fri, Jun 12, 2009 at 8:34 AM, KAMEZAWA Hiroyuki
<[email protected]> wrote:
> From: KAMEZAWA Hiroyuki <[email protected]>
>
> Now, SLAB is configured at a very early stage and can be used in
> init routines.
>
> But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup
> initialization breaks the allocation now.
> (It works well in the SPARSEMEM case... SPARSEMEM supports MEMORY_HOTPLUG,
> and the size of page_cgroup stays reasonable (< 1 << MAX_ORDER).)
>
> This patch revives FLATMEM+memory cgroup by using alloc_bootmem.
>
> In the future, we will stop supporting FLATMEM (if there are no users) or
> rewrite the code for flatmem completely. But that would add more messy
> code and (big) overheads.
>
> Reported-by: Li Zefan <[email protected]>
> Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
Looks good to me!
Acked-by: Pekka Enberg <[email protected]>
Do you want me to push this to Linus or will you take care of it?
On Fri, 12 Jun 2009 09:21:52 +0300
Pekka Enberg <[email protected]> wrote:
> > In the future, we will stop supporting FLATMEM (if there are no users) or
> > rewrite the code for flatmem completely. But that would add more messy
> > code and (big) overheads.
> >
> > Reported-by: Li Zefan <[email protected]>
> > Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
>
> Looks good to me!
>
> Acked-by: Pekka Enberg <[email protected]>
>
> Do you want me to push this to Linus or will you take care of it?
>
Could you please push this one? The typos pointed out by Li Zefan are fixed.
Thank you all.
-Kame
==
From: KAMEZAWA Hiroyuki <[email protected]>
Now, SLAB is configured at a very early stage and can be used in
init routines.
But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup
initialization breaks the allocation now.
(It works well in the SPARSEMEM case... SPARSEMEM supports MEMORY_HOTPLUG,
and the size of page_cgroup stays reasonable (< 1 << MAX_ORDER).)
This patch revives FLATMEM+memory cgroup by using alloc_bootmem.
In the future, we will stop supporting FLATMEM (if there are no users) or
rewrite the code for flatmem completely. But that would add more messy
code and overheads.
Changelog: v1->v2
- fixed typos.
Acked-by: Pekka Enberg <[email protected]>
Tested-by: Li Zefan <[email protected]>
Reported-by: Li Zefan <[email protected]>
Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
---
include/linux/page_cgroup.h | 18 +++++++++++++++++-
init/main.c | 5 +++++
mm/page_cgroup.c | 29 ++++++++++-------------------
3 files changed, 32 insertions(+), 20 deletions(-)
Index: linux-2.6.30.org/init/main.c
===================================================================
--- linux-2.6.30.org.orig/init/main.c 2009-06-11 19:02:53.000000000 +0900
+++ linux-2.6.30.org/init/main.c 2009-06-11 20:49:21.000000000 +0900
@@ -539,6 +539,11 @@
*/
static void __init mm_init(void)
{
+ /*
+ * page_cgroup requires continuous pages as memmap
+ * and it's bigger than MAX_ORDER unless SPARSEMEM.
+ */
+ page_cgroup_init_flatmem();
mem_init();
kmem_cache_init();
vmalloc_init();
Index: linux-2.6.30.org/mm/page_cgroup.c
===================================================================
--- linux-2.6.30.org.orig/mm/page_cgroup.c 2009-06-11 19:02:53.000000000 +0900
+++ linux-2.6.30.org/mm/page_cgroup.c 2009-06-11 20:49:59.000000000 +0900
@@ -47,8 +47,6 @@
struct page_cgroup *base, *pc;
unsigned long table_size;
unsigned long start_pfn, nr_pages, index;
- struct page *page;
- unsigned int order;
start_pfn = NODE_DATA(nid)->node_start_pfn;
nr_pages = NODE_DATA(nid)->node_spanned_pages;
@@ -57,13 +55,11 @@
return 0;
table_size = sizeof(struct page_cgroup) * nr_pages;
- order = get_order(table_size);
- page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
- if (!page)
- page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
- if (!page)
+
+ base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
+ table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
+ if (!base)
return -ENOMEM;
- base = page_address(page);
for (index = 0; index < nr_pages; index++) {
pc = base + index;
__init_page_cgroup(pc, start_pfn + index);
@@ -73,7 +69,7 @@
return 0;
}
-void __init page_cgroup_init(void)
+void __init page_cgroup_init_flatmem(void)
{
int nid, fail;
@@ -117,16 +113,11 @@
if (!section->page_cgroup) {
nid = page_to_nid(pfn_to_page(pfn));
table_size = sizeof(struct page_cgroup) * PAGES_PER_SECTION;
- if (slab_is_available()) {
- base = kmalloc_node(table_size,
- GFP_KERNEL | __GFP_NOWARN, nid);
- if (!base)
- base = vmalloc_node(table_size, nid);
- } else {
- base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
- table_size,
- PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
- }
+ VM_BUG_ON(!slab_is_available());
+ base = kmalloc_node(table_size,
+ GFP_KERNEL | __GFP_NOWARN, nid);
+ if (!base)
+ base = vmalloc_node(table_size, nid);
} else {
/*
* We don't have to allocate page_cgroup again, but
Index: linux-2.6.30.org/include/linux/page_cgroup.h
===================================================================
--- linux-2.6.30.org.orig/include/linux/page_cgroup.h 2009-06-10 12:05:27.000000000 +0900
+++ linux-2.6.30.org/include/linux/page_cgroup.h 2009-06-11 20:50:32.000000000 +0900
@@ -18,7 +18,19 @@
};
void __meminit pgdat_page_cgroup_init(struct pglist_data *pgdat);
-void __init page_cgroup_init(void);
+
+#ifdef CONFIG_SPARSEMEM
+static inline void __init page_cgroup_init_flatmem(void)
+{
+}
+extern void __init page_cgroup_init(void);
+#else
+void __init page_cgroup_init_flatmem(void);
+static inline void __init page_cgroup_init(void)
+{
+}
+#endif
+
struct page_cgroup *lookup_page_cgroup(struct page *page);
enum {
@@ -87,6 +99,10 @@
{
}
+static inline void __init page_cgroup_init_flatmem(void)
+{
+}
+
#endif
#ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
Li Zefan wrote:
> (This patch should have CCed the memcg maintainers)
>
> My box failed to boot due to an initialization failure of page_cgroup,
> and it's caused by this patch:
>
> + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
>
> I added a printk, and found that order == 11 == MAX_ORDER.
>
> Pekka J Enberg wrote:
>> From: Yinghai Lu <[email protected]>
>>
>> The bootmem allocator is no longer available for page_cgroup_init() because we
>> set up the kernel slab allocator much earlier now.
>>
>> Cc: Ingo Molnar <[email protected]>
>> Cc: Johannes Weiner <[email protected]>
>> Cc: Linus Torvalds <[email protected]>
>> Signed-off-by: Yinghai Lu <[email protected]>
>> Signed-off-by: Pekka Enberg <[email protected]>
>> ---
>> mm/page_cgroup.c | 12 ++++++++----
>> 1 files changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
>> index 791905c..3dd4a90 100644
>> --- a/mm/page_cgroup.c
>> +++ b/mm/page_cgroup.c
>> @@ -47,6 +47,8 @@ static int __init alloc_node_page_cgroup(int nid)
>> struct page_cgroup *base, *pc;
>> unsigned long table_size;
>> unsigned long start_pfn, nr_pages, index;
>> + struct page *page;
>> + unsigned int order;
>>
>> start_pfn = NODE_DATA(nid)->node_start_pfn;
>> nr_pages = NODE_DATA(nid)->node_spanned_pages;
>> @@ -55,11 +57,13 @@ static int __init alloc_node_page_cgroup(int nid)
>> return 0;
>>
>> table_size = sizeof(struct page_cgroup) * nr_pages;
>> -
>> - base = __alloc_bootmem_node_nopanic(NODE_DATA(nid),
>> - table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
>> - if (!base)
>> + order = get_order(table_size);
>> + page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
>> + if (!page)
>> + page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
This should probably come with a KERN_WARNING indicating that the page_cgroup
table is now allocated from the current node rather than the desired node.
It'll help debug potential issues later.
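Something along these lines, presumably (an illustrative sketch only, not
code from any posted patch):

	page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO, order);
	if (!page) {
		printk(KERN_WARNING "page_cgroup: node %d allocation failed, "
		       "falling back to current node\n", nid);
		page = alloc_pages_node(-1, GFP_NOWAIT | __GFP_ZERO, order);
	}
	if (!page)
		return -ENOMEM;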
>> + if (!page)
>> return -ENOMEM;
>> + base = page_address(page);
>> for (index = 0; index < nr_pages; index++) {
>> pc = base + index;
>> __init_page_cgroup(pc, start_pfn + index);
Looks good to me. Does it work for you, Yinghai? Kamezawa-san, could you take a look?
--
Balbir
KAMEZAWA Hiroyuki wrote:
> On Fri, 12 Jun 2009 09:21:52 +0300
> Pekka Enberg <[email protected]> wrote:
>>> In the future, we will stop supporting FLATMEM (if there are no users) or
>>> rewrite the code for flatmem completely. But that would add more messy
>>> code and (big) overheads.
>>>
>>> Reported-by: Li Zefan <[email protected]>
>>> Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
>> Looks good to me!
>>
>> Acked-by: Pekka Enberg <[email protected]>
>>
>> Do you want me to push this to Linus or will you take care of it?
>>
> Could you please push this one? The typos pointed out by Li Zefan are fixed.
>
> Thank you all.
> -Kame
> ==
> From: KAMEZAWA Hiroyuki <[email protected]>
>
> Now, SLAB is configured at a very early stage and can be used in
> init routines.
>
> But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup
> initialization breaks the allocation now.
> (It works well in the SPARSEMEM case... SPARSEMEM supports MEMORY_HOTPLUG,
> and the size of page_cgroup stays reasonable (< 1 << MAX_ORDER).)
>
> This patch revives FLATMEM+memory cgroup by using alloc_bootmem.
>
> In the future, we will stop supporting FLATMEM (if there are no users) or
> rewrite the code for flatmem completely. But that would add more messy
> code and overheads.
>
> Changelog: v1->v2
> - fixed typos.
>
> Acked-by: Pekka Enberg <[email protected]>
> Tested-by: Li Zefan <[email protected]>
> Reported-by: Li Zefan <[email protected]>
> Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
I see you've responded already, thanks!
The diff is a bit confusing; was Pekka's patch already integrated? In my
version of mmotm, I don't see the alloc_pages_node() change in my source base.
But overall I agree with the change.
--
Balbir
On Fri, 2009-06-12 at 20:45 +0530, Balbir Singh wrote:
> > From: KAMEZAWA Hiroyuki <[email protected]>
> >
> > Now, SLAB is configured at a very early stage and can be used in
> > init routines.
> >
> > But replacing alloc_bootmem() in FLAT/DISCONTIGMEM's page_cgroup
> > initialization breaks the allocation now.
> > (It works well in the SPARSEMEM case... SPARSEMEM supports MEMORY_HOTPLUG,
> > and the size of page_cgroup stays reasonable (< 1 << MAX_ORDER).)
> >
> > This patch revives FLATMEM+memory cgroup by using alloc_bootmem.
> >
> > In the future, we will stop supporting FLATMEM (if there are no users) or
> > rewrite the code for flatmem completely. But that would add more messy
> > code and overheads.
> >
> > Changelog: v1->v2
> > - fixed typos.
> >
> > Acked-by: Pekka Enberg <[email protected]>
> > Tested-by: Li Zefan <[email protected]>
> > Reported-by: Li Zefan <[email protected]>
> > Signed-off-by: KAMEZAWA Hiroyuki <[email protected]>
>
> I see you've responded already, thanks!
>
> The diff is a bit confusing; was Pekka's patch already integrated? In my
> version of mmotm, I don't see the alloc_pages_node() change in my source base.
Yes, my patch hit mainline on Thursday or so and this patch is now in as well.
Pekka