The following patches fix two memory leaks related to CPU hotplug.
Some per-CPU data is allocated each time a CPU is set online, but
this space is never freed.
Usually this memory leak is not a big deal (for normal CPU hotplug
usage), but during stress tests with lots of CPU offline/online
cycles it really matters.
The total leak is 40K (10 pages) per offline/online cycle per CPU.
I've verified both fixes by performing more than 90000 CPU
offline/online cycles.
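(The 40K is the sum of the two leaks fixed below: 16K for the IRQ
stack and 24K, i.e. 6 pages, for the exception stacks.)
For reference, such a stress test can be driven through sysfs; here
is a minimal illustrative sketch, not the exact harness, and the CPU
number and cycle count are arbitrary:

/* Toggle one CPU offline/online in a loop via sysfs. */
#include <stdio.h>
#include <stdlib.h>

static void set_cpu_online(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                exit(1);
        }
        fputs(val, f);
        fclose(f);
}

int main(void)
{
        const char *path = "/sys/devices/system/cpu/cpu1/online";
        int i;

        for (i = 0; i < 90000; i++) {
                set_cpu_online(path, "0");      /* offline */
                set_cpu_online(path, "1");      /* online  */
        }
        return 0;
}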
This is not a regression, but I think it's still 2.6.27 material.
Please apply.
Regards,
Andreas
BTW, there is still a leak of about 600 bytes per offline/online cycle
per CPU -- according to slabinfo this might be sysfs-related. The most
suspicious entries are:
Name              Objects Objsize  Space Slabs/Part/Cpu  O/S  O %Fr %Ef Flg
sysfs_dir_cache    266085      80  41.9M      10235/4/0   26  0   0  50 PZFU
kmalloc-8          130870       8  10.5M       2568/5/0   51  0   0   9 PZFU
pda->irqstackptr is allocated whenever a CPU is set online, but it
is never freed. This results in a memory leak of 16K for each CPU
offline/online cycle.
The fix is to allocate pda->irqstackptr only once.
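The pattern, as a minimal user-space sketch (invented names, not the
kernel code itself -- the real change is in the diff below):

/* Allocate the per-CPU IRQ stack only on the first online event. */
#include <stdlib.h>

#define IRQSTACK_SIZE   (16 * 1024)

static char *irqstackptr;               /* stands in for pda->irqstackptr */

static void online_event(void)
{
        if (!irqstackptr) {             /* allocate only the first time */
                irqstackptr = malloc(IRQSTACK_SIZE);
                if (!irqstackptr)
                        abort();
        }
        /* later online events reuse the existing stack -- no leak */
}

int main(void)
{
        online_event();
        online_event();                 /* second online: no new allocation */
        return 0;
}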
Signed-off-by: Andreas Herrmann <[email protected]>
---
arch/x86/kernel/cpu/common_64.c | 15 +++++++++------
1 files changed, 9 insertions(+), 6 deletions(-)
This is a re-submission of a patch posted last week. See
http://marc.info/?l=linux-kernel&m=121760147831093
Patch is against 2.6.27-rc2.
diff --git a/arch/x86/kernel/cpu/common_64.c b/arch/x86/kernel/cpu/common_64.c
index dd6e3f1..c941397 100644
--- a/arch/x86/kernel/cpu/common_64.c
+++ b/arch/x86/kernel/cpu/common_64.c
@@ -493,17 +493,20 @@ void pda_init(int cpu)
/* others are initialized in smpboot.c */
pda->pcurrent = &init_task;
pda->irqstackptr = boot_cpu_stack;
+ pda->irqstackptr += IRQSTACKSIZE - 64;
} else {
- pda->irqstackptr = (char *)
- __get_free_pages(GFP_ATOMIC, IRQSTACK_ORDER);
- if (!pda->irqstackptr)
- panic("cannot allocate irqstack for cpu %d", cpu);
+ if (!pda->irqstackptr) {
+ pda->irqstackptr = (char *)
+ __get_free_pages(GFP_ATOMIC, IRQSTACK_ORDER);
+ if (!pda->irqstackptr)
+ panic("cannot allocate irqstack for cpu %d",
+ cpu);
+ pda->irqstackptr += IRQSTACKSIZE - 64;
+ }
if (pda->nodenumber == 0 && cpu_to_node(cpu) != NUMA_NO_NODE)
pda->nodenumber = cpu_to_node(cpu);
}
-
- pda->irqstackptr += IRQSTACKSIZE-64;
}
char boot_exception_stacks[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ +
--
1.5.6.4
Exception stacks are allocated each time a CPU is set online, but
the allocated space is never freed. Thus each CPU offline/online
cycle leaks 24K (6 pages) per CPU.
The fix is to allocate the exception stacks only once -- when the
CPU is set online for the first time.
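Sketched as a small user-space analogue (names and sizes simplified;
the real change is in the diff below): a non-zero first IST entry
marks the stacks as already allocated, so later online events skip
the allocation loop.

/* Allocate the exception stacks only on the first online event. */
#include <stdlib.h>

#define N_STACKS        5
#define STACK_SIZE      4096

static unsigned long ist[N_STACKS];     /* stands in for orig_ist->ist[] */

static void setup_exception_stacks(void)
{
        int v;

        if (ist[0])                     /* already set up earlier */
                return;

        for (v = 0; v < N_STACKS; v++) {
                char *stack = malloc(STACK_SIZE);

                if (!stack)
                        abort();
                ist[v] = (unsigned long)(stack + STACK_SIZE);   /* stack top */
        }
}

int main(void)
{
        setup_exception_stacks();
        setup_exception_stacks();       /* no re-allocation, no leak */
        return 0;
}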
Signed-off-by: Andreas Herrmann <[email protected]>
---
arch/x86/kernel/cpu/common_64.c | 23 +++++++++++++----------
1 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/cpu/common_64.c b/arch/x86/kernel/cpu/common_64.c
index c941397..a5b9600 100644
--- a/arch/x86/kernel/cpu/common_64.c
+++ b/arch/x86/kernel/cpu/common_64.c
@@ -604,19 +604,22 @@ void __cpuinit cpu_init(void)
/*
* set up and load the per-CPU TSS
*/
- for (v = 0; v < N_EXCEPTION_STACKS; v++) {
+ if (!orig_ist->ist[0]) {
static const unsigned int order[N_EXCEPTION_STACKS] = {
- [0 ... N_EXCEPTION_STACKS - 1] = EXCEPTION_STACK_ORDER,
- [DEBUG_STACK - 1] = DEBUG_STACK_ORDER
+ [0 ... N_EXCEPTION_STACKS - 1] = EXCEPTION_STACK_ORDER,
+ [DEBUG_STACK - 1] = DEBUG_STACK_ORDER
};
- if (cpu) {
- estacks = (char *)__get_free_pages(GFP_ATOMIC, order[v]);
- if (!estacks)
- panic("Cannot allocate exception stack %ld %d\n",
- v, cpu);
+ for (v = 0; v < N_EXCEPTION_STACKS; v++) {
+ if (cpu) {
+ estacks = (char *)__get_free_pages(GFP_ATOMIC, order[v]);
+ if (!estacks)
+ panic("Cannot allocate exception "
+ "stack %ld %d\n", v, cpu);
+ }
+ estacks += PAGE_SIZE << order[v];
+ orig_ist->ist[v] = t->x86_tss.ist[v] =
+ (unsigned long)estacks;
}
- estacks += PAGE_SIZE << order[v];
- orig_ist->ist[v] = t->x86_tss.ist[v] = (unsigned long)estacks;
}
t->x86_tss.io_bitmap_base = offsetof(struct tss_struct, io_bitmap);
--
1.5.6.4
* Andreas Herrmann <[email protected]> wrote:
> The following patches fix two memory leaks related to CPU hotplug.
> Some per-CPU data is allocated each time a CPU is set online, but
> this space is never freed.
>
> Usually this memory leak is not a big deal (for normal CPU hotplug
> usage), but during stress tests with lots of CPU offline/online
> cycles it really matters.
>
> The total leak is 40K (10 pages) per offline/online cycle per CPU.
> I've verified both fixes by performing more than 90000 CPU
> offline/online cycles.
Applied to tip/x86/core, thanks Andreas.
> This is not a regression, but I think it's still 2.6.27 material.
> Please apply.
It's tricky code, so I guess it's best to let it cook in tip/master a
bit. If it does not show up upstream by, say, -rc4 time, could you
please ping us about it?
Ingo