2005-12-07 21:56:38

by Christoph Lameter

Subject: [PATCH] swap migration: Fix lru drain

isolate_page() currently uses an IPI to notify other processors that the lru
caches need to be drained if the page cannot be found on the LRU. The IPI
may interrupt a processor that is in the middle of processing lru requests
and cause a race condition.

This patch introduces a new function schedule_on_each_cpu() that uses keventd
to run the LRU draining on each processor. Processors disable preemption
when dealing with the LRU caches (these are per processor) and thus executing
LRU draining from another process is safe.
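
For reference, the callback being scheduled is only a thin wrapper around
lru_add_drain(); roughly (the exact form in the tree may differ):

	/* Runs from keventd, i.e. process context, on each cpu.
	 * lru_add_drain() takes the per-cpu pagevecs via get_cpu_var(),
	 * so preemption stays disabled while they are touched. */
	static void lru_add_drain_per_cpu(void *dummy)
	{
		lru_add_drain();
	}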

Thanks to Lee Schermerhorn <[email protected]> for finding this race
condition.

This makes the

preserve-irq-status-in-release_pages-__pagevec_lru_add.patch

in Andrew's tree no longer necessary.

Signed-off-by: Christoph Lameter <[email protected]>

Index: linux-2.6.15-rc5-mm1/mm/vmscan.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c 2005-12-05 11:32:21.000000000 -0800
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c 2005-12-05 13:02:27.000000000 -0800
@@ -1038,7 +1038,7 @@ redo:
* Maybe this page is still waiting for a cpu to drain it
* from one of the lru lists?
*/
- on_each_cpu(lru_add_drain_per_cpu, NULL, 0, 1);
+ schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
if (PageLRU(page))
goto redo;
}
Index: linux-2.6.15-rc5-mm1/include/linux/workqueue.h
===================================================================
--- linux-2.6.15-rc5-mm1.orig/include/linux/workqueue.h 2005-12-03 21:10:42.000000000 -0800
+++ linux-2.6.15-rc5-mm1/include/linux/workqueue.h 2005-12-05 13:02:07.000000000 -0800
@@ -65,6 +65,7 @@ extern int FASTCALL(schedule_work(struct
extern int FASTCALL(schedule_delayed_work(struct work_struct *work, unsigned long delay));

extern int schedule_delayed_work_on(int cpu, struct work_struct *work, unsigned long delay);
+extern void schedule_on_each_cpu(void (*func)(void *info), void *info);
extern void flush_scheduled_work(void);
extern int current_is_keventd(void);
extern int keventd_up(void);
Index: linux-2.6.15-rc5-mm1/kernel/workqueue.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/kernel/workqueue.c 2005-12-05 11:15:24.000000000 -0800
+++ linux-2.6.15-rc5-mm1/kernel/workqueue.c 2005-12-06 17:50:44.000000000 -0800
@@ -424,6 +424,19 @@ int schedule_delayed_work_on(int cpu,
return ret;
}

+void schedule_on_each_cpu(void (*func) (void *info), void *info)
+{
+ int cpu;
+ struct work_struct * work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
+
+ for_each_online_cpu(cpu) {
+ INIT_WORK(work + cpu, func, info);
+ __queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work + cpu);
+ }
+ flush_workqueue(keventd_wq);
+ kfree(work);
+}
+
void flush_scheduled_work(void)
{
flush_workqueue(keventd_wq);


2005-12-07 23:27:27

by Nick Piggin

Subject: Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter wrote:
> isolate_page() currently uses an IPI to notify other processors that the lru
> caches need to be drained if the page cannot be found on the LRU. The IPI
> may interrupt a processor that is in the middle of processing lru requests
> and cause a race condition.
>
> This patch introduces a new function schedule_on_each_cpu() that uses keventd
> to run the LRU draining on each processor. Processors disable preemption
> when dealing with the LRU caches (these are per processor) and thus executing
> LRU draining from another process is safe.
>

Couple of comments:

> ===================================================================
> --- linux-2.6.15-rc5-mm1.orig/kernel/workqueue.c 2005-12-05 11:15:24.000000000 -0800
> +++ linux-2.6.15-rc5-mm1/kernel/workqueue.c 2005-12-06 17:50:44.000000000 -0800
> @@ -424,6 +424,19 @@ int schedule_delayed_work_on(int cpu,
> return ret;
> }
>
> +void schedule_on_each_cpu(void (*func) (void *info), void *info)
> +{
> + int cpu;
> + struct work_struct * work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
> +

Do we need a lock_cpu_hotplug() around here?
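
Something like this is what I have in mind -- only a sketch, using the
existing lock_cpu_hotplug()/unlock_cpu_hotplug() helpers so the set of
online cpus can't change between queueing the work and flushing it:

	lock_cpu_hotplug();
	for_each_online_cpu(cpu) {
		INIT_WORK(work + cpu, func, info);
		__queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work + cpu);
	}
	flush_workqueue(keventd_wq);
	unlock_cpu_hotplug();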

> + for_each_online_cpu(cpu) {
> + INIT_WORK(work + cpu, func, info);
> + __queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work + cpu);
> + }
> + flush_workqueue(keventd_wq);
> + kfree(work);
> +}
> +

Can't this deadlock if 2 CPUs each send work to the other?

--
SUSE Labs, Novell Inc.

2005-12-08 00:12:04

by Andrew Morton

Subject: Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter <[email protected]> wrote:
>
> isolate_page()

There's no such function. You're referring to isolate_lru_page().

btw, lru_add_drain() appears to be identical to lru_drain_cache().

> currently uses an IPI to notify other processors that the lru
> caches need to be drained if the page cannot be found on the LRU. The IPI
> may interrupt a processor that is in the middle of processing lru requests
> and cause a race condition.
>
> This patch introduces a new function schedule_on_each_cpu() that uses keventd
> to run the LRU draining on each processor. Processors disable preemption
> when dealing with the LRU caches (these are per processor) and thus executing
> LRU draining from another process is safe.
>
> Thanks to Lee Schermerhorn <[email protected]> for finding this race
> condition.
>
> This makes the
>
> preserve-irq-status-in-release_pages-__pagevec_lru_add.patch
>
> in Andrew's tree no longer necessary.

Why not just extend the irq protection into lru_cache_add[_active]() and
lru_add_drain()?

Answer: because
preserve-irq-status-in-release_pages-__pagevec_lru_add.patch sucks, and
extending it in this manner sucks more.

Being able to push everything up to process context as you're proposing
puts all the suckiness into this slowpath, so fine.
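
To spell out what "extending the irq protection" would have meant: every
fastpath that touches the per-cpu pagevecs would bracket the access with
local_irq_save()/local_irq_restore(), roughly like this hypothetical sketch
(not a patch, function name made up):

	/* hypothetical: lru_cache_add() with the irq protection pushed in */
	static void lru_cache_add_irqsafe(struct page *page)
	{
		unsigned long flags;
		struct pagevec *pvec;

		local_irq_save(flags);	/* keep the IPI-driven drain out */
		pvec = &__get_cpu_var(lru_add_pvecs);
		page_cache_get(page);
		if (!pagevec_add(pvec, page))
			__pagevec_lru_add(pvec);
		local_irq_restore(flags);
	}

That irq disable/enable on every page added to the LRU is exactly the cost
the process-context approach avoids.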

> +void schedule_on_each_cpu(void (*func) (void *info), void *info)
> +{
> + int cpu;
> + struct work_struct * work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
> +
> + for_each_online_cpu(cpu) {
> + INIT_WORK(work + cpu, func, info);
> + __queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu), work + cpu);
> + }
> + flush_workqueue(keventd_wq);
> + kfree(work);
> +}

Normally it's poor form for a library function to assume it can use
GFP_KERNEL. But in this case, the allocation has such a huge upper-bound
that there's no reasonable alternative, so OK.

kmalloc() can return NULL, y'know.

80-col xterms, please.

2005-12-08 00:41:54

by Christoph Lameter

Subject: Re: [PATCH] swap migration: Fix lru drain

On Wed, 7 Dec 2005, Andrew Morton wrote:

> > in Andrew's tree no longer necessary.
>
> Why not just extend the irq protection into lru_cache_add[_active]() and
> lru_add_drain()?
>
> Answer: because
> preserve-irq-status-in-release_pages-__pagevec_lru_add.patch sucks, and
> extending it in this manner sucks more.

The concern was that doing so would mean disabling interrupts in more
cases.

> Being able to push everything up to process context as you're proposing
> puts all the suckiness into this slowpath, so fine.

Right.

> Normally it's poor form for a library function to assume it can use
> GFP_KERNEL. But in this case, the allocation has such a huge upper-bound
> that there's no reasonable alternative, so OK.

flush_workqueue() can sleep. GFP_ATOMIC is not a possibility.

> kmalloc() can return NULL, y'know.

Fixed. Here is a new version:

isolate_lru_page() currently uses an IPI to notify other processors that
the lru caches need to be drained if the page cannot be found on the LRU.
The IPI may interrupt a processor that is in the middle of processing lru
requests and cause a race condition.

This patch introduces a new function schedule_on_each_cpu() that uses keventd
to run the LRU draining on each processor. Processors disable preemption
when dealing with the LRU caches (these are per processor) and thus executing
LRU draining from another process is safe.

Thanks to Lee Schermerhorn <[email protected]> for finding this race
condition.

Signed-off-by: Christoph Lameter <[email protected]>

Index: linux-2.6.15-rc5-mm1/mm/vmscan.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c 2005-12-05 11:32:21.000000000 -0800
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c 2005-12-06 18:03:24.000000000 -0800
@@ -1038,7 +1038,7 @@ redo:
* Maybe this page is still waiting for a cpu to drain it
* from one of the lru lists?
*/
- on_each_cpu(lru_add_drain_per_cpu, NULL, 0, 1);
+ schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
if (PageLRU(page))
goto redo;
}
Index: linux-2.6.15-rc5-mm1/include/linux/workqueue.h
===================================================================
--- linux-2.6.15-rc5-mm1.orig/include/linux/workqueue.h 2005-12-03 21:10:42.000000000 -0800
+++ linux-2.6.15-rc5-mm1/include/linux/workqueue.h 2005-12-07 16:29:47.000000000 -0800
@@ -65,6 +65,7 @@ extern int FASTCALL(schedule_work(struct
extern int FASTCALL(schedule_delayed_work(struct work_struct *work, unsigned long delay));

extern int schedule_delayed_work_on(int cpu, struct work_struct *work, unsigned long delay);
+extern int schedule_on_each_cpu(void (*func)(void *info), void *info);
extern void flush_scheduled_work(void);
extern int current_is_keventd(void);
extern int keventd_up(void);
Index: linux-2.6.15-rc5-mm1/kernel/workqueue.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/kernel/workqueue.c 2005-12-05 11:15:24.000000000 -0800
+++ linux-2.6.15-rc5-mm1/kernel/workqueue.c 2005-12-07 16:38:54.000000000 -0800
@@ -424,6 +424,25 @@ int schedule_delayed_work_on(int cpu,
return ret;
}

+int schedule_on_each_cpu(void (*func) (void *info), void *info)
+{
+ int cpu;
+ struct work_struct *work;
+
+ work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
+
+ if (!work)
+ return 0;
+ for_each_online_cpu(cpu) {
+ INIT_WORK(work + cpu, func, info);
+ __queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu),
+ work + cpu);
+ }
+ flush_workqueue(keventd_wq);
+ kfree(work);
+ return 1;
+}
+
void flush_scheduled_work(void)
{
flush_workqueue(keventd_wq);

2005-12-08 00:43:30

by Christoph Lameter

Subject: Re: [PATCH] swap migration: Fix lru drain

On Thu, 8 Dec 2005, Nick Piggin wrote:

> Do we need a lock_cpu_hotplug() around here?

Well, then we may need that lock for each "for_each_online_cpu" use?

> Can't this deadlock if 2 CPUs each send work to the other?

Then we would need to fix the workqueue flushing function.

2005-12-08 00:56:10

by Andrew Morton

Subject: Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter <[email protected]> wrote:
>
> +int schedule_on_each_cpu(void (*func) (void *info), void *info)
> +{
> + int cpu;
> + struct work_struct *work;
> +
> + work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);
> +
> + if (!work)
> + return 0;
> + for_each_online_cpu(cpu) {
> + INIT_WORK(work + cpu, func, info);
> + __queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu),
> + work + cpu);
> + }
> + flush_workqueue(keventd_wq);
> + kfree(work);
> + return 1;
> +}

I'll change this to return 0 on success, or -ENOMEM. Bit more
conventional, no?

2005-12-08 00:56:45

by Andrew Morton

Subject: Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter <[email protected]> wrote:
>
> On Thu, 8 Dec 2005, Nick Piggin wrote:
>
> > Do we need a lock_cpu_hotplug() around here?
>
> Well, then we may need that lock for each "for_each_online_cpu" use?
>

I suppose so..

> > Can't this deadlock if 2 CPUs each send work to the other?
>
> Then we would need to fix the workqueue flushing function.

I don't think I can see a deadlock here.

2005-12-08 00:59:46

by Nick Piggin

Subject: Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter wrote:

>On Thu, 8 Dec 2005, Nick Piggin wrote:
>
>
>>Do we need a lock_cpu_hotplug() around here?
>>
>
>Well, then we may need that lock for each "for_each_online_cpu" use?
>

I think it depends on where and how it is used?

eg. for statistics gathering it doesn't matter so much. In this
case it would seem that you do want an actual online CPU... though
on looking at the workqueue code it seems that some of it would be
racy in a similar way, so perhaps this is handled elsewhere (I
can't see how, though).

>
>
>>Can't this deadlock if 2 CPUs each send work to the other?
>>
>
>Then we would need to fix the workqueue flushing function.
>
>

Oh, you're right.

2005-12-08 01:14:46

by Christoph Lameter

Subject: Re: [PATCH] swap migration: Fix lru drain

On Wed, 7 Dec 2005, Andrew Morton wrote:

> I'll change this to return 0 on success, or -ENOMEM. Bit more
> conventional, no?

Ok. That also allows the addition of other error conditions in the future.
Need to revise isolate_lru_page to reflect that.

Index: linux-2.6.15-rc5-mm1/mm/vmscan.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/mm/vmscan.c 2005-12-06 18:03:24.000000000 -0800
+++ linux-2.6.15-rc5-mm1/mm/vmscan.c 2005-12-07 17:11:58.000000000 -0800
@@ -1038,8 +1038,8 @@ redo:
* Maybe this page is still waiting for a cpu to drain it
* from one of the lru lists?
*/
- schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
- if (PageLRU(page))
+ rc = schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
+ if (rc == 0 && PageLRU(page))
goto redo;
}
return rc;
Index: linux-2.6.15-rc5-mm1/kernel/workqueue.c
===================================================================
--- linux-2.6.15-rc5-mm1.orig/kernel/workqueue.c 2005-12-07 16:38:54.000000000 -0800
+++ linux-2.6.15-rc5-mm1/kernel/workqueue.c 2005-12-07 17:13:04.000000000 -0800
@@ -432,7 +432,7 @@ int schedule_on_each_cpu(void (*func) (v
work = kmalloc(NR_CPUS * sizeof(struct work_struct), GFP_KERNEL);

if (!work)
- return 0;
+ return -ENOMEM;
for_each_online_cpu(cpu) {
INIT_WORK(work + cpu, func, info);
__queue_work(per_cpu_ptr(keventd_wq->cpu_wq, cpu),
@@ -440,7 +440,7 @@ int schedule_on_each_cpu(void (*func) (v
}
flush_workqueue(keventd_wq);
kfree(work);
- return 1;
+ return 0;
}

void flush_scheduled_work(void)

2005-12-08 05:05:24

by Kamezawa Hiroyuki

Subject: Re: [Lhms-devel] Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter wrote:
> On Wed, 7 Dec 2005, Andrew Morton wrote:
>
>
>>I'll change this to return 0 on success, or -ENOMEM. Bit more
>>conventional, no?
>
>
> Ok. That also allows the addition of other error conditions in the future.
> Need to revise isolate_lru_page to reflect that.
>
I think this 'schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);' will be used
by the memory hot-remove code and some other code.
How about moving this to swap.c and naming it 'lru_add_drain_all()'?
(There are no other users right now, though....)
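
As a rough sketch (name and placement only as suggested above, nothing
tested), it would just be a wrapper in mm/swap.c:

	/* sketch: drain the per-cpu lru pagevecs on every online cpu */
	int lru_add_drain_all(void)
	{
		return schedule_on_each_cpu(lru_add_drain_per_cpu, NULL);
	}

lru_add_drain_per_cpu() (or an equivalent static wrapper around
lru_add_drain()) would have to move to swap.c with it.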

-- Kame

2005-12-08 08:39:37

by Kamezawa Hiroyuki

Subject: Re: [Lhms-devel] Re: [PATCH] swap migration: Fix lru drain

Christoph Lameter wrote:
> On Wed, 7 Dec 2005, Andrew Morton wrote:
>
>
>>I'll change this to return 0 on success, or -ENOMEM. Bit more
>>conventional, no?
>
>
> Ok. That also allows the addition of other error conditions in the future.
> Need to revise isolate_lru_page to reflect that.
>
This patch was needed to get it to compile.

-- Kame
==
Index: hotremove-2.6.15-rc5-mm1/include/linux/workqueue.h
===================================================================
--- hotremove-2.6.15-rc5-mm1.orig/include/linux/workqueue.h 2005-12-08 17:32:18.000000000 +0900
+++ hotremove-2.6.15-rc5-mm1/include/linux/workqueue.h 2005-12-08 17:32:43.000000000 +0900
@@ -65,7 +65,7 @@
extern int FASTCALL(schedule_delayed_work(struct work_struct *work, unsigned long delay));

extern int schedule_delayed_work_on(int cpu, struct work_struct *work, unsigned long delay);
-extern void schedule_on_each_cpu(void (*func)(void *info), void *info);
+extern int schedule_on_each_cpu(void (*func)(void *info), void *info);
extern void flush_scheduled_work(void);
extern int current_is_keventd(void);
extern int keventd_up(void);

2005-12-08 08:46:35

by Kamezawa Hiroyuki

Subject: Re: [Lhms-devel] Re: [PATCH] swap migration: Fix lru drain

KAMEZAWA Hiroyuki wrote:
> Christoph Lameter wrote:
>
>> On Wed, 7 Dec 2005, Andrew Morton wrote:
>>
>>
>>> I'll change this to return 0 on success, or -ENOMEM. Bit more
>>> conventional, no?
>>
>>
>>
>> Ok. That also allows the addition of other error conditions in the
>> future.
>> Need to revise isolate_lru_page to reflect that.
>>
> This patch was needed to get it to compile.
Sorry, I missed Christoph's previous patch.
Please ignore the patch I sent.

-- Kame