2021-05-19 17:39:08

by Zhaoyang Huang

Subject: [PATCH v3] psi: fix race between psi_trigger_create and psimon

From: Zhaoyang Huang <[email protected]>

A race was detected between the new and old psimon kthreads (psimon_new and
psimon_old below), which causes a panic by accessing an invalid
psi_system->poll_wait->wait_queue_entry and
psi_system->poll_timer->entry->next. This patch removes the race window by
initialising poll_wait and poll_timer in group_init(), which is executed
only once when the group is set up.

psi_trigger_create            psimon_new             psimon_old
  init_waitqueue_head                                finish_wait
                                                     spin_lock(lock_old)
  spin_lock_init(lock_new)
  wake_up_process(psimon_new)

                              finish_wait
                              spin_lock(lock_new)
                              list_del               list_del

Signed-off-by: ziwei.dai <[email protected]>
Signed-off-by: ke.wang <[email protected]>
Signed-off-by: Zhaoyang Huang <[email protected]>
---
v2: change del_timer_sync() to del_timer() in psi_trigger_destroy()
v3: remove timer_setup() within psi_trigger_create();
    protect del_timer() by extending the trigger_lock critical section
---
kernel/sched/psi.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index cc25a3c..7b53217 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -182,6 +182,8 @@ struct psi_group psi_system = {

static void psi_avgs_work(struct work_struct *work);

+static void poll_timer_fn(struct timer_list *t);
+
static void group_init(struct psi_group *group)
{
int cpu;
@@ -201,6 +203,8 @@ static void group_init(struct psi_group *group)
memset(group->polling_total, 0, sizeof(group->polling_total));
group->polling_next_update = ULLONG_MAX;
group->polling_until = 0;
+ init_waitqueue_head(&group->poll_wait);
+ timer_setup(&group->poll_timer, poll_timer_fn, 0);
rcu_assign_pointer(group->poll_task, NULL);
}

@@ -1157,9 +1161,7 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
return ERR_CAST(task);
}
atomic_set(&group->poll_wakeup, 0);
- init_waitqueue_head(&group->poll_wait);
wake_up_process(task);
- timer_setup(&group->poll_timer, poll_timer_fn, 0);
rcu_assign_pointer(group->poll_task, task);
}

@@ -1214,16 +1216,8 @@ static void psi_trigger_destroy(struct kref *ref)
}
}

- mutex_unlock(&group->trigger_lock);
-
- /*
- * Wait for both *trigger_ptr from psi_trigger_replace and
- * poll_task RCUs to complete their read-side critical sections
- * before destroying the trigger and optionally the poll_task
- */
- synchronize_rcu();
/*
- * Destroy the kworker after releasing trigger_lock to prevent a
+ * Destroy psimon after releasing trigger_lock to prevent a
* deadlock while waiting for psi_poll_work to acquire trigger_lock
*/
if (task_to_destroy) {
@@ -1233,9 +1227,20 @@ static void psi_trigger_destroy(struct kref *ref)
* But it might have been already scheduled before
* that - deschedule it cleanly before destroying it.
*/
- del_timer_sync(&group->poll_timer);
+ del_timer(&group->poll_timer);
+ mutex_unlock(&group->trigger_lock);
kthread_stop(task_to_destroy);
+ } else {
+ mutex_unlock(&group->trigger_lock);
}
+
+ /*
+ * Wait for both *trigger_ptr from psi_trigger_replace and
+ * poll_task RCUs to complete their read-side critical sections
+ * before destroying the trigger and optionally the poll_task
+ */
+ synchronize_rcu();
+
kfree(t);
}

--
1.9.1



2021-05-19 18:25:34

by Suren Baghdasaryan

Subject: Re: [PATCH v3] psi: fix race between psi_trigger_create and psimon

On Tue, May 18, 2021 at 2:09 AM Huangzhaoyang <[email protected]> wrote:
>
> From: Zhaoyang Huang <[email protected]>
>
> A race was detected between the new and old psimon kthreads (psimon_new and
> psimon_old below), which causes a panic by accessing an invalid
> psi_system->poll_wait->wait_queue_entry and
> psi_system->poll_timer->entry->next. This patch removes the race window by
> initialising poll_wait and poll_timer in group_init(), which is executed
> only once when the group is set up.
>
> psi_trigger_create            psimon_new             psimon_old
>   init_waitqueue_head                                finish_wait
>                                                      spin_lock(lock_old)
>   spin_lock_init(lock_new)
>   wake_up_process(psimon_new)
>
>                               finish_wait
>                               spin_lock(lock_new)
>                               list_del               list_del
>

Please add this line in your description (see my comment in
https://lore.kernel.org/patchwork/patch/1430982/#1627215):

Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger
scheduling mechanism")

Generally, before posting a new version of your patch please follow these rules:
1. Read all the comments you already received and address all of them.
2. If a comment is unclear to you, or you disagree with it, reply to
that comment instead of posting a new version.

> Signed-off-by: ziwei.dai <[email protected]>
> Signed-off-by: ke.wang <[email protected]>
> Signed-off-by: Zhaoyang Huang <[email protected]>
> ---
> v2: change del_timer_sync() to del_timer() in psi_trigger_destroy()
> v3: remove timer_setup() within psi_trigger_create();
>     protect del_timer() by extending the trigger_lock critical section
> ---
> ---
> kernel/sched/psi.c | 29 +++++++++++++++++------------
> 1 file changed, 17 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
> index cc25a3c..7b53217 100644
> --- a/kernel/sched/psi.c
> +++ b/kernel/sched/psi.c
> @@ -182,6 +182,8 @@ struct psi_group psi_system = {
>
> static void psi_avgs_work(struct work_struct *work);
>
> +static void poll_timer_fn(struct timer_list *t);
> +
> static void group_init(struct psi_group *group)
> {
> int cpu;
> @@ -201,6 +203,8 @@ static void group_init(struct psi_group *group)
> memset(group->polling_total, 0, sizeof(group->polling_total));
> group->polling_next_update = ULLONG_MAX;
> group->polling_until = 0;
> + init_waitqueue_head(&group->poll_wait);
> + timer_setup(&group->poll_timer, poll_timer_fn, 0);
> rcu_assign_pointer(group->poll_task, NULL);
> }
>
> @@ -1157,9 +1161,7 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
> return ERR_CAST(task);
> }
> atomic_set(&group->poll_wakeup, 0);
> - init_waitqueue_head(&group->poll_wait);
> wake_up_process(task);
> - timer_setup(&group->poll_timer, poll_timer_fn, 0);
> rcu_assign_pointer(group->poll_task, task);
> }
>
> @@ -1214,16 +1216,8 @@ static void psi_trigger_destroy(struct kref *ref)
> }
> }
>
> - mutex_unlock(&group->trigger_lock);
> -
> - /*
> - * Wait for both *trigger_ptr from psi_trigger_replace and
> - * poll_task RCUs to complete their read-side critical sections
> - * before destroying the trigger and optionally the poll_task
> - */
> - synchronize_rcu();
> /*
> - * Destroy the kworker after releasing trigger_lock to prevent a
> + * Destroy psimon after releasing trigger_lock to prevent a
> * deadlock while waiting for psi_poll_work to acquire trigger_lock
> */
> if (task_to_destroy) {
> @@ -1233,9 +1227,20 @@ static void psi_trigger_destroy(struct kref *ref)
> * But it might have been already scheduled before
> * that - deschedule it cleanly before destroying it.
> */
> - del_timer_sync(&group->poll_timer);
> + del_timer(&group->poll_timer);

Why are you refactoring the code instead of just moving del_timer()
into the trigger_lock protected area like I suggested in
https://lore.kernel.org/patchwork/patch/1429498/#1627196 ? Please
avoid unnecessary complexity and churn.

> + mutex_unlock(&group->trigger_lock);
> kthread_stop(task_to_destroy);
> + } else {
> + mutex_unlock(&group->trigger_lock);
> }
> +
> + /*
> + * Wait for both *trigger_ptr from psi_trigger_replace and
> + * poll_task RCUs to complete their read-side critical sections
> + * before destroying the trigger and optionally the poll_task
> + */
> + synchronize_rcu();
> +
> kfree(t);
> }
>
> --
> 1.9.1
>