2022-07-08 21:37:03

by Schspa Shi

Subject: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks

Commit 95158a89dd50 ("sched,rt: Use the full cpumask for balancing")
allowed find_lock_lowest_rq() to pick a task with migration disabled.
The intention of that commit was to push the current running task on
this CPU away.

However, there is a race scenario that allows a migration-disabled task
to be migrated to another CPU.

When there is an RT task with a higher priority, the RT sched class
tries to migrate the higher-priority task to the lowest-priority rq via
push_rt_tasks(), and this is where the bug happens.

With the system running on PREEMPT_RT, rt_spin_lock() disables
migration, which makes the problem easier to reproduce.
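
For reference, this is roughly how the PREEMPT_RT spinlock slowpath pins
the task (a simplified sketch based on kernel/locking/spinlock_rt.c;
lockdep annotations and details omitted):

    void __sched rt_spin_lock(spinlock_t *lock)
    {
            rtlock_lock(&lock->lock);  /* may block on contention */
            rcu_read_lock();
            migrate_disable();         /* task is now pinned to this CPU */
    }

Because every converted-spinlock holder runs with migration disabled,
the race below becomes much easier to hit on PREEMPT_RT.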

I have seen this crash on PREEMPT_RT; the logs show a race when trying
to migrate a higher-priority task to the lowest-priority rq.

Please refer to the following scenario.

CPU0                                              CPU1
-------------------------------------------------------------------
push_rt_task
  check is_migration_disabled(next_task)
                                                  task not running and
                                                  migration_disabled == 0
  find_lock_lowest_rq(next_task, rq);
    _double_lock_balance(this_rq, busiest);
      raw_spin_rq_unlock(this_rq);
      double_rq_lock(this_rq, busiest);
        <<wait for busiest rq>>
                                                  <wakeup>
                                                  task becomes running
                                                  migrate_disable();
                                                  <context out>
  deactivate_task(rq, next_task, 0);
  set_task_cpu(next_task, lowest_rq->cpu);
    WARN_ON_ONCE(is_migration_disabled(p));
  ---------OOPS-------------
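
The window opens inside _double_lock_balance(), which has to drop the
local rq lock to take both locks in order (a simplified sketch of its
shape, abridged from kernel/sched/sched.h):

    static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest)
    {
            if (__rq_lockp(this_rq) == __rq_lockp(busiest))
                    return 0;

            if (likely(raw_spin_rq_trylock(busiest)))
                    return 0;

            if (rq_order_less(this_rq, busiest)) {
                    raw_spin_rq_lock_nested(busiest, SINGLE_DEPTH_NESTING);
                    return 0;
            }

            /* Lock ordering forces us to drop this_rq->lock first ...   */
            raw_spin_rq_unlock(this_rq);
            /* ... and in this window next_task can be woken up, start   */
            /* running, and call migrate_disable() before we relock.     */
            double_rq_lock(this_rq, busiest);

            return 1;
    }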

Crash logs are as follows:
[123671.996430] WARNING: CPU: 2 PID: 13470 at kernel/sched/core.c:2485
set_task_cpu+0x8c/0x108
[123671.996800] pstate: 20400009 (nzCv daif +PAN -UAO -TCO BTYPE=--)
[123671.996811] pc : set_task_cpu+0x8c/0x108
[123671.996820] lr : set_task_cpu+0x7c/0x108
[123671.996828] sp : ffff80001268bd30
[123671.996832] pmr_save: 00000060
[123671.996835] x29: ffff80001268bd30 x28: ffff0001a3d68e80
[123671.996844] x27: ffff80001225f4a8 x26: ffff800010ab62cb
[123671.996854] x25: ffff80026d95e000 x24: 0000000000000005
[123671.996864] x23: ffff00019746c1b0 x22: 0000000000000000
[123671.996873] x21: ffff00027ee33a80 x20: 0000000000000000
[123671.996882] x19: ffff00019746ba00 x18: 0000000000000000
[123671.996890] x17: 0000000000000000 x16: 0000000000000000
[123671.996899] x15: 000000000000000a x14: 000000000000349e
[123671.996908] x13: ffff800012f4503d x12: 0000000000000001
[123671.996916] x11: 0000000000000000 x10: 0000000000000000
[123671.996925] x9 : 00000000000c0000 x8 : ffff00027ee58700
[123671.996933] x7 : ffff00027ee8da80 x6 : ffff00027ee8e580
[123671.996942] x5 : ffff00027ee8dcc0 x4 : 0000000000000005
[123671.996951] x3 : ffff00027ee8e338 x2 : 0000000000000000
[123671.996959] x1 : 00000000000000ff x0 : 0000000000000002
[123671.996969] Call trace:
[123671.996975] set_task_cpu+0x8c/0x108
[123671.996984] push_rt_task.part.0+0x144/0x184
[123671.996995] push_rt_tasks+0x28/0x3c
[123671.997002] task_woken_rt+0x58/0x68
[123671.997009] ttwu_do_wakeup+0x5c/0xd0
[123671.997019] ttwu_do_activate+0xc0/0xd4
[123671.997028] try_to_wake_up+0x244/0x288
[123671.997036] wake_up_process+0x18/0x24
[123671.997045] __irq_wake_thread+0x64/0x80
[123671.997056] __handle_irq_event_percpu+0x110/0x124
[123671.997064] handle_irq_event_percpu+0x50/0xac
[123671.997072] handle_irq_event+0x84/0xfc

To fix it, we need to check the migration_disabled flag again after the
rq locks have been reacquired, to avoid a bad migration.
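
The re-check simply reads the per-task counter that migrate_disable()
bumps; for reference, a simplified sketch of the two helpers (assumed
shape, abbreviated from kernel/sched/sched.h and kernel/sched/core.c):

    /* True while the task must not leave its current CPU (SMP only). */
    static inline bool is_migration_disabled(struct task_struct *p)
    {
            return p->migration_disabled;
    }

    void migrate_disable(void)
    {
            struct task_struct *p = current;

            if (p->migration_disabled) {
                    p->migration_disabled++;   /* already pinned, just nest */
                    return;
            }

            preempt_disable();
            this_rq()->nr_pinned++;            /* rq now has a pinned task */
            p->migration_disabled = 1;
            preempt_enable();
    }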

Fixes: 95158a89dd50 ("sched,rt: Use the full cpumask for balancing")

Signed-off-by: Schspa Shi <[email protected]>

--

Changelog:
v1 -> v2:
- Modify the commit message to add the Fixes tag information.
- Retry pushing the current running task on this CPU away, instead
  of doing nothing for the migrate-disabled task.
v2 -> v3:
- Move the migration-disabled check to the correct position.
v3 -> v4:
- Check migration disabled in find_lock_lowest_rq() to avoid an
  unnecessary check when the task's rq has not been released, as
  Steven advised.
---
kernel/sched/deadline.c | 1 +
kernel/sched/rt.c       | 3 +++
2 files changed, 4 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index b5152961b7432..cb3b886a081c3 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2238,6 +2238,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
!cpumask_test_cpu(later_rq->cpu, &task->cpus_mask) ||
task_running(rq, task) ||
!dl_task(task) ||
+ is_migration_disabled(task) ||
!task_on_rq_queued(task))) {
double_unlock_balance(rq, later_rq);
later_rq = NULL;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8c9ed96648409..0e0bc9aeaa394 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1998,11 +1998,14 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
* the mean time, task could have
* migrated already or had its affinity changed.
* Also make sure that it wasn't scheduled on its rq.
+ * It is possible the task has running for a while,
+ * And we check task migration disable flag here.
*/
if (unlikely(task_rq(task) != rq ||
!cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
task_running(rq, task) ||
!rt_task(task) ||
+ is_migration_disabled(task) ||
!task_on_rq_queued(task))) {

double_unlock_balance(rq, lowest_rq);
--
2.37.0


2022-07-08 21:40:39

by Steven Rostedt

Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks

On Sat, 9 Jul 2022 05:17:54 +0800
Schspa Shi <[email protected]> wrote:

> +++ b/kernel/sched/rt.c
> @@ -1998,11 +1998,14 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
> * the mean time, task could have
> * migrated already or had its affinity changed.
> * Also make sure that it wasn't scheduled on its rq.
> + * It is possible the task has running for a while,

I don't understand the "running for a while" part. That doesn't make sense.

The only way this can happen is that it was scheduled, set
"migrate_disabled" and then got preempted where it's no longer on the run
queue.

-- Steve


> + * And we check task migration disable flag here.
> */
> if (unlikely(task_rq(task) != rq ||
> !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
> task_running(rq, task) ||
> !rt_task(task) ||
> + is_migration_disabled(task) ||
> !task_on_rq_queued(task))) {
>
> double_unlock_balance(rq, lowest_rq);

2022-07-08 21:59:19

by Schspa Shi

Subject: [PATCH v4 2/2] sched/rt: Try to push the current task when the target task has migration disabled

When the task to be pushed has migration disabled, retry pushing the
current running task on this CPU away, instead of doing nothing for
this migrate-disabled task.
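
For context, the new "goto retry" below falls back into the existing
migration-disabled branch near the top of push_rt_task(), which pushes
the current task away via the stopper instead (a heavily abridged
sketch of that existing branch; assumed shape, error paths trimmed):

    if (is_migration_disabled(next_task)) {
            struct task_struct *push_task;
            int cpu;

            if (!pull || rq->push_busy)
                    return 0;

            cpu = find_lowest_rq(rq->curr);
            if (cpu == -1 || cpu == rq->cpu)
                    return 0;

            /* next_task cannot be moved; push the current task away. */
            push_task = get_push_task(rq);
            if (push_task) {
                    raw_spin_rq_unlock(rq);
                    stop_one_cpu_nowait(rq->cpu, push_cpu_stop,
                                        push_task, &rq->push_work);
                    raw_spin_rq_lock(rq);
            }

            return 0;
    }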

Signed-off-by: Schspa Shi <[email protected]>
---
kernel/sched/core.c | 6 +++++-
kernel/sched/rt.c   | 6 ++++++
2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index da0bf6fe9ecdc..0b1fefd97d874 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2509,8 +2509,12 @@ int push_cpu_stop(void *arg)
if (p->sched_class->find_lock_rq)
lowest_rq = p->sched_class->find_lock_rq(p, rq);

- if (!lowest_rq)
+ if (!lowest_rq) {
+ if (unlikely(is_migration_disabled(p)))
+ p->migration_flags |= MDF_PUSH;
+
goto out_unlock;
+ }

// XXX validate p is still the highest prio task
if (task_rq(p) == rq) {
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 0e0bc9aeaa394..9a91adf486e34 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2135,6 +2135,12 @@ static int push_rt_task(struct rq *rq, bool pull)
*/
task = pick_next_pushable_task(rq);
if (task == next_task) {
+ /*
+ * If next task has now disabled migrating, see if we
+ * can push the current task.
+ */
+ if (unlikely(is_migration_disabled(task)))
+ goto retry;
/*
* The task hasn't migrated, and is still the next
* eligible task, but we failed to find a run-queue
--
2.37.0

2022-07-08 22:03:00

by Schspa Shi

Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks


Steven Rostedt <[email protected]> writes:

> On Sat, 9 Jul 2022 05:17:54 +0800
> Schspa Shi <[email protected]> wrote:
>
>> +++ b/kernel/sched/rt.c
>> @@ -1998,11 +1998,14 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>> * the mean time, task could have
>> * migrated already or had its affinity changed.
>> * Also make sure that it wasn't scheduled on its rq.
>> + * It is possible the task has running for a while,
>
> I don't understand the "running for a while" part. That doesn't make sense.
>

When I say "run for a while", I mean that as long as the task has had
a chance to run, we should check the migrate-disabled flag again.

> The only way this can happen is that it was scheduled, set
> "migrate_disabled" and then got preempted where it's no longer on the run
> queue.

Yes, it is the only case.

> -- Steve
>
>
>> + * And we check task migration disable flag here.
>> */
>> if (unlikely(task_rq(task) != rq ||
>> !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_mask) ||
>> task_running(rq, task) ||
>> !rt_task(task) ||
>> + is_migration_disabled(task) ||
>> !task_on_rq_queued(task))) {
>>
>> double_unlock_balance(rq, lowest_rq);

--
BRs
Schspa Shi

2022-07-11 20:15:34

by Steven Rostedt

Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks

On Sat, 09 Jul 2022 05:32:25 +0800
Schspa Shi <[email protected]> wrote:

> >> +++ b/kernel/sched/rt.c
> >> @@ -1998,11 +1998,14 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
> >> * the mean time, task could have
> >> * migrated already or had its affinity changed.
> >> * Also make sure that it wasn't scheduled on its rq.
> >> + * It is possible the task has running for a while,
> >
> > I don't understand the "running for a while" part. That doesn't make sense.
> >
>
> When I say "run for a while", I mean that as long as the task has had
> a chance to run, we should check the migrate-disabled flag again.
>
> > The only way this can happen is that it was scheduled, set
> > "migrate_disabled" and then got preempted where it's no longer on the run
> > queue.
>
> Yes, it is the only case.

Can we then change the comment, as the "running for a while" is not clear
to what the issue is, and honestly, sounds misleading.

-- Steve

2022-07-12 01:33:13

by Schspa Shi

Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks


Steven Rostedt <[email protected]> writes:

> On Sat, 09 Jul 2022 05:32:25 +0800
> Schspa Shi <[email protected]> wrote:
>
>> >> +++ b/kernel/sched/rt.c
>> >> @@ -1998,11 +1998,14 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>> >> * the mean time, task could have
>> >> * migrated already or had its affinity changed.
>> >> * Also make sure that it wasn't scheduled on its rq.
>> >> + * It is possible the task has running for a while,
>> >
>> > I don't understand the "running for a while" part. That doesn't make sense.
>> >
>>
>> When I say "run for a while", I mean that as long as the task has had
>> a chance to run, we should check the migrate-disabled flag again.
>>
>> > The only way this can happen is that it was scheduled, set
>> > "migrate_disabled" and then got preempted where it's no longer on the run
>> > queue.
>>
>> Yes, it is the only case.
>
> Can we then change the comment, as the "running for a while" is not clear
> to what the issue is, and honestly, sounds misleading.
>
> -- Steve

How about changing this to

/*
* We had to unlock the run queue. In
* the mean time, task could have
* migrated already or had its affinity changed.
* Also make sure that it wasn't scheduled on its rq.
* It is possible the task was scheduled, set
* "migrate_disabled" and then got preempted, And we
* check task migration disable flag here too.
*/

--
BRs
Schspa Shi

2022-07-12 01:36:42

by Steven Rostedt

Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks

On Tue, 12 Jul 2022 08:53:56 +0800
Schspa Shi <[email protected]> wrote:

> How about changing this to
>
> /*
> * We had to unlock the run queue. In
> * the mean time, task could have
> * migrated already or had its affinity changed.
> * Also make sure that it wasn't scheduled on its rq.
> * It is possible the task was scheduled, set
> * "migrate_disabled" and then got preempted, And we
> * check task migration disable flag here too.
> */

That's better. But of course it needs better formatting ;-)

-- Steve

2022-07-12 01:39:40

by Schspa Shi

Subject: Re: [PATCH v4 1/2] sched/rt: fix bad task migration for rt tasks


Steven Rostedt <[email protected]> writes:

> On Tue, 12 Jul 2022 08:53:56 +0800
> Schspa Shi <[email protected]> wrote:
>
>> How about changing this to
>>
>> /*
>> * We had to unlock the run queue. In
>> * the mean time, task could have
>> * migrated already or had its affinity changed.
>> * Also make sure that it wasn't scheduled on its rq.
>> * It is possible the task was scheduled, set
>> * "migrate_disabled" and then got preempted, And we
>> * check task migration disable flag here too.
>> */
>
> That's better. But of course it needs better formatting ;-)
>
> -- Steve

I have uploaded a v5 patch with this comment change; please review it. ;-)
Link: https://lore.kernel.org/all/[email protected]/T/#t

--
BRs
Schspa Shi