2022-08-27 01:36:41

by Shang XiaoJing

Subject: [PATCH -next] sched/deadline: Save processing meaningless ops in dl_task_offline_migration

The task's bandwidth is subtracted from the old root domain and added to
the new one even when find_lock_later_rq() returns a rq that belongs to
the same root domain as the old rq. Skip the bandwidth transfer when the
root domain is unchanged.

Signed-off-by: Shang XiaoJing <[email protected]>
---
kernel/sched/deadline.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3bf4b12ec5b7..58ca9aaa9c44 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -714,20 +714,22 @@ static struct rq *dl_task_offline_migration(struct rq *rq, struct task_struct *p
add_rq_bw(&p->dl, &later_rq->dl);
}

- /*
- * And we finally need to fixup root_domain(s) bandwidth accounting,
- * since p is still hanging out in the old (now moved to default) root
- * domain.
- */
- dl_b = &rq->rd->dl_bw;
- raw_spin_lock(&dl_b->lock);
- __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
- raw_spin_unlock(&dl_b->lock);
+ if (rq->rd != later_rq->rd) {
+ /*
+ * And we finally need to fixup root_domain(s) bandwidth accounting,
+ * since p is still hanging out in the old (now moved to default) root
+ * domain.
+ */
+ dl_b = &rq->rd->dl_bw;
+ raw_spin_lock(&dl_b->lock);
+ __dl_sub(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
+ raw_spin_unlock(&dl_b->lock);

- dl_b = &later_rq->rd->dl_bw;
- raw_spin_lock(&dl_b->lock);
- __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
- raw_spin_unlock(&dl_b->lock);
+ dl_b = &later_rq->rd->dl_bw;
+ raw_spin_lock(&dl_b->lock);
+ __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(later_rq->rd->span));
+ raw_spin_unlock(&dl_b->lock);
+ }

set_task_cpu(p, later_rq->cpu);
double_unlock_balance(later_rq, rq);
--
2.17.1
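
[Editor's note: for readers unfamiliar with the root-domain layout, each
runqueue carries a pointer to its root_domain, and runqueues attached to the
same root domain hold the same pointer value. The guard in the patch must
therefore compare the rd pointers themselves. Below is a minimal standalone
sketch of that pattern; the struct definitions are simplified stand-ins, not
the kernel's real root_domain and rq.]

#include <stdio.h>

/*
 * Simplified stand-ins for the kernel's struct root_domain and struct rq,
 * illustrative only.
 */
struct root_domain { int id; };
struct rq { struct root_domain *rd; };

int main(void)
{
	struct root_domain rd0 = { .id = 0 };
	struct rq cpu0 = { .rd = &rd0 };
	struct rq cpu1 = { .rd = &rd0 };	/* attached to the same root domain */

	/*
	 * Comparing the pointer values detects a shared root domain, so the
	 * bandwidth transfer can be skipped.
	 */
	if (cpu0.rd != cpu1.rd)
		printf("root domain changed: move the task's bandwidth\n");
	else
		printf("same root domain: skip the transfer\n");

	/*
	 * By contrast, &cpu0.rd != &cpu1.rd is always true for two distinct
	 * runqueue structures: it compares the addresses of the rd members
	 * themselves, not the domains they point to.
	 */
	return 0;
}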


Subject: Re: [PATCH -next] sched/deadline: Save processing meaningless ops in dl_task_offline_migration

On 8/27/22 04:04, Shang XiaoJing wrote:
> The task's bandwidth is subtracted from the old root domain and added to
> the new one even when find_lock_later_rq() returns a rq that belongs to
> the same root domain as the old rq. Skip the bandwidth transfer when the
> root domain is unchanged.

This subject is not good. Please change it to a "meaningful" one, describing the
change, not its consequence.

-- Daniel