2022-04-25 14:48:38

by 王擎

Subject: [PATCH V2] sched: Introduce util_est boost

From: Wang Qing <[email protected]>

Util_avg being greater than util_est indicates a sudden increase in the
task's utilization; in that case, boost the returned estimate so that
load balancing reacts faster.

Signed-off-by: Wang Qing <[email protected]>
---
v2:
- modify the return value if UTIL_EST_BOOST is false
---
kernel/sched/fair.c | 8 +++++++-
kernel/sched/features.h | 1 +
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 265bf7a75a37..2fcda7972057 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4036,7 +4036,13 @@ static inline unsigned long _task_util_est(struct task_struct *p)

static inline unsigned long task_util_est(struct task_struct *p)
{
- return max(task_util(p), _task_util_est(p));
+ unsigned long util_avg = task_util(p);
+ unsigned long util_est = _task_util_est(p);
+
+ if (sched_feat(UTIL_EST_BOOST) && util_est && util_avg > util_est)
+		return util_avg + (util_avg - util_est) / 2;
+ else
+ return max(util_avg, util_est);
}

#ifdef CONFIG_UCLAMP_TASK
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 1cf435bbcd9c..c73a898e7e38 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -95,6 +95,7 @@ SCHED_FEAT(WA_BIAS, true)
*/
SCHED_FEAT(UTIL_EST, true)
SCHED_FEAT(UTIL_EST_FASTUP, true)
+SCHED_FEAT(UTIL_EST_BOOST, false)

SCHED_FEAT(LATENCY_WARN, false)

--
2.27.0.windows.1


2022-05-08 14:01:06

by 王擎

Subject: [PATCH V2] sched: Introduce util_est boost


This patch implements the simple part of
"Improving responsiveness of interactive CFS tasks using util_est" by
Vincent.

I would like to ask for your comments.

Thanks,
Qing


>From: Wang Qing <[email protected]>
>
>Util_avg being greater than util_est indicates a sudden increase in the
>task's utilization; in that case, boost the returned estimate so that
>load balancing reacts faster.
>
>Signed-off-by: Wang Qing <[email protected]>
>---
>v2:
>- modify the return value if UTIL_EST_BOOST is false
>---
> kernel/sched/fair.c     | 8 +++++++-
> kernel/sched/features.h | 1 +
> 2 files changed, 8 insertions(+), 1 deletion(-)
>
>diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>index 265bf7a75a37..2fcda7972057 100644
>--- a/kernel/sched/fair.c
>+++ b/kernel/sched/fair.c
>@@ -4036,7 +4036,13 @@ static inline unsigned long _task_util_est(struct task_struct *p)

> static inline unsigned long task_util_est(struct task_struct *p)
> {
>-       return max(task_util(p), _task_util_est(p));
>+       unsigned long util_avg = task_util(p);
>+       unsigned long util_est = _task_util_est(p);
>+
>+       if (sched_feat(UTIL_EST_BOOST) && util_est && util_avg > util_est)
>+               return util_avg + (util_avg - util_est) / 2;
>+       else
>+               return max(util_avg, util_est);
> }

> #ifdef CONFIG_UCLAMP_TASK
>diff --git a/kernel/sched/features.h b/kernel/sched/features.h
>index 1cf435bbcd9c..c73a898e7e38 100644
>--- a/kernel/sched/features.h
>+++ b/kernel/sched/features.h
>@@ -95,6 +95,7 @@ SCHED_FEAT(WA_BIAS, true)
>  */
> SCHED_FEAT(UTIL_EST, true)
> SCHED_FEAT(UTIL_EST_FASTUP, true)
>+SCHED_FEAT(UTIL_EST_BOOST, false)

> SCHED_FEAT(LATENCY_WARN, false)

>--
>2.7.4