From: Zhaoyang Huang <[email protected]>
Since RT, DL and IRQ time can be deemed lost time from a CFS task's
point of view, some timing measurements want to know approximately how
that lost time is distributed, using the utilization accounting values
(nivcsw is not always enough). However, cpu_util_cfs() is not visible
outside kernel/sched. This commit makes it visible.
e.g.:
Effective part of A = Total_time * cpu_util_cfs / sched_cpu_util
Task's Timing value A
Timing start
|
|
preempted by RT, DL or IRQ
|\
| This period is involuntary CPU give-up; we need to know how long it lasts
|/
sched in again
|
|
|
Timing end
Signed-off-by: Zhaoyang Huang <[email protected]>
---
include/linux/sched.h | 1 +
kernel/sched/sched.h | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac385f7..56953626526f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2318,6 +2318,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
/* Returns effective CPU energy utilization, as seen by the scheduler */
unsigned long sched_cpu_util(int cpu);
+unsigned long cpu_util_cfs(int cpu);
#endif /* CONFIG_SMP */
#ifdef CONFIG_RSEQ
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 04846272409c..46110409e0f3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3027,7 +3027,6 @@ static inline unsigned long cpu_util_dl(struct rq *rq)
}
-extern unsigned long cpu_util_cfs(int cpu);
extern unsigned long cpu_util_cfs_boost(int cpu);
static inline unsigned long cpu_util_rt(struct rq *rq)
--
2.25.1
On Sun, 11 Feb 2024 at 08:50, zhaoyang.huang <[email protected]> wrote:
>
> From: Zhaoyang Huang <[email protected]>
>
> Since RT, DL and IRQ time can be deemed lost time from a CFS task's
> point of view, some timing measurements want to know approximately how
> that lost time is distributed, using the utilization accounting values
> (nivcsw is not always enough). However, cpu_util_cfs() is not visible
> outside kernel/sched. This commit makes it visible.
We expect a user of this interface to be sent as part of the patchset.
>
> e.g.:
> Effective part of A = Total_time * cpu_util_cfs / sched_cpu_util
>
> Task's Timing value A
> Timing start
> |
> |
> preempted by RT, DL or IRQ
> |\
> | This period is involuntary CPU give-up; we need to know how long it lasts
> |/
> sched in again
> |
> |
> |
> Timing end
You have to use the *_avg values with care if you want to get such
figures, because they reflect not only the last task activation but an
average over the past dozens of milliseconds, so you can easily get
wrong figures.
>
> Signed-off-by: Zhaoyang Huang <[email protected]>
> ---
> include/linux/sched.h | 1 +
> kernel/sched/sched.h | 1 -
> 2 files changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 77f01ac385f7..56953626526f 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2318,6 +2318,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
>
> /* Returns effective CPU energy utilization, as seen by the scheduler */
> unsigned long sched_cpu_util(int cpu);
> +unsigned long cpu_util_cfs(int cpu);
> #endif /* CONFIG_SMP */
>
> #ifdef CONFIG_RSEQ
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 04846272409c..46110409e0f3 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -3027,7 +3027,6 @@ static inline unsigned long cpu_util_dl(struct rq *rq)
> }
>
>
> -extern unsigned long cpu_util_cfs(int cpu);
> extern unsigned long cpu_util_cfs_boost(int cpu);
>
> static inline unsigned long cpu_util_rt(struct rq *rq)
> --
> 2.25.1
>