From: Vincent Wang <[email protected]>
When a task that is in the in_iowait state is enqueued, cpufreq_update_util()
will be invoked with the SCHED_CPUFREQ_IOWAIT flag. In this case, the values of
util and max, which are parameters used in get_next_freq(), will be CPU
frequencies instead of CPU utilization or capacity.

On some 32-bit architectures, unsigned long is 32 bits wide. When calculating
freq, the following expression can overflow:

    freq = (freq + (freq >> 2)) * util / max;

This patch avoids the overflow by calculating util / max first, whether the
values are frequencies or utilization.
Signed-off-by: Vincent Wang <[email protected]>
Signed-off-by: Chunyan Zhang <[email protected]>
---
kernel/sched/cpufreq_schedutil.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3fffad3..7c372db1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -166,8 +166,9 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
+	unsigned int ratio = util * 100 / max;
 
-	freq = (freq + (freq >> 2)) * util / max;
+	freq = (freq + (freq >> 2)) * ratio / 100;
if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
return sg_policy->next_freq;
--
2.7.4
Hi,
On Monday 24 Dec 2018 at 14:59:07 (+0800), Chunyan Zhang wrote:
> From: Vincent Wang <[email protected]>
>
> When a task that is in the in_iowait state is enqueued, cpufreq_update_util()
> will be invoked with the SCHED_CPUFREQ_IOWAIT flag. In this case, the values of
> util and max, which are parameters used in get_next_freq(), will be CPU
> frequencies instead of CPU utilization or capacity.
>
> On some 32-bit architectures, unsigned long is 32 bits wide. When
> calculating freq, the following expression can overflow:
>
> freq = (freq + (freq >> 2)) * util / max;
This has moved to a different file recently -- see 938e5e4b0d15
("sched/cpufreq: Prepare schedutil for Energy Aware Scheduling") so you
probably want to rebase this patch :-)
>
> This patch avoids the overflow by calculating util / max first, whether the
> values are frequencies or utilization.
>
> Signed-off-by: Vincent Wang <[email protected]>
> Signed-off-by: Chunyan Zhang <[email protected]>
> ---
> kernel/sched/cpufreq_schedutil.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 3fffad3..7c372db1 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -166,8 +166,9 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>  	struct cpufreq_policy *policy = sg_policy->policy;
>  	unsigned int freq = arch_scale_freq_invariant() ?
>  				policy->cpuinfo.max_freq : policy->cur;
> +	unsigned int ratio = util * 100 / max;
>  
> -	freq = (freq + (freq >> 2)) * util / max;
> +	freq = (freq + (freq >> 2)) * ratio / 100;
Maybe shift by SCHED_CAPACITY_SHIFT instead of doing divisions? Or you
could also just keep the original structure and use proper u64 types
instead of adding extra steps that are useless on 64-bit targets.
Thanks,
Quentin