Since the recent remote cpufreq callback work, it's possible that a cpufreq
update is triggered from a remote CPU. For single-CPU policies, however, the
current code uses the local CPU's tick idle_calls counter when trying to
determine whether the remote sg_cpu entered idle or is busy. This is incorrect.
To remedy this, compare against the nohz tick idle_calls counter of the remote
CPU.
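For illustration, here is a minimal standalone sketch of the before/after
behaviour. It is hand-written for this changelog, not the kernel code itself:
per_cpu_idle_calls[] stands in for the per-CPU tick_sched::idle_calls state.

	/* Stand-in for per-CPU tick_sched::idle_calls. */
	static unsigned long per_cpu_idle_calls[NR_CPUS];

	/* Before: samples whichever CPU happens to run the governor
	 * callback, which may not be the CPU that sg_cpu describes. */
	static bool sugov_cpu_is_busy_old(struct sugov_cpu *sg_cpu)
	{
		unsigned long idle_calls = per_cpu_idle_calls[smp_processor_id()];
		bool busy = idle_calls == sg_cpu->saved_idle_calls;

		sg_cpu->saved_idle_calls = idle_calls;
		return busy;
	}

	/* After: samples the CPU that sg_cpu describes, so a remote
	 * update no longer misreads another CPU's idle activity. */
	static bool sugov_cpu_is_busy_new(struct sugov_cpu *sg_cpu)
	{
		unsigned long idle_calls = per_cpu_idle_calls[sg_cpu->cpu];
		bool busy = idle_calls == sg_cpu->saved_idle_calls;

		sg_cpu->saved_idle_calls = idle_calls;
		return busy;
	}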
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Joel Fernandes <[email protected]>
---
include/linux/tick.h | 1 +
kernel/sched/cpufreq_schedutil.c | 2 +-
kernel/time/tick-sched.c | 13 +++++++++++++
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/include/linux/tick.h b/include/linux/tick.h
index f442d1a42025..7cc35921218e 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -119,6 +119,7 @@ extern void tick_nohz_idle_exit(void);
extern void tick_nohz_irq_exit(void);
extern ktime_t tick_nohz_get_sleep_length(void);
extern unsigned long tick_nohz_get_idle_calls(void);
+extern unsigned long tick_nohz_get_idle_calls_cpu(int cpu);
extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
#else /* !CONFIG_NO_HZ_COMMON */
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 2f52ec0f1539..d6717a3331a1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -244,7 +244,7 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, unsigned long *util,
#ifdef CONFIG_NO_HZ_COMMON
static bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu)
{
- unsigned long idle_calls = tick_nohz_get_idle_calls();
+ unsigned long idle_calls = tick_nohz_get_idle_calls_cpu(sg_cpu->cpu);
bool ret = idle_calls == sg_cpu->saved_idle_calls;
sg_cpu->saved_idle_calls = idle_calls;
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 99578f06c8d4..77555faf6fbc 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -985,6 +985,19 @@ ktime_t tick_nohz_get_sleep_length(void)
return ts->sleep_length;
}
+/**
+ * tick_nohz_get_idle_calls_cpu - return the current idle calls counter value
+ * @cpu: the CPU for which the idle calls counter is queried
+ *
+ * Called from the schedutil frequency scaling governor in scheduler context.
+ */
+unsigned long tick_nohz_get_idle_calls_cpu(int cpu)
+{
+ struct tick_sched *ts = tick_get_tick_sched(cpu);
+
+ return ts->idle_calls;
+}
+
/**
* tick_nohz_get_idle_calls - return the current idle calls counter value
*
--
2.15.1.504.g5279b80103-goog
Since the remote cpufreq callback work, the cpufreq_update_util() call can happen
from remote CPUs. The comment about local CPUs is thus obsolete. Update it
accordingly.
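For context, a simplified sketch of the function around the comment (the full
comment block is elided here): since the remote-callback work, the hook is
invoked for rq_of(cfs_rq) even when that rq belongs to another CPU, so the
"local CPU only" caveat no longer holds.

	static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
	{
		if (&rq_of(cfs_rq)->cfs == cfs_rq) {
			/* Update only for the root cfs_rq; may now be
			 * reached from a CPU other than cpu_of(rq). */
			cpufreq_update_util(rq_of(cfs_rq), 0);
		}
	}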
Reviewed-by: Viresh Kumar <[email protected]>
Signed-off-by: Joel Fernandes <[email protected]>
---
kernel/sched/fair.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2fe3aa853e4d..1b10821d8380 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3020,9 +3020,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
/*
* There are a few boundary cases this might miss but it should
* get called often enough that that should (hopefully) not be
- * a real problem -- added to that it only calls on the local
- * CPU, so if we enqueue remotely we'll miss an update, but
- * the next tick/schedule should update.
+ * a real problem.
*
* It will not get called when we go idle, because the idle
* thread is a different class (!fair), nor will the utilization
--
2.15.1.504.g5279b80103-goog
find_idlest_group_cpu() goes through the CPUs of a group previously selected by
find_idlest_group(). find_idlest_group() returns NULL if the local group is the
selected one, so find_idlest_group_cpu() is never executed for the group to
which 'cpu' belongs. We're therefore always guaranteed to call
find_idlest_group_cpu() with a group to which 'cpu' is non-local. This makes
one of the conditions in find_idlest_group_cpu() impossible, so we can get rid
of it.
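To make the invariant concrete, here is a condensed sketch of the caller's
loop, simplified from find_idlest_cpu() (not verbatim kernel code; the descent
details are elided):

	while (sd) {
		struct sched_group *group = find_idlest_group(sd, p, cpu, sd_flag);

		if (!group) {
			/* The local group won: descend a domain level;
			 * find_idlest_group_cpu() is never called for it. */
			sd = sd->child;
			continue;
		}

		/* Reached only with a non-local group, so the CPU iterator
		 * inside find_idlest_group_cpu() can never equal this_cpu. */
		new_cpu = find_idlest_group_cpu(group, p, cpu);
		/* ... continue the descent using new_cpu ... */
	}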
Reviewed-by: Brendan Jackman <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Signed-off-by: Joel Fernandes <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1b10821d8380..44407e703b5f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5948,7 +5948,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
}
} else if (shallowest_idle_cpu == -1) {
load = weighted_cpuload(cpu_rq(i));
- if (load < min_load || (load == min_load && i == this_cpu)) {
+ if (load < min_load) {
min_load = load;
least_loaded_cpu = i;
}
--
2.15.1.504.g5279b80103-goog
On Fri, Dec 15, 2017 at 07:39:43AM -0800, Joel Fernandes wrote:
> Since the remote cpufreq callback work, the cpufreq_update_util() call can happen
> from remote CPUs. The comment about local CPUs is thus obsolete. Update it
> accordingly.
>
> Reviewed-by: Viresh Kumar <[email protected]>
> Signed-off-by: Joel Fernandes <[email protected]>
Sure, ACK
> ---
> kernel/sched/fair.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2fe3aa853e4d..1b10821d8380 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3020,9 +3020,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
> /*
> * There are a few boundary cases this might miss but it should
> * get called often enough that that should (hopefully) not be
> - * a real problem -- added to that it only calls on the local
> - * CPU, so if we enqueue remotely we'll miss an update, but
> - * the next tick/schedule should update.
> + * a real problem.
> *
> * It will not get called when we go idle, because the idle
> * thread is a different class (!fair), nor will the utilization
> --
> 2.15.1.504.g5279b80103-goog
>
Commit-ID: 18cec7e0ddd5e28b7722f7049d715873373be3e9
Gitweb: https://git.kernel.org/tip/18cec7e0ddd5e28b7722f7049d715873373be3e9
Author: Joel Fernandes <[email protected]>
AuthorDate: Fri, 15 Dec 2017 07:39:44 -0800
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 10 Jan 2018 11:30:30 +0100
sched/fair: Remove impossible condition from find_idlest_group_cpu()
find_idlest_group_cpu() goes through the CPUs of a group previously selected by
find_idlest_group(). find_idlest_group() returns NULL if the local group is the
selected one, so find_idlest_group_cpu() is never executed for the group to
which 'cpu' belongs. We're therefore always guaranteed to call
find_idlest_group_cpu() with a group to which 'cpu' is non-local.
This makes one of the conditions in find_idlest_group_cpu() impossible, so we
can get rid of it.
Signed-off-by: Joel Fernandes <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Brendan Jackman <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Cc: Android Kernel <[email protected]>
Cc: Atish Patra <[email protected]>
Cc: Chris Redpath <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: EAS Dev <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Josef Bacik <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Morten Rasmussen <[email protected]>
Cc: Patrick Bellasi <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: Rohit Jain <[email protected]>
Cc: Saravana Kannan <[email protected]>
Cc: Srinivas Pandruvada <[email protected]>
Cc: Steve Muckle <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vikram Mulukutla <[email protected]>
Cc: Viresh Kumar <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e775ac..3e7606d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5950,7 +5950,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
}
} else if (shallowest_idle_cpu == -1) {
load = weighted_cpuload(cpu_rq(i));
- if (load < min_load || (load == min_load && i == this_cpu)) {
+ if (load < min_load) {
min_load = load;
least_loaded_cpu = i;
}
Commit-ID: 9783be2c0e90bbaceec3c471c4fb017bff7293ba
Gitweb: https://git.kernel.org/tip/9783be2c0e90bbaceec3c471c4fb017bff7293ba
Author: Joel Fernandes <[email protected]>
AuthorDate: Fri, 15 Dec 2017 07:39:43 -0800
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 10 Jan 2018 11:30:30 +0100
sched/fair: Correct obsolete comment about cpufreq_update_util()
Since the remote cpufreq callback work, the cpufreq_update_util() call can happen
from remote CPUs. The comment about local CPUs is thus obsolete. Update it
accordingly.
Signed-off-by: Joel Fernandes <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Viresh Kumar <[email protected]>
Cc: Android Kernel <[email protected]>
Cc: Atish Patra <[email protected]>
Cc: Chris Redpath <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: EAS Dev <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Josef Bacik <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Morten Rasmussen <[email protected]>
Cc: Patrick Bellasi <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: Rohit Jain <[email protected]>
Cc: Saravana Kannan <[email protected]>
Cc: Srinivas Pandruvada <[email protected]>
Cc: Steve Muckle <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vikram Mulukutla <[email protected]>
Cc: Vincent Guittot <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3e7606d..59e66a5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3020,9 +3020,7 @@ static inline void cfs_rq_util_change(struct cfs_rq *cfs_rq)
/*
* There are a few boundary cases this might miss but it should
* get called often enough that that should (hopefully) not be
- * a real problem -- added to that it only calls on the local
- * CPU, so if we enqueue remotely we'll miss an update, but
- * the next tick/schedule should update.
+ * a real problem.
*
* It will not get called when we go idle, because the idle
* thread is a different class (!fair), nor will the utilization