2012-11-26 16:40:29

by Fabio Baltieri

Subject: [PATCH v5 0/5] cpufreq: handle SW coordinated CPUs

Hello Rafael,

this patchset is a new version of the cpufreq SW coordinated CPU bug fix.
It is essentially v4 rebased on linux-pm's linux-next tree, split into five
patches for readability, and with the bug also fixed in the "conservative"
governor.

Regards,
Fabio


Changes:
v5
- rebased/reimplemented on linux-next
- split hotplug code into a separate patch
- split ondemand/conservative-specific fixes into separate patches
v4
- moved update_sampling_rate code into a separate patch
- simplified dbs_sw_coordinated_cpus
- reworked timer handling for code readability, now:
* do_dbs_timer() [-> dbs_timer_coordinated()] -> dbs_timer_update()
- simplified cpu_callback cases
v3
- original submission


Fabio Baltieri (4):
cpufreq: start/stop cpufreq timers on cpu hotplug
cpufreq: ondemand: call dbs_check_cpu only when necessary
cpufreq: conservative: call dbs_check_cpu only when necessary
cpufreq: ondemand: use all CPUs in update_sampling_rate

Rickard Andersson (1):
cpufreq: handle SW coordinated CPUs

drivers/cpufreq/cpufreq_conservative.c | 46 ++++++++++++--
drivers/cpufreq/cpufreq_governor.c | 106 +++++++++++++++++++++++++++++++--
drivers/cpufreq/cpufreq_governor.h | 2 +
drivers/cpufreq/cpufreq_ondemand.c | 62 +++++++++++++++----
4 files changed, 193 insertions(+), 23 deletions(-)

--
1.7.12.1


2012-11-26 16:40:40

by Fabio Baltieri

Subject: [PATCH 1/5] cpufreq: handle SW coordinated CPUs

From: Rickard Andersson <[email protected]>

This patch fixes a bug that occurred when we had load on a secondary CPU
and the primary CPU was sleeping. Only one sampling timer was spawned
and it was spawned as a deferred timer on the primary CPU, so when a
secondary CPU had a change in load this was not detected by the cpufreq
governor (both ondemand and conservative).

This patch makes sure that deferred timers are run on all CPUs in the
case of software-coordinated CPUs that run at the same frequency.

Signed-off-by: Rickard Andersson <[email protected]>
Signed-off-by: Fabio Baltieri <[email protected]>
---
drivers/cpufreq/cpufreq_conservative.c | 3 +-
drivers/cpufreq/cpufreq_governor.c | 52 ++++++++++++++++++++++++++++++----
drivers/cpufreq/cpufreq_governor.h | 1 +
drivers/cpufreq/cpufreq_ondemand.c | 3 +-
4 files changed, 51 insertions(+), 8 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 64ef737..b9d7f14 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -122,7 +122,8 @@ static void cs_dbs_timer(struct work_struct *work)

dbs_check_cpu(&cs_dbs_data, cpu);

- schedule_delayed_work_on(cpu, &dbs_info->cdbs.work, delay);
+ schedule_delayed_work_on(smp_processor_id(), &dbs_info->cdbs.work,
+ delay);
mutex_unlock(&dbs_info->cdbs.timer_mutex);
}

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 6c5f1d3..a00f02d 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -161,13 +161,31 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
}
EXPORT_SYMBOL_GPL(dbs_check_cpu);

+bool dbs_sw_coordinated_cpus(struct cpu_dbs_common_info *cdbs)
+{
+ struct cpufreq_policy *policy = cdbs->cur_policy;
+
+ return cpumask_weight(policy->cpus) > 1;
+}
+EXPORT_SYMBOL_GPL(dbs_sw_coordinated_cpus);
+
static inline void dbs_timer_init(struct dbs_data *dbs_data,
- struct cpu_dbs_common_info *cdbs, unsigned int sampling_rate)
+ struct cpu_dbs_common_info *cdbs,
+ unsigned int sampling_rate,
+ int cpu)
{
int delay = delay_for_sampling_rate(sampling_rate);
+ struct cpu_dbs_common_info *cdbs_local = dbs_data->get_cpu_cdbs(cpu);
+ struct od_cpu_dbs_info_s *od_dbs_info;
+
+ cancel_delayed_work_sync(&cdbs_local->work);
+
+ if (dbs_data->governor == GOV_ONDEMAND) {
+ od_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu);
+ od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
+ }

- INIT_DEFERRABLE_WORK(&cdbs->work, dbs_data->gov_dbs_timer);
- schedule_delayed_work_on(cdbs->cpu, &cdbs->work, delay);
+ schedule_delayed_work_on(cpu, &cdbs_local->work, delay);
}

static inline void dbs_timer_exit(struct cpu_dbs_common_info *cdbs)
@@ -217,6 +235,10 @@ int cpufreq_governor_dbs(struct dbs_data *dbs_data,
if (ignore_nice)
j_cdbs->prev_cpu_nice =
kcpustat_cpu(j).cpustat[CPUTIME_NICE];
+
+ mutex_init(&j_cdbs->timer_mutex);
+ INIT_DEFERRABLE_WORK(&j_cdbs->work,
+ dbs_data->gov_dbs_timer);
}

/*
@@ -275,15 +297,33 @@ second_time:
}
mutex_unlock(&dbs_data->mutex);

- mutex_init(&cpu_cdbs->timer_mutex);
- dbs_timer_init(dbs_data, cpu_cdbs, *sampling_rate);
+ if (dbs_sw_coordinated_cpus(cpu_cdbs)) {
+ for_each_cpu(j, policy->cpus) {
+ struct cpu_dbs_common_info *j_cdbs;
+
+ j_cdbs = dbs_data->get_cpu_cdbs(j);
+ dbs_timer_init(dbs_data, j_cdbs,
+ *sampling_rate, j);
+ }
+ } else {
+ dbs_timer_init(dbs_data, cpu_cdbs, *sampling_rate, cpu);
+ }
break;

case CPUFREQ_GOV_STOP:
if (dbs_data->governor == GOV_CONSERVATIVE)
cs_dbs_info->enable = 0;

- dbs_timer_exit(cpu_cdbs);
+ if (dbs_sw_coordinated_cpus(cpu_cdbs)) {
+ for_each_cpu(j, policy->cpus) {
+ struct cpu_dbs_common_info *j_cdbs;
+
+ j_cdbs = dbs_data->get_cpu_cdbs(j);
+ dbs_timer_exit(j_cdbs);
+ }
+ } else {
+ dbs_timer_exit(cpu_cdbs);
+ }

mutex_lock(&dbs_data->mutex);
mutex_destroy(&cpu_cdbs->timer_mutex);
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index f661654..5bf6fb8 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -171,6 +171,7 @@ static inline int delay_for_sampling_rate(unsigned int sampling_rate)

u64 get_cpu_idle_time(unsigned int cpu, u64 *wall);
void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
+bool dbs_sw_coordinated_cpus(struct cpu_dbs_common_info *cdbs);
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event);
#endif /* _CPUFREQ_GOVERNER_H */
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index cca3e9f..fe6e47c 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -239,7 +239,8 @@ static void od_dbs_timer(struct work_struct *work)
}
}

- schedule_delayed_work_on(cpu, &dbs_info->cdbs.work, delay);
+ schedule_delayed_work_on(smp_processor_id(), &dbs_info->cdbs.work,
+ delay);
mutex_unlock(&dbs_info->cdbs.timer_mutex);
}

--
1.7.12.1

2012-11-26 16:40:51

by Fabio Baltieri

Subject: [PATCH 2/5] cpufreq: start/stop cpufreq timers on cpu hotplug

Add a CPU notifier that starts and stops the individual per-core timers
on CPU hotplug events when running on CPUs with SW-coordinated frequency.

Signed-off-by: Fabio Baltieri <[email protected]>
---
drivers/cpufreq/cpufreq_governor.c | 51 ++++++++++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index a00f02d..b7c1f89 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -25,9 +25,12 @@
#include <linux/tick.h>
#include <linux/types.h>
#include <linux/workqueue.h>
+#include <linux/cpu.h>

#include "cpufreq_governor.h"

+static DEFINE_PER_CPU(struct dbs_data *, cpu_cur_dbs);
+
static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall)
{
u64 idle_time;
@@ -193,6 +196,46 @@ static inline void dbs_timer_exit(struct cpu_dbs_common_info *cdbs)
cancel_delayed_work_sync(&cdbs->work);
}

+static int __cpuinit cpu_callback(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+{
+ unsigned int cpu = (unsigned long)hcpu;
+ struct device *cpu_dev = get_cpu_device(cpu);
+ struct dbs_data *dbs_data = per_cpu(cpu_cur_dbs, cpu);
+ struct cpu_dbs_common_info *cpu_cdbs = dbs_data->get_cpu_cdbs(cpu);
+ unsigned int sampling_rate;
+
+ if (dbs_data->governor == GOV_CONSERVATIVE) {
+ struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
+ sampling_rate = cs_tuners->sampling_rate;
+ } else {
+ struct od_dbs_tuners *od_tuners = dbs_data->tuners;
+ sampling_rate = od_tuners->sampling_rate;
+ }
+
+ if (cpu_dev) {
+ switch (action) {
+ case CPU_ONLINE:
+ case CPU_ONLINE_FROZEN:
+ case CPU_DOWN_FAILED:
+ case CPU_DOWN_FAILED_FROZEN:
+ dbs_timer_init(dbs_data, cpu_cdbs,
+ sampling_rate, cpu);
+ break;
+ case CPU_DOWN_PREPARE:
+ case CPU_DOWN_PREPARE_FROZEN:
+ dbs_timer_exit(cpu_cdbs);
+ break;
+ }
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block __refdata ondemand_cpu_notifier = {
+ .notifier_call = cpu_callback,
+};
+
int cpufreq_governor_dbs(struct dbs_data *dbs_data,
struct cpufreq_policy *policy, unsigned int event)
{
@@ -304,7 +347,11 @@ second_time:
j_cdbs = dbs_data->get_cpu_cdbs(j);
dbs_timer_init(dbs_data, j_cdbs,
*sampling_rate, j);
+
+ per_cpu(cpu_cur_dbs, j) = dbs_data;
}
+
+ register_hotcpu_notifier(&ondemand_cpu_notifier);
} else {
dbs_timer_init(dbs_data, cpu_cdbs, *sampling_rate, cpu);
}
@@ -315,11 +362,15 @@ second_time:
cs_dbs_info->enable = 0;

if (dbs_sw_coordinated_cpus(cpu_cdbs)) {
+ unregister_hotcpu_notifier(&ondemand_cpu_notifier);
+
for_each_cpu(j, policy->cpus) {
struct cpu_dbs_common_info *j_cdbs;

j_cdbs = dbs_data->get_cpu_cdbs(j);
dbs_timer_exit(j_cdbs);
+
+ per_cpu(cpu_cur_dbs, j) = NULL;
}
} else {
dbs_timer_exit(cpu_cdbs);
--
1.7.12.1

2012-11-26 16:41:00

by Fabio Baltieri

Subject: [PATCH 3/5] cpufreq: ondemand: call dbs_check_cpu only when necessary

Modify the ondemand timer so that it does not resample CPU utilization
if it was recently sampled from another SW-coordinated core.

Signed-off-by: Fabio Baltieri <[email protected]>
---
drivers/cpufreq/cpufreq_governor.c | 3 ++
drivers/cpufreq/cpufreq_governor.h | 1 +
drivers/cpufreq/cpufreq_ondemand.c | 58 +++++++++++++++++++++++++++++++-------
3 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index b7c1f89..7e322e5 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -341,6 +341,9 @@ second_time:
mutex_unlock(&dbs_data->mutex);

if (dbs_sw_coordinated_cpus(cpu_cdbs)) {
+ /* Initiate timer time stamp */
+ cpu_cdbs->time_stamp = ktime_get();
+
for_each_cpu(j, policy->cpus) {
struct cpu_dbs_common_info *j_cdbs;

diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 5bf6fb8..aaf073d 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -82,6 +82,7 @@ struct cpu_dbs_common_info {
* the governor or limits.
*/
struct mutex timer_mutex;
+ ktime_t time_stamp;
};

struct od_cpu_dbs_info_s {
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index fe6e47c..e07593e 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -212,23 +212,23 @@ static void od_check_cpu(int cpu, unsigned int load_freq)
}
}

-static void od_dbs_timer(struct work_struct *work)
+static void od_timer_update(struct od_cpu_dbs_info_s *dbs_info, bool sample,
+ struct delayed_work *dw)
{
- struct od_cpu_dbs_info_s *dbs_info =
- container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work);
unsigned int cpu = dbs_info->cdbs.cpu;
int delay, sample_type = dbs_info->sample_type;

- mutex_lock(&dbs_info->cdbs.timer_mutex);
-
/* Common NORMAL_SAMPLE setup */
dbs_info->sample_type = OD_NORMAL_SAMPLE;
if (sample_type == OD_SUB_SAMPLE) {
delay = dbs_info->freq_lo_jiffies;
- __cpufreq_driver_target(dbs_info->cdbs.cur_policy,
- dbs_info->freq_lo, CPUFREQ_RELATION_H);
+ if (sample)
+ __cpufreq_driver_target(dbs_info->cdbs.cur_policy,
+ dbs_info->freq_lo,
+ CPUFREQ_RELATION_H);
} else {
- dbs_check_cpu(&od_dbs_data, cpu);
+ if (sample)
+ dbs_check_cpu(&od_dbs_data, cpu);
if (dbs_info->freq_lo) {
/* Setup timer for SUB_SAMPLE */
dbs_info->sample_type = OD_SUB_SAMPLE;
@@ -239,11 +239,49 @@ static void od_dbs_timer(struct work_struct *work)
}
}

- schedule_delayed_work_on(smp_processor_id(), &dbs_info->cdbs.work,
- delay);
+ schedule_delayed_work_on(smp_processor_id(), dw, delay);
+}
+
+static void od_timer_coordinated(struct od_cpu_dbs_info_s *dbs_info_local,
+ struct delayed_work *dw)
+{
+ struct od_cpu_dbs_info_s *dbs_info;
+ ktime_t time_now;
+ s64 delta_us;
+ bool sample = true;
+
+ /* use leader CPU's dbs_info */
+ dbs_info = &per_cpu(od_cpu_dbs_info, dbs_info_local->cdbs.cpu);
+ mutex_lock(&dbs_info->cdbs.timer_mutex);
+
+ time_now = ktime_get();
+ delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);
+
+ /* Do nothing if we recently have sampled */
+ if (delta_us < (s64)(od_tuners.sampling_rate / 2))
+ sample = false;
+ else
+ dbs_info->cdbs.time_stamp = time_now;
+
+ od_timer_update(dbs_info, sample, dw);
mutex_unlock(&dbs_info->cdbs.timer_mutex);
}

+static void od_dbs_timer(struct work_struct *work)
+{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct od_cpu_dbs_info_s *dbs_info =
+ container_of(work, struct od_cpu_dbs_info_s, cdbs.work.work);
+
+ if (dbs_sw_coordinated_cpus(&dbs_info->cdbs)) {
+ od_timer_coordinated(dbs_info, dw);
+ } else {
+ mutex_lock(&dbs_info->cdbs.timer_mutex);
+ od_timer_update(dbs_info, true, dw);
+ mutex_unlock(&dbs_info->cdbs.timer_mutex);
+ }
+}
+
/************************** sysfs interface ************************/

static ssize_t show_sampling_rate_min(struct kobject *kobj,
--
1.7.12.1

2012-11-26 16:41:10

by Fabio Baltieri

Subject: [PATCH 4/5] cpufreq: conservative: call dbs_check_cpu only when necessary

Modify the conservative timer so that it does not resample CPU
utilization if it was recently sampled from another SW-coordinated core.

Signed-off-by: Fabio Baltieri <[email protected]>
---
drivers/cpufreq/cpufreq_conservative.c | 47 +++++++++++++++++++++++++++++-----
1 file changed, 41 insertions(+), 6 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index b9d7f14..5d8e894 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -111,22 +111,57 @@ static void cs_check_cpu(int cpu, unsigned int load)
}
}

-static void cs_dbs_timer(struct work_struct *work)
+static void cs_timer_update(struct cs_cpu_dbs_info_s *dbs_info, bool sample,
+ struct delayed_work *dw)
{
- struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
- struct cs_cpu_dbs_info_s, cdbs.work.work);
unsigned int cpu = dbs_info->cdbs.cpu;
int delay = delay_for_sampling_rate(cs_tuners.sampling_rate);

+ if (sample)
+ dbs_check_cpu(&cs_dbs_data, cpu);
+
+ schedule_delayed_work_on(smp_processor_id(), dw, delay);
+}
+
+static void cs_timer_coordinated(struct cs_cpu_dbs_info_s *dbs_info_local,
+ struct delayed_work *dw)
+{
+ struct cs_cpu_dbs_info_s *dbs_info;
+ ktime_t time_now;
+ s64 delta_us;
+ bool sample = true;
+
+ /* use leader CPU's dbs_info */
+ dbs_info = &per_cpu(cs_cpu_dbs_info, dbs_info_local->cdbs.cpu);
mutex_lock(&dbs_info->cdbs.timer_mutex);

- dbs_check_cpu(&cs_dbs_data, cpu);
+ time_now = ktime_get();
+ delta_us = ktime_us_delta(time_now, dbs_info->cdbs.time_stamp);

- schedule_delayed_work_on(smp_processor_id(), &dbs_info->cdbs.work,
- delay);
+ /* Do nothing if we recently have sampled */
+ if (delta_us < (s64)(cs_tuners.sampling_rate / 2))
+ sample = false;
+ else
+ dbs_info->cdbs.time_stamp = time_now;
+
+ cs_timer_update(dbs_info, sample, dw);
mutex_unlock(&dbs_info->cdbs.timer_mutex);
}

+static void cs_dbs_timer(struct work_struct *work)
+{
+ struct delayed_work *dw = to_delayed_work(work);
+ struct cs_cpu_dbs_info_s *dbs_info = container_of(work,
+ struct cs_cpu_dbs_info_s, cdbs.work.work);
+
+ if (dbs_sw_coordinated_cpus(&dbs_info->cdbs)) {
+ cs_timer_coordinated(dbs_info, dw);
+ } else {
+ mutex_lock(&dbs_info->cdbs.timer_mutex);
+ cs_timer_update(dbs_info, true, dw);
+ mutex_unlock(&dbs_info->cdbs.timer_mutex);
+ }
+}
static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
void *data)
{
--
1.7.12.1

2012-11-26 16:41:21

by Fabio Baltieri

Subject: [PATCH 5/5] cpufreq: ondemand: use all CPUs in update_sampling_rate

Modify update_sampling_rate() to check, and if necessary immediately
reschedule, each CPU's do_dbs_timer() delayed work.

This is required in the case of software-coordinated CPUs, as we now
have a separate delayed work for each CPU.

Signed-off-by: Fabio Baltieri <[email protected]>
---
drivers/cpufreq/cpufreq_ondemand.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index e07593e..e34bb2b 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -318,7 +318,7 @@ static void update_sampling_rate(unsigned int new_rate)
policy = cpufreq_cpu_get(cpu);
if (!policy)
continue;
- dbs_info = &per_cpu(od_cpu_dbs_info, policy->cpu);
+ dbs_info = &per_cpu(od_cpu_dbs_info, cpu);
cpufreq_cpu_put(policy);

mutex_lock(&dbs_info->cdbs.timer_mutex);
@@ -337,8 +337,7 @@ static void update_sampling_rate(unsigned int new_rate)
cancel_delayed_work_sync(&dbs_info->cdbs.work);
mutex_lock(&dbs_info->cdbs.timer_mutex);

- schedule_delayed_work_on(dbs_info->cdbs.cpu,
- &dbs_info->cdbs.work,
+ schedule_delayed_work_on(cpu, &dbs_info->cdbs.work,
usecs_to_jiffies(new_rate));

}
--
1.7.12.1

2012-11-27 22:01:11

by Rafael J. Wysocki

Subject: Re: [PATCH 1/5] cpufreq: handle SW coordinated CPUs

On Monday, November 26, 2012 05:39:52 PM Fabio Baltieri wrote:
> From: Rickard Andersson <[email protected]>
>
> This patch fixes a bug that occurred when we had load on a secondary CPU
> and the primary CPU was sleeping. Only one sampling timer was spawned
> and it was spawned as a deferred timer on the primary CPU, so when a
> secondary CPU had a change in load this was not detected by the cpufreq
> governor (both ondemand and conservative).
>
> This patch make sure that deferred timers are run on all CPUs in the
> case of software controlled CPUs that run on the same frequency.
>
> Signed-off-by: Rickard Andersson <[email protected]>
> Signed-off-by: Fabio Baltieri <[email protected]>
> ---
> drivers/cpufreq/cpufreq_conservative.c | 3 +-
> drivers/cpufreq/cpufreq_governor.c | 52 ++++++++++++++++++++++++++++++----
> drivers/cpufreq/cpufreq_governor.h | 1 +
> drivers/cpufreq/cpufreq_ondemand.c | 3 +-
> 4 files changed, 51 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
> index 64ef737..b9d7f14 100644
> --- a/drivers/cpufreq/cpufreq_conservative.c
> +++ b/drivers/cpufreq/cpufreq_conservative.c
> @@ -122,7 +122,8 @@ static void cs_dbs_timer(struct work_struct *work)
>
> dbs_check_cpu(&cs_dbs_data, cpu);
>
> - schedule_delayed_work_on(cpu, &dbs_info->cdbs.work, delay);
> + schedule_delayed_work_on(smp_processor_id(), &dbs_info->cdbs.work,
> + delay);
> mutex_unlock(&dbs_info->cdbs.timer_mutex);
> }
>
> diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
> index 6c5f1d3..a00f02d 100644
> --- a/drivers/cpufreq/cpufreq_governor.c
> +++ b/drivers/cpufreq/cpufreq_governor.c
> @@ -161,13 +161,31 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
> }
> EXPORT_SYMBOL_GPL(dbs_check_cpu);
>
> +bool dbs_sw_coordinated_cpus(struct cpu_dbs_common_info *cdbs)
> +{
> + struct cpufreq_policy *policy = cdbs->cur_policy;
> +
> + return cpumask_weight(policy->cpus) > 1;
> +}
> +EXPORT_SYMBOL_GPL(dbs_sw_coordinated_cpus);
> +
> static inline void dbs_timer_init(struct dbs_data *dbs_data,
> - struct cpu_dbs_common_info *cdbs, unsigned int sampling_rate)
> + struct cpu_dbs_common_info *cdbs,
> + unsigned int sampling_rate,
> + int cpu)
> {
> int delay = delay_for_sampling_rate(sampling_rate);
> + struct cpu_dbs_common_info *cdbs_local = dbs_data->get_cpu_cdbs(cpu);
> + struct od_cpu_dbs_info_s *od_dbs_info;
> +
> + cancel_delayed_work_sync(&cdbs_local->work);
> +
> + if (dbs_data->governor == GOV_ONDEMAND) {
> + od_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu);
> + od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
> + }

The patch looks good in general except for the special case above.

Why exactly is it necessary?

Rafael


--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.

2012-11-28 10:51:31

by Fabio Baltieri

Subject: Re: [PATCH 1/5] cpufreq: handle SW coordinated CPUs

Hello Rafael,

On Tue, Nov 27, 2012 at 11:05:52PM +0100, Rafael J. Wysocki wrote:
> > static inline void dbs_timer_init(struct dbs_data *dbs_data,
> > - struct cpu_dbs_common_info *cdbs, unsigned int sampling_rate)
> > + struct cpu_dbs_common_info *cdbs,
> > + unsigned int sampling_rate,
> > + int cpu)
> > {
> > int delay = delay_for_sampling_rate(sampling_rate);
> > + struct cpu_dbs_common_info *cdbs_local = dbs_data->get_cpu_cdbs(cpu);
> > + struct od_cpu_dbs_info_s *od_dbs_info;
> > +
> > + cancel_delayed_work_sync(&cdbs_local->work);
> > +
> > + if (dbs_data->governor == GOV_ONDEMAND) {
> > + od_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu);
> > + od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
> > + }
>
> The patch looks good in general except for the special case above.
>
> Why exactly is it necessary?

Now that you point it out... it's not! It was part of the ondemand init
code that moved into cpufreq_governor_dbs, and I forgot to take it out
along the way.

Also, I think the cancel_delayed_work_sync() call can be removed too.

Should I send an updated version as soon as I get an ack for the other
patches in the series or do you want me to wait until 3.8-rc1?

Thanks,
Fabio

--
Fabio Baltieri

2012-11-28 12:45:11

by Rafael J. Wysocki

Subject: Re: [PATCH 1/5] cpufreq: handle SW coordinated CPUs

On Wednesday, November 28, 2012 11:51:20 AM Fabio Baltieri wrote:
> Hello Rafael,

Hi,

> On Tue, Nov 27, 2012 at 11:05:52PM +0100, Rafael J. Wysocki wrote:
> > > static inline void dbs_timer_init(struct dbs_data *dbs_data,
> > > - struct cpu_dbs_common_info *cdbs, unsigned int sampling_rate)
> > > + struct cpu_dbs_common_info *cdbs,
> > > + unsigned int sampling_rate,
> > > + int cpu)
> > > {
> > > int delay = delay_for_sampling_rate(sampling_rate);
> > > + struct cpu_dbs_common_info *cdbs_local = dbs_data->get_cpu_cdbs(cpu);
> > > + struct od_cpu_dbs_info_s *od_dbs_info;
> > > +
> > > + cancel_delayed_work_sync(&cdbs_local->work);
> > > +
> > > + if (dbs_data->governor == GOV_ONDEMAND) {
> > > + od_dbs_info = dbs_data->get_cpu_dbs_info_s(cpu);
> > > + od_dbs_info->sample_type = OD_NORMAL_SAMPLE;
> > > + }
> >
> > The patch looks good in general except for the special case above.
> >
> > Why exactly is it necessary?
>
> Now that you point it out... it's not! It was part of ondemand init and
> moved in cpufreq_governor_dbs, I forgot to take it out the way.
>
> Also, I think that cancel_delayed_work_sync can be removed too.
>
> Should I send an updated version as soon as I get an ack for the other
> patches in the series or do you want me to wait until 3.8-rc1?

Well, if it's not very urgent, I'd prefer it to wait a bit longer,
get some more testing and so on.

Thanks,
Rafael


--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.