2014-04-27 22:12:30

by Stratos Karafotis

Subject: [PATCH] cpufreq: intel_pstate: Change the calculation of next pstate

Currently the driver calculates the next pstate as proportional to the
core_busy factor and inversely proportional to the current pstate.

Change the above method so that the next pstate is calculated
independently of the current pstate.
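
In other words (an informal sketch using the driver's fixed-point
helpers; not part of the diff below):

    /* old: busy is scaled by the ratio max_pstate / current_pstate */
    busy_scaled = mul_fp(core_busy, div_fp(max_pstate, current_pstate));

    /* new: busy depends only on core_busy and max_pstate */
    busy = mul_fp(core_busy, max_pstate);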

Tested on Intel i7-3770 CPU @ 3.40GHz.
The Phoronix Linux Kernel Compilation 3.1 benchmark shows a ~1.5%
increase in performance. Below are the test results using turbostat
(5 iterations):

Without patch:

Ph. avg Time (s)   Total time (s)   PkgWatt (W)   Total Energy (J)
       79.63           266.416          57.74        15382.85984
       79.63           265.609          57.87        15370.79283
       79.57           266.994          57.54        15362.83476
       79.53           265.304          57.83        15342.53032
       79.71           265.977          57.76        15362.83152
avg    79.61           266.06           57.74        15364.36985

With patch:

Ph. avg Time (s)   Total time (s)   PkgWatt (W)   Total Energy (J)
       78.23           258.826          59.14        15306.96964
       78.41           259.110          59.15        15326.35650
       78.40           258.530          59.26        15320.48780
       78.46           258.673          59.20        15313.44160
       78.19           259.075          59.16        15326.87700
avg    78.34           258.842          59.18        15318.82650

The total test time was reduced by ~2.6%, while the total energy
consumption during a test iteration was reduced by ~0.35%.

Signed-off-by: Stratos Karafotis <[email protected]>
---
drivers/cpufreq/intel_pstate.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 0999673..8e309db 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -608,28 +608,27 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
         mod_timer_pinned(&cpu->timer, jiffies + delay);
 }
 
-static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
+static inline int32_t intel_pstate_get_busy(struct cpudata *cpu)
 {
-        int32_t core_busy, max_pstate, current_pstate;
+        int32_t core_busy, max_pstate;
 
         core_busy = cpu->sample.core_pct_busy;
         max_pstate = int_tofp(cpu->pstate.max_pstate);
-        current_pstate = int_tofp(cpu->pstate.current_pstate);
-        core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
+        core_busy = mul_fp(core_busy, max_pstate);
         return FP_ROUNDUP(core_busy);
 }
 
 static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
 {
-        int32_t busy_scaled;
+        int32_t busy;
         struct _pid *pid;
         signed int ctl = 0;
         int steps;
 
         pid = &cpu->pid;
-        busy_scaled = intel_pstate_get_scaled_busy(cpu);
+        busy = intel_pstate_get_busy(cpu);
 
-        ctl = pid_calc(pid, busy_scaled);
+        ctl = pid_calc(pid, busy);
 
         steps = abs(ctl);
 
@@ -651,7 +650,7 @@ static void intel_pstate_timer_func(unsigned long __data)
         intel_pstate_adjust_busy_pstate(cpu);
 
         trace_pstate_sample(fp_toint(sample->core_pct_busy),
-                        fp_toint(intel_pstate_get_scaled_busy(cpu)),
+                        fp_toint(intel_pstate_get_busy(cpu)),
                         cpu->pstate.current_pstate,
                         sample->mperf,
                         sample->aperf,
--
1.9.0


2014-04-29 04:58:06

by Viresh Kumar

Subject: Re: [PATCH] cpufreq: intel_pstate: Change the calculation of next pstate

Cc'd Dirk,

On 28 April 2014 03:42, Stratos Karafotis <[email protected]> wrote:
> Currently the driver calculates the next pstate as proportional to the
> core_busy factor and inversely proportional to the current pstate.
>
> Change the above method so that the next pstate is calculated
> independently of the current pstate.

We must mention why the change is required.

2014-04-29 16:34:50

by Stratos Karafotis

Subject: Re: [PATCH] cpufreq: intel_pstate: Change the calculation of next pstate

On 29/04/2014 07:58 AM, Viresh Kumar wrote:
> Cc'd Dirk,
>
> On 28 April 2014 03:42, Stratos Karafotis <[email protected]> wrote:
>> Currently the driver calculates the next pstate as proportional to the
>> core_busy factor and inversely proportional to the current pstate.
>>
>> Change the above method so that the next pstate is calculated
>> independently of the current pstate.
>
> We must mention why the change is required.
>

Hi Viresh,

Actually, I can't say that it's required. :)
I just believe that the calculation of the next p-state should be
independent of the current one. In my opinion we can't scale the load
across different p-states, because the scaled value is not always an
equivalent measure of the load.

For example, suppose a load of 100% caused by a tight for loop at the
current p-state. It will also be a 100% load at any other p-state.
So it would be wrong to scale the load in the calculation formula
according to the current p-state.
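
To put some numbers on it (illustrative values, not from a real trace):
with the current formula and max_pstate = 34, that fully busy core is
reported to the PID as

    scaled_busy = 100 * 34 / 17 = 200   (at current_pstate = 17)
    scaled_busy = 100 * 34 / 34 = 100   (at current_pstate = 34)

so the very same load looks twice as "busy" at the lower p-state.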

I included the test results in the change log to point out the
improvement brought by this patch.

I will enrich the change log further, as you suggested.

Thanks,
Stratos Karafotis

2014-04-29 21:35:51

by Rafael J. Wysocki

Subject: Re: [PATCH] cpufreq: intel_pstate: Change the calculation of next pstate

On Tuesday, April 29, 2014 07:34:46 PM Stratos Karafotis wrote:
> On 29/04/2014 07:58 AM, Viresh Kumar wrote:
> > Cc'd Dirk,
> >
> > On 28 April 2014 03:42, Stratos Karafotis <[email protected]> wrote:
> >> Currently the driver calculates the next pstate as proportional to the
> >> core_busy factor and inversely proportional to the current pstate.
> >>
> >> Change the above method so that the next pstate is calculated
> >> independently of the current pstate.
> >
> > We must mention why the change is required.
> >
>
> Hi Viresh,
>
> Actually, I can't say that it's required. :)
> I just believe that the calculation of the next p-state should be
> independent of the current one. In my opinion we can't scale the load
> across different p-states, because the scaled value is not always an
> equivalent measure of the load.
>
> For example, suppose a load of 100% caused by a tight for loop at the
> current p-state. It will also be a 100% load at any other p-state.
> So it would be wrong to scale the load in the calculation formula
> according to the current p-state.
>
> I included the test results in the change log to point out the
> improvement brought by this patch.
>
> I will enrich the change log further, as you suggested.

Please do so.

Also, we need to take your patch to our power lab and see if we can reproduce
your results in other workloads.

And I'm waiting for the intel_pstate developer Dirk Brandewie to comment.

Thanks!

--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.