2019-12-06 16:16:39

by Peng Wang

Subject: [PATCH] schied/fair: Skip updating "contrib" without load

We only update load_sum/runnable_load_sum/util_sum with the
decayed old sum when load is zero.

Signed-off-by: Peng Wang <[email protected]>
---
kernel/sched/pelt.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index a96db50..4392953 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
                  * Step 2
                  */
                 delta %= 1024;
-                contrib = __accumulate_pelt_segments(periods,
-                                1024 - sa->period_contrib, delta);
+                if (load)
+                        contrib = __accumulate_pelt_segments(periods,
+                                        1024 - sa->period_contrib, delta);
         }
         sa->period_contrib = delta;

--
1.8.3.1


2019-12-09 16:17:51

by Peter Zijlstra

Subject: Re: [PATCH] schied/fair: Skip updating "contrib" without load

On Sat, Dec 07, 2019 at 12:14:22AM +0800, Peng Wang wrote:
> We only update load_sum/runnable_load_sum/util_sum with the
> decayed old sum when load is zero.

What you're saying is that because of the:

        if (!load)
                runnable = running = 0;

clause in ___update_load_sum(), all the actual users of @contrib in
accumulate_sum():

        if (load)
                sa->load_sum += load * contrib;
        if (runnable)
                sa->runnable_load_sum += runnable * contrib;
        if (running)
                sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;

don't happen, and therefore we don't care what @contrib actually is and
calculating it is pointless.
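
Put differently, when load == 0 the only work left in accumulate_sum() is the
Step 1 decay of the old sums and the sa->period_contrib update; the Step 2
segment accumulation feeds nothing.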

I suppose that is so. Did you happen to have performance numbers? Also,
I'm thinking this wants a comment.

> Signed-off-by: Peng Wang <[email protected]>
> ---
> kernel/sched/pelt.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..4392953 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
>                   * Step 2
>                   */
>                  delta %= 1024;
> -                contrib = __accumulate_pelt_segments(periods,
> -                                1024 - sa->period_contrib, delta);
> +                if (load)
> +                        contrib = __accumulate_pelt_segments(periods,
> +                                        1024 - sa->period_contrib, delta);
>          }
>          sa->period_contrib = delta;
>
> --
> 1.8.3.1
>

2019-12-11 12:18:18

by Peng Wang

Subject: Re: [PATCH] schied/fair: Skip updating "contrib" without load

On Mon, Dec 09, 2019 at 05:16:27PM +0100, Peter Zijlstra wrote:
> On Sat, Dec 07, 2019 at 12:14:22AM +0800, Peng Wang wrote:
> > We only update load_sum/runnable_load_sum/util_sum with the
> > decayed old sum when load is zero.
>
> What you're saying is that because of the:
>
>         if (!load)
>                 runnable = running = 0;
>
> clause in ___update_load_sum(), all the actual users of @contrib in
> accumulate_sum():
>
>         if (load)
>                 sa->load_sum += load * contrib;
>         if (runnable)
>                 sa->runnable_load_sum += runnable * contrib;
>         if (running)
>                 sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
>
> don't happen, and therefore we don't care what @contrib actually is and
> calculating it is pointless.

Yes.

>
> I suppose that is so. Did you happen to have performance numbers? Also,
> I'm thinking this wants a comment.

Actually I don't know how to get exact performance numbers, but I counted
the times when @load equals zero and when it does not, as below:

        if (load) {
                load_is_not_zero_count++;
                contrib = __accumulate_pelt_segments(periods,
                                1024 - sa->period_contrib, delta);
        } else
                load_is_zero_count++;

As we can see, load_is_zero_count is much bigger than
load_is_not_zero_count, and the gap is gradually widening:

load_is_zero_count: 6016044 times
load_is_not_zero_count: 244316 times
19:50:43 up 1 min, 1 user, load average: 0.09, 0.06, 0.02

load_is_zero_count: 7956168 times
load_is_not_zero_count: 261472 times
19:51:42 up 2 min, 1 user, load average: 0.03, 0.05, 0.01

load_is_zero_count: 10199896 times
load_is_not_zero_count: 278364 times
19:52:51 up 3 min, 1 user, load average: 0.06, 0.05, 0.01

load_is_zero_count: 14333700 times
load_is_not_zero_count: 318424 times
19:54:53 up 5 min, 1 user, load average: 0.01, 0.03, 0.00
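
Taking the last sample as a rough ratio, 14333700 / (14333700 + 318424) is
about 0.978, so on this mostly idle system roughly 98% of the accumulate_sum()
calls see load == 0 and would skip the __accumulate_pelt_segments() call.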

Perhaps we can gain some performance advantage by saving these unnecessary calculations.

>
> > Signed-off-by: Peng Wang <[email protected]>
> > ---
> > kernel/sched/pelt.c | 5 +++--
> > 1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> > index a96db50..4392953 100644
> > --- a/kernel/sched/pelt.c
> > +++ b/kernel/sched/pelt.c
> > @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
> >                   * Step 2
> >                   */
> >                  delta %= 1024;
> > -                contrib = __accumulate_pelt_segments(periods,
> > -                                1024 - sa->period_contrib, delta);
> > +                if (load)
> > +                        contrib = __accumulate_pelt_segments(periods,
> > +                                        1024 - sa->period_contrib, delta);
> >          }
> >          sa->period_contrib = delta;
> >
> > --
> > 1.8.3.1
> >

2019-12-13 04:05:31

by Peng Wang

Subject: [PATCH v2] schied/fair: Skip calculating @contrib without load

Because of the:

        if (!load)
                runnable = running = 0;

clause in ___update_load_sum(), all the actual users of @contrib in
accumulate_sum():

        if (load)
                sa->load_sum += load * contrib;
        if (runnable)
                sa->runnable_load_sum += runnable * contrib;
        if (running)
                sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;

don't happen, and therefore we don't care what @contrib actually is and
calculating it is pointless.

If we count the times when @load equals zero and when it does not, as below:

        if (load) {
                load_is_not_zero_count++;
                contrib = __accumulate_pelt_segments(periods,
                                1024 - sa->period_contrib, delta);
        } else
                load_is_zero_count++;

As we can see, load_is_zero_count is much bigger than
load_is_not_zero_count, and the gap is gradually widening:

load_is_zero_count: 6016044 times
load_is_not_zero_count: 244316 times
19:50:43 up 1 min, 1 user, load average: 0.09, 0.06, 0.02

load_is_zero_count: 7956168 times
load_is_not_zero_count: 261472 times
19:51:42 up 2 min, 1 user, load average: 0.03, 0.05, 0.01

load_is_zero_count: 10199896 times
load_is_not_zero_count: 278364 times
19:52:51 up 3 min, 1 user, load average: 0.06, 0.05, 0.01

load_is_zero_count: 14333700 times
load_is_not_zero_count: 318424 times
19:54:53 up 5 min, 1 user, load average: 0.01, 0.03, 0.00

Perhaps we can gain some performance advantage by saving these
unnecessary calculations.

Signed-off-by: Peng Wang <[email protected]>
---
kernel/sched/pelt.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index a96db50..4392953 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
                  * Step 2
                  */
                 delta %= 1024;
-                contrib = __accumulate_pelt_segments(periods,
-                                1024 - sa->period_contrib, delta);
+                if (load)
+                        contrib = __accumulate_pelt_segments(periods,
+                                        1024 - sa->period_contrib, delta);
         }
         sa->period_contrib = delta;

--
1.8.3.1

2019-12-13 09:24:13

by Vincent Guittot

Subject: Re: [PATCH v2] schied/fair: Skip calculating @contrib without load

Hi Peng,

minor typo on the subject s/schied/sched/

On Fri, 13 Dec 2019 at 04:46, Peng Wang <[email protected]> wrote:
>
> Because of the:
>
>         if (!load)
>                 runnable = running = 0;
>
> clause in ___update_load_sum(), all the actual users of @contrib in
> accumulate_sum():
>
>         if (load)
>                 sa->load_sum += load * contrib;
>         if (runnable)
>                 sa->runnable_load_sum += runnable * contrib;
>         if (running)
>                 sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
>
> don't happen, and therefore we don't care what @contrib actually is and
> calculating it is pointless.
>
> If we count the times when @load equals zero and when it does not, as below:
>
>         if (load) {
>                 load_is_not_zero_count++;
>                 contrib = __accumulate_pelt_segments(periods,
>                                 1024 - sa->period_contrib, delta);
>         } else
>                 load_is_zero_count++;
>
> As we can see, load_is_zero_count is much bigger than
> load_is_not_zero_count, and the gap is gradually widening:
>
> load_is_zero_count: 6016044 times
> load_is_not_zero_count: 244316 times
> 19:50:43 up 1 min, 1 user, load average: 0.09, 0.06, 0.02
>
> load_is_zero_count: 7956168 times
> load_is_not_zero_count: 261472 times
> 19:51:42 up 2 min, 1 user, load average: 0.03, 0.05, 0.01
>
> load_is_zero_count: 10199896 times
> load_is_not_zero_count: 278364 times
> 19:52:51 up 3 min, 1 user, load average: 0.06, 0.05, 0.01
>
> load_is_zero_count: 14333700 times
> load_is_not_zero_count: 318424 times
> 19:54:53 up 5 min, 1 user, load average: 0.01, 0.03, 0.00

Your system looks pretty idle, so I'm not sure we would see a benefit from
your patch in such a situation.

>
> Perhaps we can gain some performance advantage by saving these
> unnecessary calculations.

load == 0 when:
- the system is idle and we update blocked load, but we don't really care
about performance in this case, and we will stop triggering the update
once load_avg reaches a null value
- an rt/dl/cfs rq or a sched_entity wakes up. In this case, skipping
the calculation of the contribution should speed up the wakeup path,
although I'm not sure by how much
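
(For reference, the wakeup case reaches accumulate_sum() through a path like
enqueue_entity() -> update_load_avg() -> __update_load_avg_se() ->
___update_load_sum(); the exact chain depends on the caller, so this is only
a rough sketch, but the saving there is one __accumulate_pelt_segments() call
per such update.)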

Nevertheless, this change makes sense

Reviewed-by: Vincent Guittot <[email protected]>

>
> Signed-off-by: Peng Wang <[email protected]>
> ---
> kernel/sched/pelt.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..4392953 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
>                   * Step 2
>                   */
>                  delta %= 1024;
> -                contrib = __accumulate_pelt_segments(periods,
> -                                1024 - sa->period_contrib, delta);
> +                if (load)
> +                        contrib = __accumulate_pelt_segments(periods,
> +                                        1024 - sa->period_contrib, delta);
>          }
>          sa->period_contrib = delta;
>
> --
> 1.8.3.1
>

2019-12-13 12:14:24

by Peter Zijlstra

Subject: Re: [PATCH v2] schied/fair: Skip calculating @contrib without load

On Fri, Dec 13, 2019 at 11:45:40AM +0800, Peng Wang wrote:
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..4392953 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
>                   * Step 2
>                   */
>                  delta %= 1024;
> -                contrib = __accumulate_pelt_segments(periods,
> -                                1024 - sa->period_contrib, delta);
> +                if (load)
> +                        contrib = __accumulate_pelt_segments(periods,
> +                                        1024 - sa->period_contrib, delta);
>          }
>          sa->period_contrib = delta;

I've made that:

--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -129,8 +129,20 @@ accumulate_sum(u64 delta, struct sched_a
                  * Step 2
                  */
                 delta %= 1024;
-                contrib = __accumulate_pelt_segments(periods,
-                                1024 - sa->period_contrib, delta);
+                if (load) {
+                        /*
+                         * This relies on the:
+                         *
+                         *   if (!load)
+                         *     runnable = running = 0;
+                         *
+                         * clause from ___update_load_sum(); this results in
+                         * the below usage of @contrib to disappear entirely,
+                         * so no point in calculating it.
+                         */
+                        contrib = __accumulate_pelt_segments(periods,
+                                        1024 - sa->period_contrib, delta);
+                }
         }
         sa->period_contrib = delta;

@@ -205,7 +217,9 @@ ___update_load_sum(u64 now, struct sched
          * This means that weight will be 0 but not running for a sched_entity
          * but also for a cfs_rq if the latter becomes idle. As an example,
          * this happens during idle_balance() which calls
-         * update_blocked_averages()
+         * update_blocked_averages().
+         *
+         * Also see the comment in accumulate_sum().
          */
         if (!load)
                 runnable = running = 0;