2011-06-21 07:21:34

by Paul Turner

Subject: [patch 00/16] CFS Bandwidth Control v7

Hi all,

Please find attached the latest iteration of bandwidth control (v7).

This release continues the scouring started in v5 and v6, this time with
attention paid to timers, quota expiration, and the reclaim path.

Thanks to Hidetoshi Seto for taking the time to review the previous series.

v7
------------
optimizations/tweaks:
- no need to reschedule on an enqueue_throttle
- bandwidth is reclaimed at time of dequeue rather than put_prev_entity; this
prevents us from losing small slices of bandwidth to load-balance movement.

quota/period handling:
- runtime expiration now handles sched_clock wrap (a wrap-safe comparison is
sketched just below this list)
- bandwidth is now reclaimed at time of dequeue rather than put_prev_entity;
previously this was resulting in load-balance stranding small amounts of
bandwidth.
- logic for handling the bandwidth timer is now better unified with idle state
accounting; races with period expiration during hrtimer tear-down are resolved
- fixed wake-up into a new quota period waiting for the timer to replenish
bandwidth.
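
As an illustration of the wrap-safe comparison mentioned above, here is a
minimal sketch in the spirit of the kernel's time_after() helpers; the
function name is made up and this is not the actual patch code:

    #include <stdint.h>

    /*
     * Unsigned subtraction wraps modulo 2^64, and reinterpreting the
     * difference as signed gives the right answer as long as the two
     * timestamps are within 2^63 ns of each other, so the test keeps
     * working across a sched_clock wrap.
     */
    static inline int runtime_expired(uint64_t now, uint64_t expires)
    {
            return (int64_t)(now - expires) > 0;
    }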

misc:
- fixed stats not being accumulated for unthrottled periods [thanks H. Seto]
- fixed nr_running corruption in enqueue/dequeue_task_fair [thanks H. Seto]
- the consistency requirement is now specified as max(child bandwidth) <=
parent bandwidth (a sketch of this check follows the list); the sysctl
controlling this behavior was nuked
- throttling not enabled until both throttle and unthrottle mechanisms are in
place.
- bunch of minor cleanups per list discussion
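
For reference, the max(child bandwidth) <= parent bandwidth rule above amounts
to comparing quota/period ratios.  A minimal sketch follows; it is illustrative
only (the names are made up, and the real check in "sched: validate CFS quota
hierarchies" walks the task_group hierarchy):

    #include <stdint.h>

    /*
     * A child group is acceptable if its bandwidth (quota/period) does not
     * exceed its parent's.  Cross-multiplying avoids division; the 128-bit
     * products (a GCC/Clang extension) avoid overflow for large quotas.
     */
    static int child_bandwidth_ok(uint64_t child_quota, uint64_t child_period,
                                  uint64_t parent_quota, uint64_t parent_period)
    {
            /* child_quota / child_period <= parent_quota / parent_period */
            return (unsigned __int128)child_quota * parent_period <=
                   (unsigned __int128)parent_quota * child_period;
    }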

Hidetoshi, the following patches changed enough, or are new, and should be
looked over again before I can re-add your Reviewed-by.

[patch 04/16] sched: validate CFS quota hierarchies
[patch 06/16] sched: add a timer to handle CFS bandwidth refresh
[patch 07/16] sched: expire invalid runtime
[patch 10/16] sched: throttle entities exceeding their allowed bandwidth
[patch 15/16] sched: return unused runtime on voluntary sleep

Previous postings:
-----------------
v6: http://lkml.org/lkml/2011/5/7/37
v5: http://lkml.org/lkml/2011/3/22/477
v4: http://lkml.org/lkml/2011/2/23/44
v3: http://lkml.org/lkml/2010/10/12/44
v2: http://lkml.org/lkml/2010/4/28/88
Original posting: http://lkml.org/lkml/2010/2/12/393
Prior approaches: http://lkml.org/lkml/2010/1/5/44 ["CFS Hard limits v5"]

Let me know if anything's busted :)

- Paul



2011-06-22 10:05:52

by Hidetoshi Seto

Subject: Re: [patch 00/16] CFS Bandwidth Control v7

(2011/06/21 16:16), Paul Turner wrote:
> Hidetoshi, the following patches changed enough, or are new, and should be
> looked over again before I can re-add your Reviewed-by.
>
> [patch 04/16] sched: validate CFS quota hierarchies
> [patch 06/16] sched: add a timer to handle CFS bandwidth refresh
> [patch 07/16] sched: expire invalid runtime
> [patch 10/16] sched: throttle entities exceeding their allowed bandwidth
> [patch 15/16] sched: return unused runtime on voluntary sleep

Done.

Thank you very much again for your great work!

I'll continue my test/benchmark on this v7 for a while.
Though I believe there are no more bugs, I'll let you know if I find anything.

I think it's about time to get this set into an upstream branch (at first, tip:sched/???).
(How about posting an update with the corrected wording (say, v7.1?) within a few days?)


Thanks,
H.Seto

2011-06-23 12:07:51

by Peter Zijlstra

Subject: Re: [patch 00/16] CFS Bandwidth Control v7

On Wed, 2011-06-22 at 19:05 +0900, Hidetoshi Seto wrote:
> I'll continue my test/benchmark on this v7 for a while.
> Though I believe there are no more bugs, I'll let you know if I find
> anything.

Would that testing include performance of a kernel without these patches
vs one with these patches in a configuration where the new feature is
compiled in but not used?

It does add a number of if (!cfs_rq->runtime_enabled) return branches
all over the place, some possibly inside a function call (depending on
what the auto-inliner does). So while the impact should be minimal, it
would be very good to test it is indeed so.
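
To make that concrete, the pattern being described is roughly the following
(a sketch only, with stand-in types and a made-up body; this is not the code
from the patches):

    #include <stdint.h>

    /* minimal stand-in for the scheduler's per-group runqueue state */
    struct cfs_rq {
            int     runtime_enabled;        /* bandwidth control active? */
            int64_t runtime_remaining;      /* runtime left in this period */
    };

    /*
     * Each bandwidth hook bails out immediately when the group has no
     * bandwidth control, so the compiled-in-but-unused case costs one
     * (well-predicted) branch per call.
     */
    static void account_cfs_rq_runtime(struct cfs_rq *cfs_rq, uint64_t delta_exec)
    {
            if (!cfs_rq->runtime_enabled)
                    return;

            cfs_rq->runtime_remaining -= (int64_t)delta_exec;
            /* ... throttle the group once runtime_remaining is exhausted ... */
    }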

2011-06-23 12:43:43

by Ingo Molnar

Subject: Re: [patch 00/16] CFS Bandwidth Control v7


* Peter Zijlstra <[email protected]> wrote:

> On Wed, 2011-06-22 at 19:05 +0900, Hidetoshi Seto wrote:
>
> > I'll continue my test/benchmark on this v7 for a while. Though I
> > believe there are no more bugs, I'll let you know if I find
> > anything.
>
> Would that testing include performance of a kernel without these
> patches vs one with these patches in a configuration where the new
> feature is compiled in but not used?
>
> It does add a number of if (!cfs_rq->runtime_enabled) return
> branches all over the place, some possibly inside a function call
> (depending on what the auto-inliner does). So while the impact
> should be minimal, it would be very good to test it is indeed so.

Yeah, doing such performance tests is absolutely required. Branches
and instructions impact should be measured as well, beyond the cycles
impact.

The changelog of this recent commit:

c8b281161dfa: sched: Increase SCHED_LOAD_SCALE resolution

gives an example of how to do such measurements.

Thanks,

Ingo

2011-06-24 05:11:55

by Hidetoshi Seto

Subject: Re: [patch 00/16] CFS Bandwidth Control v7

(2011/06/23 21:43), Ingo Molnar wrote:
>
> * Peter Zijlstra <[email protected]> wrote:
>
>> On Wed, 2011-06-22 at 19:05 +0900, Hidetoshi Seto wrote:
>>
>>> I'll continue my test/benchmark on this v7 for a while. Though I
>>> believe there are no more bugs, I'll let you know if I find
>>> anything.
>>
>> Would that testing include performance of a kernel without these
>> patches vs one with these patches in a configuration where the new
>> feature is compiled in but not used?
>>
>> It does add a number of if (!cfs_rq->runtime_enabled) return
>> branches all over the place, some possibly inside a function call
>> (depending on what the auto-inliner does). So while the impact
>> should be minimal, it would be very good to test it is indeed so.
>
> Yeah, doing such performance tests is absolutely required. Branches
> and instructions impact should be measured as well, beyond the cycles
> impact.
>
> The changelog of this recent commit:
>
> c8b281161dfa: sched: Increase SCHED_LOAD_SCALE resolution
>
> gives an example of how to do such measurements.

Thank you for useful guidance!

I've run pipe-test-100k on both a kernel without the patches (3.0-rc4)
and one with the patches (3.0-rc4+), in a similar way to that described
in the changelog you pointed to (but with "-d" added for more details).

I took 4 samples for each: 3 runs with --repeat 10 plus 1 run with
--repeat 200. Cgroups are not used in either case, so of course CFS
bandwidth control is not used in the patched kernel. The results are
archived and attached.

Here is a comparison in diff style:

=====
--- /home/seto/bwc-pipe-test/bwc-rc4-orig.txt 2011-06-24 11:52:16.000000000 +0900
+++ /home/seto/bwc-pipe-test/bwc-rc4-patched.txt 2011-06-24 12:08:32.000000000 +0900
[seto@SIRIUS-F14 perf]$ taskset 1 ./perf stat -d -d -d --repeat 200 ../../../pipe-test-100k

Performance counter stats for '../../../pipe-test-100k' (200 runs):

- 865.139070 task-clock # 0.468 CPUs utilized ( +- 0.22% )
- 200,167 context-switches # 0.231 M/sec ( +- 0.00% )
- 0 CPU-migrations # 0.000 M/sec ( +- 49.62% )
- 142 page-faults # 0.000 M/sec ( +- 0.07% )
- 1,671,107,623 cycles # 1.932 GHz ( +- 0.16% ) [28.23%]
- 838,554,329 stalled-cycles-frontend # 50.18% frontend cycles idle ( +- 0.27% ) [28.21%]
- 453,526,560 stalled-cycles-backend # 27.14% backend cycles idle ( +- 0.43% ) [28.33%]
- 1,434,140,915 instructions # 0.86 insns per cycle
- # 0.58 stalled cycles per insn ( +- 0.06% ) [34.01%]
- 279,485,621 branches # 323.053 M/sec ( +- 0.06% ) [33.98%]
- 6,653,998 branch-misses # 2.38% of all branches ( +- 0.16% ) [33.93%]
- 495,463,378 L1-dcache-loads # 572.698 M/sec ( +- 0.05% ) [28.12%]
- 27,903,270 L1-dcache-load-misses # 5.63% of all L1-dcache hits ( +- 0.28% ) [27.84%]
- 885,210 LLC-loads # 1.023 M/sec ( +- 3.21% ) [21.80%]
- 9,479 LLC-load-misses # 1.07% of all LL-cache hits ( +- 0.63% ) [ 5.61%]
- 830,096,007 L1-icache-loads # 959.494 M/sec ( +- 0.08% ) [11.18%]
- 123,728,370 L1-icache-load-misses # 14.91% of all L1-icache hits ( +- 0.06% ) [16.78%]
- 504,932,490 dTLB-loads # 583.643 M/sec ( +- 0.06% ) [22.30%]
- 2,056,069 dTLB-load-misses # 0.41% of all dTLB cache hits ( +- 2.23% ) [22.20%]
- 1,579,410,083 iTLB-loads # 1825.614 M/sec ( +- 0.06% ) [22.30%]
- 394,739 iTLB-load-misses # 0.02% of all iTLB cache hits ( +- 0.03% ) [22.27%]
- 2,286,363 L1-dcache-prefetches # 2.643 M/sec ( +- 0.72% ) [22.40%]
- 776,096 L1-dcache-prefetch-misses # 0.897 M/sec ( +- 1.45% ) [22.54%]
+ 859.259725 task-clock # 0.472 CPUs utilized ( +- 0.24% )
+ 200,165 context-switches # 0.233 M/sec ( +- 0.00% )
+ 0 CPU-migrations # 0.000 M/sec ( +-100.00% )
+ 142 page-faults # 0.000 M/sec ( +- 0.06% )
+ 1,659,371,974 cycles # 1.931 GHz ( +- 0.18% ) [28.23%]
+ 829,806,955 stalled-cycles-frontend # 50.01% frontend cycles idle ( +- 0.32% ) [28.32%]
+ 490,316,435 stalled-cycles-backend # 29.55% backend cycles idle ( +- 0.46% ) [28.34%]
+ 1,445,166,061 instructions # 0.87 insns per cycle
+ # 0.57 stalled cycles per insn ( +- 0.06% ) [34.01%]
+ 282,370,988 branches # 328.621 M/sec ( +- 0.06% ) [33.93%]
+ 5,056,568 branch-misses # 1.79% of all branches ( +- 0.19% ) [33.94%]
+ 500,660,789 L1-dcache-loads # 582.665 M/sec ( +- 0.06% ) [28.05%]
+ 26,802,313 L1-dcache-load-misses # 5.35% of all L1-dcache hits ( +- 0.26% ) [27.83%]
+ 872,571 LLC-loads # 1.015 M/sec ( +- 3.73% ) [21.82%]
+ 9,050 LLC-load-misses # 1.04% of all LL-cache hits ( +- 0.55% ) [ 5.70%]
+ 794,396,111 L1-icache-loads # 924.512 M/sec ( +- 0.06% ) [11.30%]
+ 130,179,414 L1-icache-load-misses # 16.39% of all L1-icache hits ( +- 0.09% ) [16.85%]
+ 511,119,889 dTLB-loads # 594.837 M/sec ( +- 0.06% ) [22.37%]
+ 2,452,378 dTLB-load-misses # 0.48% of all dTLB cache hits ( +- 2.31% ) [22.14%]
+ 1,597,897,243 iTLB-loads # 1859.621 M/sec ( +- 0.06% ) [22.17%]
+ 394,366 iTLB-load-misses # 0.02% of all iTLB cache hits ( +- 0.03% ) [22.24%]
+ 1,897,401 L1-dcache-prefetches # 2.208 M/sec ( +- 0.64% ) [22.38%]
+ 879,391 L1-dcache-prefetch-misses # 1.023 M/sec ( +- 0.90% ) [22.54%]

- 1.847093132 seconds time elapsed ( +- 0.19% )
+ 1.822131534 seconds time elapsed ( +- 0.21% )
=====

As Peter expected, the number of branches is slightly increased.

- 279,485,621 branches # 323.053 M/sec ( +- 0.06% ) [33.98%]
+ 282,370,988 branches # 328.621 M/sec ( +- 0.06% ) [33.93%]

However, looking at it overall, I think there is no significant problem
with the scores for this patch set. I'd love to hear from the maintainers.


Thanks,
H.Seto


Attachments:
bwc-pipe-test.tar.bz2 (5.00 kB)

2011-06-26 10:35:56

by Ingo Molnar

Subject: Re: [patch 00/16] CFS Bandwidth Control v7


* Hidetoshi Seto <[email protected]> wrote:

> - 865.139070 task-clock # 0.468 CPUs utilized ( +- 0.22% )
> - 200,167 context-switches # 0.231 M/sec ( +- 0.00% )
> - 0 CPU-migrations # 0.000 M/sec ( +- 49.62% )
> - 142 page-faults # 0.000 M/sec ( +- 0.07% )
> - 1,671,107,623 cycles # 1.932 GHz ( +- 0.16% ) [28.23%]
> - 838,554,329 stalled-cycles-frontend # 50.18% frontend cycles idle ( +- 0.27% ) [28.21%]
> - 453,526,560 stalled-cycles-backend # 27.14% backend cycles idle ( +- 0.43% ) [28.33%]
> - 1,434,140,915 instructions # 0.86 insns per cycle
> - # 0.58 stalled cycles per insn ( +- 0.06% ) [34.01%]
> - 279,485,621 branches # 323.053 M/sec ( +- 0.06% ) [33.98%]
> - 6,653,998 branch-misses # 2.38% of all branches ( +- 0.16% ) [33.93%]
> - 495,463,378 L1-dcache-loads # 572.698 M/sec ( +- 0.05% ) [28.12%]
> - 27,903,270 L1-dcache-load-misses # 5.63% of all L1-dcache hits ( +- 0.28% ) [27.84%]
> - 885,210 LLC-loads # 1.023 M/sec ( +- 3.21% ) [21.80%]
> - 9,479 LLC-load-misses # 1.07% of all LL-cache hits ( +- 0.63% ) [ 5.61%]
> - 830,096,007 L1-icache-loads # 959.494 M/sec ( +- 0.08% ) [11.18%]
> - 123,728,370 L1-icache-load-misses # 14.91% of all L1-icache hits ( +- 0.06% ) [16.78%]
> - 504,932,490 dTLB-loads # 583.643 M/sec ( +- 0.06% ) [22.30%]
> - 2,056,069 dTLB-load-misses # 0.41% of all dTLB cache hits ( +- 2.23% ) [22.20%]
> - 1,579,410,083 iTLB-loads # 1825.614 M/sec ( +- 0.06% ) [22.30%]
> - 394,739 iTLB-load-misses # 0.02% of all iTLB cache hits ( +- 0.03% ) [22.27%]
> - 2,286,363 L1-dcache-prefetches # 2.643 M/sec ( +- 0.72% ) [22.40%]
> - 776,096 L1-dcache-prefetch-misses # 0.897 M/sec ( +- 1.45% ) [22.54%]
> + 859.259725 task-clock # 0.472 CPUs utilized ( +- 0.24% )
> + 200,165 context-switches # 0.233 M/sec ( +- 0.00% )
> + 0 CPU-migrations # 0.000 M/sec ( +-100.00% )
> + 142 page-faults # 0.000 M/sec ( +- 0.06% )
> + 1,659,371,974 cycles # 1.931 GHz ( +- 0.18% ) [28.23%]
> + 829,806,955 stalled-cycles-frontend # 50.01% frontend cycles idle ( +- 0.32% ) [28.32%]
> + 490,316,435 stalled-cycles-backend # 29.55% backend cycles idle ( +- 0.46% ) [28.34%]
> + 1,445,166,061 instructions # 0.87 insns per cycle
> + # 0.57 stalled cycles per insn ( +- 0.06% ) [34.01%]
> + 282,370,988 branches # 328.621 M/sec ( +- 0.06% ) [33.93%]
> + 5,056,568 branch-misses # 1.79% of all branches ( +- 0.19% ) [33.94%]
> + 500,660,789 L1-dcache-loads # 582.665 M/sec ( +- 0.06% ) [28.05%]
> + 26,802,313 L1-dcache-load-misses # 5.35% of all L1-dcache hits ( +- 0.26% ) [27.83%]
> + 872,571 LLC-loads # 1.015 M/sec ( +- 3.73% ) [21.82%]
> + 9,050 LLC-load-misses # 1.04% of all LL-cache hits ( +- 0.55% ) [ 5.70%]
> + 794,396,111 L1-icache-loads # 924.512 M/sec ( +- 0.06% ) [11.30%]
> + 130,179,414 L1-icache-load-misses # 16.39% of all L1-icache hits ( +- 0.09% ) [16.85%]
> + 511,119,889 dTLB-loads # 594.837 M/sec ( +- 0.06% ) [22.37%]
> + 2,452,378 dTLB-load-misses # 0.48% of all dTLB cache hits ( +- 2.31% ) [22.14%]
> + 1,597,897,243 iTLB-loads # 1859.621 M/sec ( +- 0.06% ) [22.17%]
> + 394,366 iTLB-load-misses # 0.02% of all iTLB cache hits ( +- 0.03% ) [22.24%]
> + 1,897,401 L1-dcache-prefetches # 2.208 M/sec ( +- 0.64% ) [22.38%]
> + 879,391 L1-dcache-prefetch-misses # 1.023 M/sec ( +- 0.90% ) [22.54%]
>
> - 1.847093132 seconds time elapsed ( +- 0.19% )
> + 1.822131534 seconds time elapsed ( +- 0.21% )
> =====
>
> As Peter expected, the number of branches is slightly increased.
>
> - 279,485,621 branches # 323.053 M/sec ( +- 0.06% ) [33.98%]
> + 282,370,988 branches # 328.621 M/sec ( +- 0.06% ) [33.93%]
>
> However, looking at it overall, I think there is no significant problem
> with the scores for this patch set. I'd love to hear from the maintainers.

Yeah, these numbers look pretty good. Note that the percentages in
the third column (the amount of time that particular event was
measured) are pretty low, and it would be nice to eliminate it: i.e.
now that we know the ballpark figures, do very precise measurements
that do not over-commit the PMU.

One such measurement would be:

-e cycles -e instructions -e branches

This should also bring the stddev percentages down, I think, to below
0.1%.

Another measurement would be to test not just the feature-enabled but
also the feature-disabled cost - so that we document the rough
overhead that users of this new scheduler feature should expect.

Organizing it into neat before/after numbers and percentages,
comparing it with noise (stddev) [i.e. determining that the effect we
measure is above noise] and putting it all into the changelog would
be the other goal of these measurements.
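
(For illustration, applying that criterion to the elapsed-time figures quoted
above: 1.847 s vs. 1.822 s is a difference of about 0.025 s, i.e. roughly
1.4%, while the reported stddev is about 0.2% per kernel, so that particular
difference is above the noise, and here happens to favour the patched kernel.)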

Thanks,

Ingo

2011-06-29 04:06:04

by Hu Tao

Subject: Re: [patch 00/16] CFS Bandwidth Control v7

On Sun, Jun 26, 2011 at 12:35:26PM +0200, Ingo Molnar wrote:
>
> * Hidetoshi Seto <[email protected]> wrote:
>
> > - 865.139070 task-clock # 0.468 CPUs utilized ( +- 0.22% )
> > - 200,167 context-switches # 0.231 M/sec ( +- 0.00% )
> > - 0 CPU-migrations # 0.000 M/sec ( +- 49.62% )
> > - 142 page-faults # 0.000 M/sec ( +- 0.07% )
> > - 1,671,107,623 cycles # 1.932 GHz ( +- 0.16% ) [28.23%]
> > - 838,554,329 stalled-cycles-frontend # 50.18% frontend cycles idle ( +- 0.27% ) [28.21%]
> > - 453,526,560 stalled-cycles-backend # 27.14% backend cycles idle ( +- 0.43% ) [28.33%]
> > - 1,434,140,915 instructions # 0.86 insns per cycle
> > - # 0.58 stalled cycles per insn ( +- 0.06% ) [34.01%]
> > - 279,485,621 branches # 323.053 M/sec ( +- 0.06% ) [33.98%]
> > - 6,653,998 branch-misses # 2.38% of all branches ( +- 0.16% ) [33.93%]
> > - 495,463,378 L1-dcache-loads # 572.698 M/sec ( +- 0.05% ) [28.12%]
> > - 27,903,270 L1-dcache-load-misses # 5.63% of all L1-dcache hits ( +- 0.28% ) [27.84%]
> > - 885,210 LLC-loads # 1.023 M/sec ( +- 3.21% ) [21.80%]
> > - 9,479 LLC-load-misses # 1.07% of all LL-cache hits ( +- 0.63% ) [ 5.61%]
> > - 830,096,007 L1-icache-loads # 959.494 M/sec ( +- 0.08% ) [11.18%]
> > - 123,728,370 L1-icache-load-misses # 14.91% of all L1-icache hits ( +- 0.06% ) [16.78%]
> > - 504,932,490 dTLB-loads # 583.643 M/sec ( +- 0.06% ) [22.30%]
> > - 2,056,069 dTLB-load-misses # 0.41% of all dTLB cache hits ( +- 2.23% ) [22.20%]
> > - 1,579,410,083 iTLB-loads # 1825.614 M/sec ( +- 0.06% ) [22.30%]
> > - 394,739 iTLB-load-misses # 0.02% of all iTLB cache hits ( +- 0.03% ) [22.27%]
> > - 2,286,363 L1-dcache-prefetches # 2.643 M/sec ( +- 0.72% ) [22.40%]
> > - 776,096 L1-dcache-prefetch-misses # 0.897 M/sec ( +- 1.45% ) [22.54%]
> > + 859.259725 task-clock # 0.472 CPUs utilized ( +- 0.24% )
> > + 200,165 context-switches # 0.233 M/sec ( +- 0.00% )
> > + 0 CPU-migrations # 0.000 M/sec ( +-100.00% )
> > + 142 page-faults # 0.000 M/sec ( +- 0.06% )
> > + 1,659,371,974 cycles # 1.931 GHz ( +- 0.18% ) [28.23%]
> > + 829,806,955 stalled-cycles-frontend # 50.01% frontend cycles idle ( +- 0.32% ) [28.32%]
> > + 490,316,435 stalled-cycles-backend # 29.55% backend cycles idle ( +- 0.46% ) [28.34%]
> > + 1,445,166,061 instructions # 0.87 insns per cycle
> > + # 0.57 stalled cycles per insn ( +- 0.06% ) [34.01%]
> > + 282,370,988 branches # 328.621 M/sec ( +- 0.06% ) [33.93%]
> > + 5,056,568 branch-misses # 1.79% of all branches ( +- 0.19% ) [33.94%]
> > + 500,660,789 L1-dcache-loads # 582.665 M/sec ( +- 0.06% ) [28.05%]
> > + 26,802,313 L1-dcache-load-misses # 5.35% of all L1-dcache hits ( +- 0.26% ) [27.83%]
> > + 872,571 LLC-loads # 1.015 M/sec ( +- 3.73% ) [21.82%]
> > + 9,050 LLC-load-misses # 1.04% of all LL-cache hits ( +- 0.55% ) [ 5.70%]
> > + 794,396,111 L1-icache-loads # 924.512 M/sec ( +- 0.06% ) [11.30%]
> > + 130,179,414 L1-icache-load-misses # 16.39% of all L1-icache hits ( +- 0.09% ) [16.85%]
> > + 511,119,889 dTLB-loads # 594.837 M/sec ( +- 0.06% ) [22.37%]
> > + 2,452,378 dTLB-load-misses # 0.48% of all dTLB cache hits ( +- 2.31% ) [22.14%]
> > + 1,597,897,243 iTLB-loads # 1859.621 M/sec ( +- 0.06% ) [22.17%]
> > + 394,366 iTLB-load-misses # 0.02% of all iTLB cache hits ( +- 0.03% ) [22.24%]
> > + 1,897,401 L1-dcache-prefetches # 2.208 M/sec ( +- 0.64% ) [22.38%]
> > + 879,391 L1-dcache-prefetch-misses # 1.023 M/sec ( +- 0.90% ) [22.54%]
> >
> > - 1.847093132 seconds time elapsed ( +- 0.19% )
> > + 1.822131534 seconds time elapsed ( +- 0.21% )
> > =====
> >
> > As Peter expected, the number of branches is slightly increased.
> >
> > - 279,485,621 branches # 323.053 M/sec ( +- 0.06% ) [33.98%]
> > + 282,370,988 branches # 328.621 M/sec ( +- 0.06% ) [33.93%]
> >
> > However, looking at it overall, I think there is no significant problem
> > with the scores for this patch set. I'd love to hear from the maintainers.
>
> Yeah, these numbers look pretty good. Note that the percentages in
> the third column (the amount of time that particular event was
> measured) are pretty low, and it would be nice to eliminate it: i.e.
> now that we know the ballpark figures, do very precise measurements
> that do not over-commit the PMU.
>
> One such measurement would be:
>
> -e cycles -e instructions -e branches
>
> This should also bring the stddev percentages down, I think, to below
> 0.1%.
>
> Another measurement would be to test not just the feature-enabled but
> also the feature-disabled cost - so that we document the rough
> overhead that users of this new scheduler feature should expect.
>
> Organizing it into neat before/after numbers and percentages,
> comparing it with noise (stddev) [i.e. determining that the effect we
> measure is above noise] and putting it all into the changelog would
> be the other goal of these measurements.

Hi Ingo,

I've tested pipe-test-100k in the following cases: base (no patch), with
the patch but the feature disabled, and with the patch and several periods
(quota set to a large value to avoid processes being throttled; a sketch of
such a configuration follows the results table). The results are:


                                            cycles                  instructions            branches
--------------------------------------------------------------------------------------------------------------
base                                        7,526,317,497           8,666,579,347           1,771,078,445
+patch, cgroup not enabled                  7,610,354,447 ( 1.12%)  8,569,448,982 (-1.12%)  1,751,675,193 (-1.10%)
+patch, 10000000000/1000 (quota/period)     7,856,873,327 ( 4.39%)  8,822,227,540 ( 1.80%)  1,801,766,182 ( 1.73%)
+patch, 10000000000/10000 (quota/period)    7,797,711,600 ( 3.61%)  8,754,747,746 ( 1.02%)  1,788,316,969 ( 0.97%)
+patch, 10000000000/100000 (quota/period)   7,777,784,384 ( 3.34%)  8,744,979,688 ( 0.90%)  1,786,319,566 ( 0.86%)
+patch, 10000000000/1000000 (quota/period)  7,802,382,802 ( 3.67%)  8,755,638,235 ( 1.03%)  1,788,601,070 ( 0.99%)
--------------------------------------------------------------------------------------------------------------
(percentages in parentheses are deltas relative to base)
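
A sketch of how one such quota/period pair might be configured, assuming the
values above map onto the cpu cgroup's cpu.cfs_quota_us / cpu.cfs_period_us
files (in microseconds) and assuming a hypothetical cgroup mount point and
group name; this is not the actual test setup used above:

    #include <stdio.h>

    /* write a single integer value to a cgroup control file */
    static int write_u64(const char *path, unsigned long long val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fprintf(f, "%llu\n", val);
            return fclose(f);
    }

    int main(void)
    {
            /* hypothetical cpu-cgroup group created beforehand */
            const char *grp = "/sys/fs/cgroup/cpu/bwc-test";
            char path[256];

            snprintf(path, sizeof(path), "%s/cpu.cfs_period_us", grp);
            if (write_u64(path, 1000ULL))            /* 1 ms period */
                    return 1;

            snprintf(path, sizeof(path), "%s/cpu.cfs_quota_us", grp);
            if (write_u64(path, 10000000000ULL))     /* huge quota: never throttled */
                    return 1;

            return 0;
    }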



These are the original outputs from perf.

base
--------------
Performance counter stats for './pipe-test-100k' (50 runs):

3834.623919 task-clock # 0.576 CPUs utilized ( +- 0.04% )
200,009 context-switches # 0.052 M/sec ( +- 0.00% )
0 CPU-migrations # 0.000 M/sec ( +- 48.45% )
135 page-faults # 0.000 M/sec ( +- 0.12% )
7,526,317,497 cycles # 1.963 GHz ( +- 0.07% )
2,672,526,467 stalled-cycles-frontend # 35.51% frontend cycles idle ( +- 0.14% )
1,157,897,108 stalled-cycles-backend # 15.38% backend cycles idle ( +- 0.29% )
8,666,579,347 instructions # 1.15 insns per cycle
# 0.31 stalled cycles per insn ( +- 0.04% )
1,771,078,445 branches # 461.865 M/sec ( +- 0.04% )
35,159,140 branch-misses # 1.99% of all branches ( +- 0.11% )

6.654770337 seconds time elapsed ( +- 0.02% )



+patch, cpu cgroup not enabled
------------------------------
Performance counter stats for './pipe-test-100k' (50 runs):

3872.071268 task-clock # 0.577 CPUs utilized ( +- 0.10% )
200,009 context-switches # 0.052 M/sec ( +- 0.00% )
0 CPU-migrations # 0.000 M/sec ( +- 69.99% )
135 page-faults # 0.000 M/sec ( +- 0.17% )
7,610,354,447 cycles # 1.965 GHz ( +- 0.11% )
2,792,310,881 stalled-cycles-frontend # 36.69% frontend cycles idle ( +- 0.17% )
1,268,428,999 stalled-cycles-backend # 16.67% backend cycles idle ( +- 0.33% )
8,569,448,982 instructions # 1.13 insns per cycle
# 0.33 stalled cycles per insn ( +- 0.10% )
1,751,675,193 branches # 452.387 M/sec ( +- 0.09% )
36,605,163 branch-misses # 2.09% of all branches ( +- 0.12% )

6.707220617 seconds time elapsed ( +- 0.05% )



+patch, 10000000000/1000(quota/period)
--------------------------------------
Performance counter stats for './pipe-test-100k' (50 runs):

3973.982673 task-clock # 0.583 CPUs utilized ( +- 0.09% )
200,010 context-switches # 0.050 M/sec ( +- 0.00% )
0 CPU-migrations # 0.000 M/sec ( +-100.00% )
135 page-faults # 0.000 M/sec ( +- 0.14% )
7,856,873,327 cycles # 1.977 GHz ( +- 0.10% )
2,903,700,355 stalled-cycles-frontend # 36.96% frontend cycles idle ( +- 0.14% )
1,310,151,837 stalled-cycles-backend # 16.68% backend cycles idle ( +- 0.33% )
8,822,227,540 instructions # 1.12 insns per cycle
# 0.33 stalled cycles per insn ( +- 0.08% )
1,801,766,182 branches # 453.391 M/sec ( +- 0.08% )
37,784,995 branch-misses # 2.10% of all branches ( +- 0.14% )

6.821678535 seconds time elapsed ( +- 0.05% )



+patch, 10000000000/10000(quota/period)
---------------------------------------
Performance counter stats for './pipe-test-100k' (50 runs):

3948.074074 task-clock # 0.581 CPUs utilized ( +- 0.11% )
200,009 context-switches # 0.051 M/sec ( +- 0.00% )
0 CPU-migrations # 0.000 M/sec ( +- 69.99% )
135 page-faults # 0.000 M/sec ( +- 0.20% )
7,797,711,600 cycles # 1.975 GHz ( +- 0.12% )
2,881,224,123 stalled-cycles-frontend # 36.95% frontend cycles idle ( +- 0.18% )
1,294,534,443 stalled-cycles-backend # 16.60% backend cycles idle ( +- 0.40% )
8,754,747,746 instructions # 1.12 insns per cycle
# 0.33 stalled cycles per insn ( +- 0.10% )
1,788,316,969 branches # 452.959 M/sec ( +- 0.09% )
37,619,798 branch-misses # 2.10% of all branches ( +- 0.17% )

6.792410565 seconds time elapsed ( +- 0.05% )



+patch, 10000000000/100000(quota/period)
----------------------------------------
Performance counter stats for './pipe-test-100k' (50 runs):

3943.323261 task-clock # 0.581 CPUs utilized ( +- 0.10% )
200,009 context-switches # 0.051 M/sec ( +- 0.00% )
0 CPU-migrations # 0.000 M/sec ( +- 56.54% )
135 page-faults # 0.000 M/sec ( +- 0.24% )
7,777,784,384 cycles # 1.972 GHz ( +- 0.12% )
2,869,653,004 stalled-cycles-frontend # 36.90% frontend cycles idle ( +- 0.19% )
1,278,100,561 stalled-cycles-backend # 16.43% backend cycles idle ( +- 0.37% )
8,744,979,688 instructions # 1.12 insns per cycle
# 0.33 stalled cycles per insn ( +- 0.10% )
1,786,319,566 branches # 452.999 M/sec ( +- 0.09% )
37,514,727 branch-misses # 2.10% of all branches ( +- 0.14% )

6.790280499 seconds time elapsed ( +- 0.06% )



+patch, 10000000000/1000000(quota/period)
----------------------------------------
Performance counter stats for './pipe-test-100k' (50 runs):

3951.215042 task-clock # 0.582 CPUs utilized ( +- 0.09% )
200,009 context-switches # 0.051 M/sec ( +- 0.00% )
0 CPU-migrations # 0.000 M/sec ( +- 0.00% )
135 page-faults # 0.000 M/sec ( +- 0.20% )
7,802,382,802 cycles # 1.975 GHz ( +- 0.12% )
2,884,487,463 stalled-cycles-frontend # 36.97% frontend cycles idle ( +- 0.17% )
1,297,073,308 stalled-cycles-backend # 16.62% backend cycles idle ( +- 0.35% )
8,755,638,235 instructions # 1.12 insns per cycle
# 0.33 stalled cycles per insn ( +- 0.11% )
1,788,601,070 branches # 452.671 M/sec ( +- 0.11% )
37,649,606 branch-misses # 2.10% of all branches ( +- 0.15% )

6.794033052 seconds time elapsed ( +- 0.06% )