2021-03-04 18:54:27

by Nick Desaulniers

Subject: Re: [PATCH v2] sched: Optimize __calc_delta.

On Wed, Mar 3, 2021 at 2:48 PM Josh Don <[email protected]> wrote:
>
> From: Clement Courbet <[email protected]>
>
> A significant portion of __calc_delta() time is spent in the loop
> shifting a u64 by 32 bits. Use `fls` instead of iterating.
>
> This is ~7x faster on benchmarks.
>
> The generic `fls` implementation (`generic_fls`) is still ~4x faster
> than the loop. Architectures that have a better implementation will
> make use of it. For example, on x86 we get an additional factor 2 in
> speed without dedicated implementation effort.
>
> On gcc, the asm versions of `fls` are about the same speed as the
> builtin. On clang, the versions that use `fls` are more than twice as
> slow as the builtin. This is because of the way the `fls` function is
> written: clang puts the value in memory (https://godbolt.org/z/EfMbYe).
> This bug is filed at https://bugs.llvm.org/show_bug.cgi?id=49406.

Hi Josh, thanks for helping get this patch across the finish line.
Would you mind updating the commit message to point to
https://bugs.llvm.org/show_bug.cgi?id=20197 instead?

>
> ```
> name                                   cpu/op
> BM_Calc<__calc_delta_loop>             9.57ms ±12%
> BM_Calc<__calc_delta_generic_fls>      2.36ms ±13%
> BM_Calc<__calc_delta_asm_fls>          2.45ms ±13%
> BM_Calc<__calc_delta_asm_fls_nomem>    1.66ms ±12%
> BM_Calc<__calc_delta_asm_fls64>        2.46ms ±13%
> BM_Calc<__calc_delta_asm_fls64_nomem>  1.34ms ±15%
> BM_Calc<__calc_delta_builtin>          1.32ms ±11%
> ```
>
> Signed-off-by: Clement Courbet <[email protected]>
> Signed-off-by: Josh Don <[email protected]>
> ---
>  kernel/sched/fair.c  | 19 +++++++++++--------
>  kernel/sched/sched.h |  1 +
>  2 files changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8a8bd7b13634..a691371960ae 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -229,22 +229,25 @@ static void __update_inv_weight(struct load_weight *lw)
>  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
>  {
>  	u64 fact = scale_load_down(weight);
> +	u32 fact_hi = (u32)(fact >> 32);
>  	int shift = WMULT_SHIFT;
> +	int fs;
>
>  	__update_inv_weight(lw);
>
> -	if (unlikely(fact >> 32)) {
> -		while (fact >> 32) {
> -			fact >>= 1;
> -			shift--;
> -		}
> +	if (unlikely(fact_hi)) {
> +		fs = fls(fact_hi);
> +		shift -= fs;
> +		fact >>= fs;
>  	}
>
>  	fact = mul_u32_u32(fact, lw->inv_weight);
>
> -	while (fact >> 32) {
> -		fact >>= 1;
> -		shift--;
> +	fact_hi = (u32)(fact >> 32);
> +	if (fact_hi) {
> +		fs = fls(fact_hi);
> +		shift -= fs;
> +		fact >>= fs;
>  	}
>
>  	return mul_u64_u32_shr(delta_exec, fact, shift);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 10a1522b1e30..714af71cf983 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -36,6 +36,7 @@
>  #include <uapi/linux/sched/types.h>
>
>  #include <linux/binfmts.h>
> +#include <linux/bitops.h>

This hunk of the patch is curious. I assume that bitops.h is needed
for fls(); if so, why not #include it in kernel/sched/fair.c?
Otherwise this potentially hurts compile time for all TUs that include
kernel/sched/sched.h.

>  #include <linux/blkdev.h>
>  #include <linux/compat.h>
>  #include <linux/context_tracking.h>
> --
> 2.30.1.766.gb4fecdf3b7-goog
>


--
Thanks,
~Nick Desaulniers
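
The core of the patch is that the bit-at-a-time normalization loop can
be collapsed into a single shift by the index fls() reports. Here is a
minimal userspace sketch of that equivalence; the function and variable
names (fls32, normalize_loop, normalize_fls) are illustrative, with the
kernel's fls() emulated via a compiler builtin rather than
<linux/bitops.h>:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's fls(): position of the most
 * significant set bit, 1-based; 0 when no bit is set. */
static int fls32(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Before the patch: shift one bit at a time until the high word clears. */
static uint64_t normalize_loop(uint64_t fact, int *shift)
{
	while (fact >> 32) {
		fact >>= 1;
		(*shift)--;
	}
	return fact;
}

/* After the patch: compute the whole shift at once. If the high word's
 * MSB sits at 1-based position p, the u64's top bit is at bit 31 + p,
 * so shifting right by exactly p = fls32(fact_hi) clears bits 32..63. */
static uint64_t normalize_fls(uint64_t fact, int *shift)
{
	uint32_t fact_hi = (uint32_t)(fact >> 32);

	if (fact_hi) {
		int fs = fls32(fact_hi);

		*shift -= fs;
		fact >>= fs;
	}
	return fact;
}

int main(void)
{
	uint64_t fact = 0x1234567890abcdefULL;
	int s1 = 32, s2 = 32;

	/* Both variants must agree on the normalized value and shift. */
	assert(normalize_loop(fact, &s1) == normalize_fls(fact, &s2));
	assert(s1 == s2);
	return 0;
}
```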


2021-03-04 19:09:46

by Sedat Dilek

Subject: Re: [PATCH v2] sched: Optimize __calc_delta.

On Thu, Mar 4, 2021 at 6:34 PM 'Nick Desaulniers' via Clang Built
Linux <[email protected]> wrote:
>
> On Wed, Mar 3, 2021 at 2:48 PM Josh Don <[email protected]> wrote:
> >
> > [...]
> >
> > >  #include <linux/binfmts.h>
> > > +#include <linux/bitops.h>
>
> This hunk of the patch is curious. I assume that bitops.h is needed
> for fls(); if so, why not #include it in kernel/sched/fair.c?
> Otherwise this potentially hurts compile time for all TUs that include
> kernel/sched/sched.h.
>

I have v2 as-is in my custom patchset, and it is booted right now on
bare metal.

As Nick points out, moving the include makes sense to me. We have a
lot of includes in the wrong places, increasing build time.

- Sedat -


2021-03-04 19:26:23

by Sedat Dilek

Subject: Re: [PATCH v2] sched: Optimize __calc_delta.

On Thu, Mar 4, 2021 at 7:24 PM Sedat Dilek <[email protected]> wrote:
>
> On Thu, Mar 4, 2021 at 6:34 PM 'Nick Desaulniers' via Clang Built
> Linux <[email protected]> wrote:
> >
> > [...]
> >
> > This hunk of the patch is curious. I assume that bitops.h is needed
> > for fls(); if so, why not #include it in kernel/sched/fair.c?
> > Otherwise this potentially hurts compile time for all TUs that include
> > kernel/sched/sched.h.
>
> I have v2 as-is in my custom patchset, and it is booted right now on
> bare metal.
>
> As Nick points out, moving the include makes sense to me. We have a
> lot of includes in the wrong places, increasing build time.
>

I tried with the attached patch.

$ LC_ALL=C ll kernel/sched/fair.o
-rw-r--r-- 1 dileks dileks 1.2M Mar 4 20:11 kernel/sched/fair.o

- Sedat -


Attachments:
0001-sched-fair-Move-include-after-__calc_delta-optimizat.patch (1.09 kB)
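
The attachment itself is not inlined in the archive. Judging by its
filename and the discussion above, it presumably just moves the include
out of sched.h and into fair.c, along the lines of the following sketch
(a reconstruction for illustration only, with approximate hunk
positions; not the actual attachment):

```
(illustrative reconstruction; not the actual attached patch)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -21,6 +21,8 @@
  */
 #include "sched.h"

+#include <linux/bitops.h>
+
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -36,7 +36,6 @@
 #include <linux/binfmts.h>
-#include <linux/bitops.h>
 #include <linux/blkdev.h>
```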

2021-03-05 01:05:35

by Josh Don

Subject: Re: [PATCH v2] sched: Optimize __calc_delta.

On Thu, Mar 4, 2021 at 9:34 AM Nick Desaulniers <[email protected]> wrote:
>
>
> Hi Josh, thanks for helping get this patch across the finish line.
> Would you mind updating the commit message to point to
> https://bugs.llvm.org/show_bug.cgi?id=20197 instead?

Sure thing, just saw that it got marked as a dup.

Peter, since you've already pulled the patch, can you modify the
commit message directly? Nick also recommended dropping the
punctuation in the commit oneline.

> >  #include <linux/binfmts.h>
> > +#include <linux/bitops.h>
>
> This hunk of the patch is curious. I assume that bitops.h is needed
> for fls(); if so, why not #include it in kernel/sched/fair.c?
> Otherwise this potentially hurts compile time for all TUs that include
> kernel/sched/sched.h.

bitops.h is already pulled into sched.h via another include, so this
was just meant to make the dependency explicit. The motivation for
putting it here rather than in fair.c was commit 325ea10c080940.

2021-03-05 17:15:34

by David Laight

Subject: RE: [PATCH v2] sched: Optimize __calc_delta.

> Hi Josh, thanks for helping get this patch across the finish line.
> Would you mind updating the commit message to point to
> https://bugs.llvm.org/show_bug.cgi?id=20197 instead?

Is it worth an audit of all the asm() constraints
and potentially changing all the "mr" to "r" for clang?

The explicit 'load into a register' won't make much
difference even if a direct "m" operand could be used on x86.

David
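
For context: the x86 fls() at issue takes its input with an "rm"-style
constraint, which lets the compiler choose a memory operand, and the
audit David is floating would pin it to "r". A userspace, x86-only
sketch of the two variants, modeled on the shape of the kernel's x86
fls() but not a submitted patch (fls_rm and fls_r are illustrative
names):

```c
#include <assert.h>

/* "rm": the compiler may supply x in a register or directly from
 * memory. Clang tends to pick the memory alternative and spill the
 * value to the stack first (https://bugs.llvm.org/show_bug.cgi?id=20197).
 * The "0" (-1) input ties r to -1, relying on bsr leaving the
 * destination unchanged for a zero input, so fls(0) == 0. */
static inline int fls_rm(unsigned int x)
{
	int r;

	asm("bsrl %1,%0" : "=r" (r) : "rm" (x), "0" (-1));
	return r + 1;
}

/* "r": force a register operand. On x86 the extra register load is
 * cheap compared to a stack spill and reload. */
static inline int fls_r(unsigned int x)
{
	int r;

	asm("bsrl %1,%0" : "=r" (r) : "r" (x), "0" (-1));
	return r + 1;
}

int main(void)
{
	/* Both encodings behave identically; only the codegen differs. */
	assert(fls_rm(0) == 0 && fls_r(0) == 0);
	assert(fls_rm(1) == 1 && fls_r(1) == 1);
	assert(fls_rm(0x80000000u) == 32 && fls_r(0x80000000u) == 32);
	return 0;
}
```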
