2019-11-27 08:43:01

by Zhenzhong Duan

Subject: [PATCH] sched/clock: use static_branch_likely() check at sched_clock_running

sched_clock_running is enabled early during the bootup stage and never
disabled. So hint that to the compiler by using static_branch_likely()
rather than static_branch_unlikely().

Fixes: 46457ea464f5 ("sched/clock: Use static key for sched_clock_running")
Signed-off-by: Zhenzhong Duan <[email protected]>
---
kernel/sched/clock.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index 1152259..12bca64 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -370,7 +370,7 @@ u64 sched_clock_cpu(int cpu)
if (sched_clock_stable())
return sched_clock() + __sched_clock_offset;

- if (!static_branch_unlikely(&sched_clock_running))
+ if (!static_branch_likely(&sched_clock_running))
return sched_clock();

preempt_disable_notrace();
@@ -393,7 +393,7 @@ void sched_clock_tick(void)
if (sched_clock_stable())
return;

- if (!static_branch_unlikely(&sched_clock_running))
+ if (!static_branch_likely(&sched_clock_running))
return;

lockdep_assert_irqs_disabled();
@@ -460,7 +460,7 @@ void __init sched_clock_init(void)

u64 sched_clock_cpu(int cpu)
{
- if (!static_branch_unlikely(&sched_clock_running))
+ if (!static_branch_likely(&sched_clock_running))
return 0;

return sched_clock();
--
1.8.3.1


2019-11-27 15:17:17

by Steven Rostedt

Subject: Re: [PATCH] sched/clock: use static_branch_likely() check at sched_clock_running

On Wed, 27 Nov 2019 16:37:28 +0800
Zhenzhong Duan <[email protected]> wrote:

> sched_clock_running is enabled early during the bootup stage and never
> disabled. So hint that to the compiler by using static_branch_likely()
> rather than static_branch_unlikely().

Looks like the confusion was the moving of the "!":

- if (unlikely(!sched_clock_running))
+ if (!static_branch_unlikely(&sched_clock_running))

Where, it was unlikely that !sched_clock_running would be true, but
because the "!" was moved outside the "unlikely()" it makes the test
"likely()". That is, if we added an intermediate step, it would have
been:

if (!likely(sched_clock_running))

which would have prevented the mistake that this patch fixes.
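
For reference, here is a minimal, self-contained sketch of the layout
difference, using a hypothetical key and functions (my_key / my_init /
my_read, not the actual kernel/sched/clock.c code) and the standard
jump-label API:

#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(my_key);

/* Called once during boot; the key then stays enabled for good. */
static void my_init(void)
{
	static_branch_enable(&my_key);
}

static int my_read(void)
{
	/*
	 * static_branch_likely() lays out the "key enabled" side as the
	 * straight-line (fall-through) path, which is the hot path here
	 * because the key stays enabled for the whole life of the system
	 * after boot.  static_branch_unlikely() would instead make the
	 * "key disabled" side the fall-through, so every post-boot call
	 * would pay an extra jump.
	 */
	if (!static_branch_likely(&my_key))
		return 0;	/* early boot: not initialised yet */

	return 1;		/* common case after boot */
}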

Reviewed-by: Steven Rostedt (VMware) <[email protected]>

-- Steve

>
> Fixes: 46457ea464f5 ("sched/clock: Use static key for sched_clock_running")
> Signed-off-by: Zhenzhong Duan <[email protected]>
> ---
> kernel/sched/clock.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> index 1152259..12bca64 100644
> --- a/kernel/sched/clock.c
> +++ b/kernel/sched/clock.c
> @@ -370,7 +370,7 @@ u64 sched_clock_cpu(int cpu)
> if (sched_clock_stable())
> return sched_clock() + __sched_clock_offset;
>
> - if (!static_branch_unlikely(&sched_clock_running))
> + if (!static_branch_likely(&sched_clock_running))
> return sched_clock();
>
> preempt_disable_notrace();
> @@ -393,7 +393,7 @@ void sched_clock_tick(void)
> if (sched_clock_stable())
> return;
>
> - if (!static_branch_unlikely(&sched_clock_running))
> + if (!static_branch_likely(&sched_clock_running))
> return;
>
> lockdep_assert_irqs_disabled();
> @@ -460,7 +460,7 @@ void __init sched_clock_init(void)
>
> u64 sched_clock_cpu(int cpu)
> {
> - if (!static_branch_unlikely(&sched_clock_running))
> + if (!static_branch_likely(&sched_clock_running))
> return 0;
>
> return sched_clock();

Subject: [tip: sched/urgent] sched/clock: Use static_branch_likely() with sched_clock_running

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: c5105d764e0214bcc4c6d40d7ba231d01b2e9dda
Gitweb: https://git.kernel.org/tip/c5105d764e0214bcc4c6d40d7ba231d01b2e9dda
Author: Zhenzhong Duan <[email protected]>
AuthorDate: Wed, 27 Nov 2019 16:37:28 +08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Fri, 29 Nov 2019 08:10:54 +01:00

sched/clock: Use static_branch_likely() with sched_clock_running

sched_clock_running is enabled early during the bootup stage and never
disabled. So hint that to the compiler by using static_branch_likely()
rather than static_branch_unlikely().

The branch probability mis-annotation was introduced in the original
commit that converted the plain sched_clock_running flag to a static key:

46457ea464f5 ("sched/clock: Use static key for sched_clock_running")

Steve further notes:

| Looks like the confusion was the moving of the "!":
|
| - if (unlikely(!sched_clock_running))
| + if (!static_branch_unlikely(&sched_clock_running))
|
| Where, it was unlikely that !sched_clock_running would be true, but
| because the "!" was moved outside the "unlikely()" it makes the test
| "likely()". That is, if we added an intermediate step, it would have
| been:
|
| if (!likely(sched_clock_running))
|
| which would have prevented the mistake that this patch fixes.

[ mingo: Edited the changelog. ]

Signed-off-by: Zhenzhong Duan <[email protected]>
Reviewed-by: Steven Rostedt (VMware) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/clock.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index 1152259..12bca64 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -370,7 +370,7 @@ u64 sched_clock_cpu(int cpu)
if (sched_clock_stable())
return sched_clock() + __sched_clock_offset;

- if (!static_branch_unlikely(&sched_clock_running))
+ if (!static_branch_likely(&sched_clock_running))
return sched_clock();

preempt_disable_notrace();
@@ -393,7 +393,7 @@ void sched_clock_tick(void)
if (sched_clock_stable())
return;

- if (!static_branch_unlikely(&sched_clock_running))
+ if (!static_branch_likely(&sched_clock_running))
return;

lockdep_assert_irqs_disabled();
@@ -460,7 +460,7 @@ void __init sched_clock_init(void)

u64 sched_clock_cpu(int cpu)
{
- if (!static_branch_unlikely(&sched_clock_running))
+ if (!static_branch_likely(&sched_clock_running))
return 0;

return sched_clock();