The per-cpu preempt count of x86 contains two values: the actual preempt
count and the inverted PREEMPT_NEED_RESCHED bit. If a corrupted preempt
count is detected, the preempt_count_set() function is used to reset the
preempt count.

If the inverted PREEMPT_NEED_RESCHED bit is zero at the time of the
reset, the preemption indication is lost. Use raw_cpu_cmpxchg_4() to reset
only the count part and leave the PREEMPT_NEED_RESCHED bit as it is.
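
For illustration, here is a standalone userspace sketch of the masking the
patch applies on reset. The PREEMPT_NEED_RESCHED value is assumed to be the
x86 one (bit 31) and the example counter contents are made up, so this is a
sketch of the idea rather than kernel code:

#include <stdio.h>
#include <stdint.h>

#define PREEMPT_NEED_RESCHED 0x80000000u  /* inverted: bit clear => resched needed */

/* old behaviour: blind write, the bit state comes from pc */
static uint32_t reset_blind(uint32_t cur, uint32_t pc)
{
	(void)cur;
	return pc;
}

/* new behaviour: keep the stored bit, take only the count from pc */
static uint32_t reset_keep_bit(uint32_t cur, uint32_t pc)
{
	return (cur & PREEMPT_NEED_RESCHED) | (pc & ~PREEMPT_NEED_RESCHED);
}

int main(void)
{
	uint32_t cur = 0x00000002;               /* leaked count of 2, resched pending (bit clear) */
	uint32_t pc  = PREEMPT_NEED_RESCHED | 1; /* reset value: count 1, bit set (no resched) */

	printf("blind write   : %#010x (pending resched lost)\n", reset_blind(cur, pc));
	printf("masked update : %#010x (bit stays clear, resched kept)\n", reset_keep_bit(cur, pc));
	return 0;
}

Running the sketch prints 0x80000001 for the blind write and 0x00000001 for
the masked update, i.e. the lost-versus-preserved reschedule indication the
commit message describes.
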
Signed-off-by: Martin Schwidefsky <[email protected]>
---
arch/x86/include/asm/preempt.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 17f2186..ec1f3c6 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -24,7 +24,13 @@ static __always_inline int preempt_count(void)
static __always_inline void preempt_count_set(int pc)
{
- raw_cpu_write_4(__preempt_count, pc);
+ int old, new;
+
+ do {
+ old = raw_cpu_read_4(__preempt_count);
+ new = (old & PREEMPT_NEED_RESCHED) |
+ (pc & ~PREEMPT_NEED_RESCHED);
+ } while (raw_cpu_cmpxchg_4(__preempt_count, old, new) != old);
}
/*
--
1.9.1
On Mon, Nov 07, 2016 at 02:01:00PM +0100, Martin Schwidefsky wrote:
> The per-cpu preempt count of x86 contains two values, the actual preempt
> count and the inverted PREEMPT_NEED_RESCHED bit. If a corrupted preempt
> count is detected the preempt_count_set function is used to reset the
> preempt count.
>
> In case the inverted PREEMPT_NEED_RESCHED bit is zero at the time of the
> reset, the preemption indication is lost. Use raw_cpu_cmpxchg_4 to reset
> only the count part and leave the PREEMPT_NEED_RESCHED bit as it is.
>
> Signed-off-by: Martin Schwidefsky <[email protected]>
Thanks Martin!
I don't suppose this really hurts too much; if we hit a warn where this
restore is needed there are bigger problems, but given you've done the
patch already and it does improve consistency, I've taken it.
> ---
> arch/x86/include/asm/preempt.h | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
> index 17f2186..ec1f3c6 100644
> --- a/arch/x86/include/asm/preempt.h
> +++ b/arch/x86/include/asm/preempt.h
> @@ -24,7 +24,13 @@ static __always_inline int preempt_count(void)
>
> static __always_inline void preempt_count_set(int pc)
> {
> - raw_cpu_write_4(__preempt_count, pc);
> + int old, new;
> +
> + do {
> + old = raw_cpu_read_4(__preempt_count);
> + new = (old & PREEMPT_NEED_RESCHED) |
> + (pc & ~PREEMPT_NEED_RESCHED);
> + } while (raw_cpu_cmpxchg_4(__preempt_count, old, new) != old);
> }
>
> /*
> --
> 1.9.1
>
Commit-ID: f285144f81e814f39342dbf5321d6ba939890b1b
Gitweb: http://git.kernel.org/tip/f285144f81e814f39342dbf5321d6ba939890b1b
Author: Martin Schwidefsky <[email protected]>
AuthorDate: Mon, 7 Nov 2016 14:01:00 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 16 Nov 2016 10:29:04 +0100
sched/x86: Do not clear PREEMPT_NEED_RESCHED on preempt count reset
The per-cpu preempt count of x86 contains two values: the actual preempt
count and the inverted PREEMPT_NEED_RESCHED bit. If a corrupted preempt
count is detected, the preempt_count_set() function is used to reset the
preempt count.

If the inverted PREEMPT_NEED_RESCHED bit is zero at the time of the
reset, the preemption indication is lost. Use raw_cpu_cmpxchg_4() to reset
only the count part and leave the PREEMPT_NEED_RESCHED bit as it is.
This improves the kernel's behavior when it runs into preempt count leaks
and tries to fix them up.
Signed-off-by: Martin Schwidefsky <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/include/asm/preempt.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 17f2186..ec1f3c6 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -24,7 +24,13 @@ static __always_inline int preempt_count(void)
static __always_inline void preempt_count_set(int pc)
{
- raw_cpu_write_4(__preempt_count, pc);
+ int old, new;
+
+ do {
+ old = raw_cpu_read_4(__preempt_count);
+ new = (old & PREEMPT_NEED_RESCHED) |
+ (pc & ~PREEMPT_NEED_RESCHED);
+ } while (raw_cpu_cmpxchg_4(__preempt_count, old, new) != old);
}
/*
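
Background on why the bit is stored inverted rather than as a normal flag:
with the inverted encoding, "preempt count dropped to zero and a reschedule
is pending" collapses to "the whole word is zero", so the preempt-enable hot
path needs only a single decrement-and-test. A minimal userspace sketch of
that test, again with the x86 bit value and made-up contents as assumptions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PREEMPT_NEED_RESCHED 0x80000000u  /* bit set = no resched wanted */

/* one test covers both "count reached zero" and "resched pending" */
static bool dec_and_test(uint32_t *count)
{
	return --(*count) == 0;
}

int main(void)
{
	uint32_t quiet   = 1 | PREEMPT_NEED_RESCHED; /* count 1, nothing pending */
	uint32_t pending = 1;                        /* count 1, resched pending (bit clear) */

	printf("no resched pending -> preempt on enable? %d\n", dec_and_test(&quiet));   /* prints 0 */
	printf("resched pending    -> preempt on enable? %d\n", dec_and_test(&pending)); /* prints 1 */
	return 0;
}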