2016-03-21 23:20:20

by Davidlohr Bueso

Subject: [PATCH -tip] x86/mce: Use atomic_inc_return barrier when starting monarch sync

mce_start() has an explicit smp_wmb() to serialize writes to
global_nwo and mce_callin. However, atomic_inc_return() implies
full barriers on both sides of the call; as such, simply rely on
this implicit full barrier.
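
As an illustrative aside (not part of the patch), the ordering argument
can be sketched in userspace with C11 atomics; global_nwo_demo,
callin_demo and demo_start() below are hypothetical stand-ins for
global_nwo, mce_callin and mce_start(), and a seq_cst atomic_fetch_add()
plays the role of the fully ordered atomic_inc_return():

#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-ins for global_nwo and mce_callin. */
static atomic_int global_nwo_demo;
static atomic_int callin_demo;

static int demo_start(int no_way_out)
{
        atomic_fetch_add(&global_nwo_demo, no_way_out);

        /*
         * No explicit write barrier is needed here: the seq_cst
         * fetch-add below is fully ordered, so any thread that sees
         * the incremented callin_demo value also sees the update to
         * global_nwo_demo above (analogous to atomic_inc_return()
         * implying full barriers on both sides).
         */
        return atomic_fetch_add(&callin_demo, 1) + 1;
}

int main(void)
{
        printf("callin order: %d\n", demo_start(1));
        return 0;
}

Any thread that observes the incremented callin_demo value is thus also
guaranteed to observe the earlier update of global_nwo_demo, which is
the property the removed smp_wmb() was providing.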

Signed-off-by: Davidlohr Bueso <[email protected]>
---
arch/x86/kernel/cpu/mcheck/mce.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index f0c921b03e42..6b7039c166b8 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -830,9 +830,9 @@ static int mce_start(int *no_way_out)

 	atomic_add(*no_way_out, &global_nwo);
 	/*
-	 * global_nwo should be updated before mce_callin
+	 * Rely on the implied barrier below, such that global_nwo
+	 * is updated before mce_callin.
 	 */
-	smp_wmb();
 	order = atomic_inc_return(&mce_callin);
 
 	/*
--
2.1.4


2016-03-23 08:30:29

by Borislav Petkov

Subject: Re: [PATCH -tip] x86/mce: Use atomic_inc_return barrier when starting monarch sync

On Mon, Mar 21, 2016 at 04:19:56PM -0700, Davidlohr Bueso wrote:
> mce_start() has an explicit smp_wmb() to serialize writes to
> global_nwo and mce_callin. However, atomic_inc_return() implies
> full barriers on both sides of the call; as such, simply rely on
> this implicit full barrier.
>
> Signed-off-by: Davidlohr Bueso <[email protected]>
> ---
> arch/x86/kernel/cpu/mcheck/mce.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)

Applied, thanks.

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.