2020-07-16 19:40:17

by Palmer Dabbelt

Subject: [PATCH] powerpc/64: Fix an out of date comment about MMIO ordering

From: Palmer Dabbelt <[email protected]>

This primitive has been renamed, but because it was spelled incorrectly in the
first place it must have escaped the fixup patch. As far as I can tell this
logic is still correct: smp_mb__after_spinlock() uses the default smp_mb()
implementation, which is "sync" rather than "hwsync" but those are the same
(though I'm not that familiar with PowerPC).
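
For reference, the chain of definitions this relies on looks roughly like
the following (paraphrased from memory, not quoted verbatim from the tree,
so treat it as a sketch):

    /* arch/powerpc/include/asm/spinlock.h (approximate) */
    #define smp_mb__after_spinlock()  smp_mb()

    /* arch/powerpc/include/asm/barrier.h (approximate) */
    #define mb()         __asm__ __volatile__ ("sync" : : : "memory")
    #define __smp_mb()   mb()

    /* kernel/sched/core.c, __schedule() (approximate) */
    rq_lock(rq, &rf);
    smp_mb__after_spinlock();  /* ends up as "sync", which also orders MMIO */

so the barrier the comment refers to is a plain "sync" on the context
switch path.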

Signed-off-by: Palmer Dabbelt <[email protected]>
---
arch/powerpc/kernel/entry_64.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index b3c9f15089b6..7b38b4daca93 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -357,7 +357,7 @@ _GLOBAL(_switch)
* kernel/sched/core.c).
*
* Uncacheable stores in the case of involuntary preemption must
- * be taken care of. The smp_mb__before_spin_lock() in __schedule()
+ * be taken care of. The smp_mb__after_spinlock() in __schedule()
* is implemented as hwsync on powerpc, which orders MMIO too. So
* long as there is an hwsync in the context switch path, it will
* be executed on the source CPU after the task has performed
--
2.28.0.rc0.105.gf9edc3c819-goog


2020-07-16 23:11:08

by Benjamin Herrenschmidt

Subject: Re: [PATCH] powerpc/64: Fix an out of date comment about MMIO ordering

On Thu, 2020-07-16 at 12:38 -0700, Palmer Dabbelt wrote:
> From: Palmer Dabbelt <[email protected]>
>
> This primitive has been renamed, but because it was spelled incorrectly in the
> first place it must have escaped the fixup patch. As far as I can tell this
> logic is still correct: smp_mb__after_spinlock() uses the default smp_mb()
> implementation, which is "sync" rather than "hwsync" but those are the same
> (though I'm not that familiar with PowerPC).

Typo ? That must be me ... :)

Looks fine. Yes, sync and hwsync are the same (as opposed to lwsync,
which is lighter weight and doesn't order cache-inhibited accesses).
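
In ISA terms (a rough sketch, not the architecture manual's wording):
"hwsync" is just the extended mnemonic for the heavyweight "sync" (L=0)
encoding, while "lwsync" is the L=1 form. Roughly, with made-up macro
names for illustration:

    /* Illustration only; the macro names are invented. */

    /* "sync" == "hwsync": full barrier, orders cacheable and
     * cache-inhibited (MMIO) accesses. */
    #define full_barrier()   __asm__ __volatile__ ("sync" : : : "memory")

    /* "lwsync": lighter weight, orders cacheable accesses only. */
    #define light_barrier()  __asm__ __volatile__ ("lwsync" : : : "memory")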

Cheers,
Ben.

> Signed-off-by: Palmer Dabbelt <[email protected]>
> ---
> arch/powerpc/kernel/entry_64.S | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index b3c9f15089b6..7b38b4daca93 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -357,7 +357,7 @@ _GLOBAL(_switch)
> * kernel/sched/core.c).
> *
> * Uncacheable stores in the case of involuntary preemption must
> - * be taken care of. The smp_mb__before_spin_lock() in __schedule()
> + * be taken care of. The smp_mb__after_spinlock() in __schedule()
> * is implemented as hwsync on powerpc, which orders MMIO too. So
> * long as there is an hwsync in the context switch path, it will
> * be executed on the source CPU after the task has performed

2020-07-24 13:26:59

by Michael Ellerman

Subject: Re: [PATCH] powerpc/64: Fix an out of date comment about MMIO ordering

On Thu, 16 Jul 2020 12:38:20 -0700, Palmer Dabbelt wrote:
> This primitive has been renamed, but because it was spelled incorrectly in the
> first place it must have escaped the fixup patch. As far as I can tell this
> logic is still correct: smp_mb__after_spinlock() uses the default smp_mb()
> implementation, which is "sync" rather than "hwsync" but those are the same
> (though I'm not that familiar with PowerPC).

Applied to powerpc/next.

[1/1] powerpc/64: Fix an out of date comment about MMIO ordering
https://git.kernel.org/powerpc/c/147c13413c04bc6a2bd76f2503402905e5e98cff

cheers