2020-09-01 16:24:13

by Nadav Amit

Subject: [PATCH] x86/special_insn: reverse __force_order logic

From: Nadav Amit <[email protected]>

The __force_order logic seems to be inverted. __force_order is
supposedly used to manipulate the compiler into using its memory
dependency analysis to enforce ordering between CR writes and reads.
Therefore, the memory should behave as the "CR": when the CR is read,
the memory should be "read" by the inline assembly and __force_order
should be an input; when the CR is written, the memory should be
"written" and __force_order should be an output.
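
For illustration only (this is not part of the change, and the variable
below is a stand-in of mine rather than the kernel's __force_order
declaration), the intended pairing can be sketched as:

	extern unsigned long force_order_example;

	static inline void example_write_cr3(unsigned long val)
	{
		/* writing the CR "writes" the token: output operand */
		asm volatile("mov %1,%%cr3"
			     : "=m" (force_order_example) : "r" (val));
	}

	static inline unsigned long example_read_cr3(void)
	{
		unsigned long val;

		/* reading the CR "reads" the token: input operand */
		asm volatile("mov %%cr3,%0"
			     : "=r" (val) : "m" (force_order_example));
		return val;
	}

The compiler then sees a write->read dependency on the token and cannot
hoist the read above the write.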

This change should make it possible to remove the "volatile" qualifier
from CR reads in a later patch.

While at it, remove the extra new-line from the inline assembly, as it
only confuses GCC when it estimates the cost of the inline assembly.

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: [email protected]
Cc: "H. Peter Anvin" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: [email protected]
Signed-off-by: Nadav Amit <[email protected]>

---

Unless I misunderstand the logic, __force_order should also be used by
rdpkru() and wrpkru(), which do not have a dependency on __force_order.
I also did not understand why native_write_cr0() has a R/W dependency
on __force_order, and why native_write_cr4() no longer has any
dependency on __force_order.
---
arch/x86/include/asm/special_insns.h | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 5999b0b3dd4a..dff5e5b01a3c 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -24,32 +24,32 @@ void native_write_cr0(unsigned long val);
static inline unsigned long native_read_cr0(void)
{
unsigned long val;
- asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr0,%0" : "=r" (val) : "m" (__force_order));
return val;
}

static __always_inline unsigned long native_read_cr2(void)
{
unsigned long val;
- asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr2,%0" : "=r" (val) : "m" (__force_order));
return val;
}

static __always_inline void native_write_cr2(unsigned long val)
{
- asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+ asm volatile("mov %1,%%cr2" : "=m" (__force_order) : "r" (val));
}

static inline unsigned long __native_read_cr3(void)
{
unsigned long val;
- asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr3,%0" : "=r" (val) : "m" (__force_order));
return val;
}

static inline void native_write_cr3(unsigned long val)
{
- asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+ asm volatile("mov %1,%%cr3" : "=m" (__force_order) : "r" (val));
}

static inline unsigned long native_read_cr4(void)
@@ -64,10 +64,10 @@ static inline unsigned long native_read_cr4(void)
asm volatile("1: mov %%cr4, %0\n"
"2:\n"
_ASM_EXTABLE(1b, 2b)
- : "=r" (val), "=m" (__force_order) : "0" (0));
+ : "=r" (val) : "m" (__force_order), "0" (0));
#else
/* CR4 always exists on x86_64. */
- asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr4,%0" : "=r" (val) : "m" (__force_order));
#endif
return val;
}
--
2.25.1


2020-09-02 12:48:18

by H. Peter Anvin

Subject: Re: [PATCH] x86/special_insn: reverse __force_order logic

On September 1, 2020 9:18:57 AM PDT, Nadav Amit <[email protected]> wrote:
>From: Nadav Amit <[email protected]>
>
>The __force_order logic seems to be inverted. __force_order is
>supposedly used to manipulate the compiler into using its memory
>dependency analysis to enforce ordering between CR writes and reads.
>Therefore, the memory should behave as the "CR": when the CR is read,
>the memory should be "read" by the inline assembly and __force_order
>should be an input; when the CR is written, the memory should be
>"written" and __force_order should be an output.
>
>This change should make it possible to remove the "volatile" qualifier
>from CR reads in a later patch.
>
>While at it, remove the extra new-line from the inline assembly, as it
>only confuses GCC when it estimates the cost of the inline assembly.
>
>Cc: Thomas Gleixner <[email protected]>
>Cc: Ingo Molnar <[email protected]>
>Cc: Borislav Petkov <[email protected]>
>Cc: [email protected]
>Cc: "H. Peter Anvin" <[email protected]>
>Cc: Peter Zijlstra <[email protected]>
>Cc: Andy Lutomirski <[email protected]>
>Cc: Kees Cook <[email protected]>
>Cc: [email protected]
>Signed-off-by: Nadav Amit <[email protected]>
>
>---
>
>Unless I misunderstand the logic, __force_order should also be used by
>rdpkru() and wrpkru() which do not have dependency on __force_order. I
>also did not understand why native_write_cr0() has R/W dependency on
>__force_order, and why native_write_cr4() no longer has any dependency
>on __force_order.
>---
> arch/x86/include/asm/special_insns.h | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
>diff --git a/arch/x86/include/asm/special_insns.h
>b/arch/x86/include/asm/special_insns.h
>index 5999b0b3dd4a..dff5e5b01a3c 100644
>--- a/arch/x86/include/asm/special_insns.h
>+++ b/arch/x86/include/asm/special_insns.h
>@@ -24,32 +24,32 @@ void native_write_cr0(unsigned long val);
> static inline unsigned long native_read_cr0(void)
> {
> unsigned long val;
>- asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
>+ asm volatile("mov %%cr0,%0" : "=r" (val) : "m" (__force_order));
> return val;
> }
>
> static __always_inline unsigned long native_read_cr2(void)
> {
> unsigned long val;
>- asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
>+ asm volatile("mov %%cr2,%0" : "=r" (val) : "m" (__force_order));
> return val;
> }
>
> static __always_inline void native_write_cr2(unsigned long val)
> {
>- asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
>+ asm volatile("mov %1,%%cr2" : "=m" (__force_order) : "r" (val));
> }
>
> static inline unsigned long __native_read_cr3(void)
> {
> unsigned long val;
>- asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
>+ asm volatile("mov %%cr3,%0" : "=r" (val) : "m" (__force_order));
> return val;
> }
>
> static inline void native_write_cr3(unsigned long val)
> {
>- asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
>+ asm volatile("mov %1,%%cr3" : "=m" (__force_order) : "r" (val));
> }
>
> static inline unsigned long native_read_cr4(void)
>@@ -64,10 +64,10 @@ static inline unsigned long native_read_cr4(void)
> asm volatile("1: mov %%cr4, %0\n"
> "2:\n"
> _ASM_EXTABLE(1b, 2b)
>- : "=r" (val), "=m" (__force_order) : "0" (0));
>+ : "=r" (val) : "m" (__force_order), "0" (0));
> #else
> /* CR4 always exists on x86_64. */
>- asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
>+ asm volatile("mov %%cr4,%0" : "=r" (val) : "m" (__force_order));
> #endif
> return val;
> }

This seems reasonable to me, and I fully agree with you that the logic seems to be just plain wrong (and has gone unnoticed since all volatile operations are strictly ordered anyway), but you had better not remove the volatile from the cr2 read... unlike the other CRs, that one is written by hardware and so genuinely is volatile.
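
As a rough sketch of the concern (my own names here, not kernel code):
the hardware rewrites CR2 behind the compiler's back on every page
fault, so the read has to stay volatile; without the qualifier, two
reads with identical asm and operands could legally be folded into one.

	static inline unsigned long cr2_read_sketch(void)
	{
		unsigned long val;

		/* keep "volatile": CR2 changes without any memory clobber */
		asm volatile("mov %%cr2,%0" : "=r" (val));
		return val;
	}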

However, I do believe "=m" is at least theoretically wrong. You are only writing *part* of the total state represented by this token (you can think of it as equivalent to a bitop), so it should be "+m".
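
Roughly, and with a stand-in variable of my own rather than the real
__force_order, I mean something like:

	extern unsigned long force_order_example;

	static inline void cr3_write_sketch(unsigned long val)
	{
		/*
		 * The write changes only part of the state the token
		 * stands for, so model it as a read-modify-write ("+m")
		 * rather than claiming the whole object is overwritten.
		 */
		asm volatile("mov %1,%%cr3"
			     : "+m" (force_order_example) : "r" (val));
	}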
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

2020-09-02 12:55:46

by Peter Zijlstra

Subject: Re: [PATCH] x86/special_insn: reverse __force_order logic

On Tue, Sep 01, 2020 at 09:18:57AM -0700, Nadav Amit wrote:

> Unless I misunderstand the logic, __force_order should also be used by
> rdpkru() and wrpkru() which do not have dependency on __force_order. I
> also did not understand why native_write_cr0() has R/W dependency on
> __force_order, and why native_write_cr4() no longer has any dependency
> on __force_order.

There was a fairly large thread about this thing here:

https://lkml.kernel.org/r/20200527135329.1172644-1-arnd@arndb.de

I didn't keep up, but I think the general consensus was that it's
indeed a bit naff.

2020-09-02 15:38:04

by Nadav Amit

Subject: Re: [PATCH] x86/special_insn: reverse __force_order logic

> On Sep 2, 2020, at 5:54 AM, [email protected] wrote:
>
> On Tue, Sep 01, 2020 at 09:18:57AM -0700, Nadav Amit wrote:
>
>> Unless I misunderstand the logic, __force_order should also be used by
>> rdpkru() and wrpkru() which do not have dependency on __force_order. I
>> also did not understand why native_write_cr0() has R/W dependency on
>> __force_order, and why native_write_cr4() no longer has any dependency
>> on __force_order.
>
> There was a fairly large thread about this thing here:
>
> https://lkml.kernel.org/r/20200527135329.1172644-1-arnd@arndb.de
>
> I didn't keep up, but I think the general consensus was that it's
> indeed a bit naff.

Thanks for the pointer. I did not see the discussion, and embarrassingly, I
have also never figured out how to reply to lkml emails without subscribing
to lkml.

Anyhow, indeed, the patch that Arvind provided seems to address this issue.

2020-09-02 16:58:26

by Peter Zijlstra

Subject: Re: [PATCH] x86/special_insn: reverse __force_order logic

On Wed, Sep 02, 2020 at 03:32:18PM +0000, Nadav Amit wrote:

> Thanks for the pointer. I did not see the discussion, and embarrassingly, I
> have also never figured out how to reply to lkml emails without subscribing
> to lkml.

The lore.kernel.org thing I pointed you to allows you to download an
mbox with the email in (see the very bottom of the page). Use your
favorite MUA to open it and reply :-)

2020-09-02 17:04:16

by Nadav Amit

Subject: Re: [PATCH] x86/special_insn: reverse __force_order logic

> On Sep 2, 2020, at 9:56 AM, [email protected] wrote:
>
> On Wed, Sep 02, 2020 at 03:32:18PM +0000, Nadav Amit wrote:
>
>> Thanks for the pointer. I did not see the discussion, and embarrassingly, I
>> have also never figured out how to reply to lkml emails without subscribing
>> to lkml.
>
> The lore.kernel.org thing I pointed you to allows you to download an
> mbox with the email in (see the very bottom of the page). Use your
> favorite MUA to open it and reply :-)

Thanks!