2022-04-02 16:59:02

by Mark Rutland

Subject: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect

Hi Jeremy,

Thanks for raising this.

On Fri, Apr 01, 2022 at 11:44:06AM -0500, Jeremy Linton wrote:
> The relaxed variants of read/write macros are only declared
> as `asm volatile()` which forces the compiler to generate the
> instruction in the code path as intended. The only problem
> is that it doesn't also tell the compiler that there may
> be memory side effects. Meaning that if a function is comprised
> entirely of relaxed io operations, the compiler may think that
> it only has register side effects and doesn't need to be called.

As I mentioned in a private mail, I don't think the reasoning above is
correct, and I think this is a miscompilation (i.e. a compiler bug).

The important thing is that any `asm volatile` may have side effects beyond
memory or GPRs, and whether the assembly contains a memory load/store is
immaterial. We should not need to add a memory clobber in order to retain the
volatile semantics.
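
For instance, a minimal sketch (purely illustrative, reading the virtual
counter rather than a device register) of what the volatile semantics alone
already require:

| static unsigned long read_cntvct(void)
| {
|         unsigned long val;
|
|         /*
|          * No "memory" clobber here; because the asm is volatile, the
|          * mrs must still be emitted, and a call to this function must
|          * not be optimized away, even if the caller ignores the result.
|          */
|         asm volatile("mrs %0, cntvct_el0" : "=r" (val));
|
|         return val;
| }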

See:

https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Volatile

... and consider the x86 example that reads rdtsc, or an arm64 sequence like:

| void do_sysreg_thing(void)
| {
|         unsigned long tmp;
|
|         tmp = read_sysreg(some_reg);
|         tmp |= SOME_BIT;
|         write_sysreg(tmp, some_reg);
| }

... where there's no memory that we should need to hazard against.

This patch might work around the issue, but I don't believe it is a correct fix.

> For an example function look at bcmgenet_enable_dma(), before the
> relaxed variants were removed. When built with gcc12 the code
> contains the asm blocks as expected, but then the function is
> never called.
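
For illustration, the shape of function being described is roughly the
following, i.e. a body consisting entirely of relaxed MMIO writes (the
function name and register offsets below are made up, not the actual
bcmgenet code):

| static void enable_dma_sketch(void __iomem *base)
| {
|         /*
|          * Each writel_relaxed() expands to an asm volatile with no
|          * outputs and no "memory" clobber, so nothing in this function
|          * looks like a side effect to an (over-)eager analysis.
|          */
|         writel_relaxed(1, base + 0x00);
|         writel_relaxed(3, base + 0x04);
| }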

So it sounds like this is a regression in GCC 12, which IIUC isn't released yet
per:

https://gcc.gnu.org/gcc-12/changes.html

... which says:

| Note: GCC 12 has not been released yet

Surely we can fix it prior to release?

Thanks,
Mark.

>
> Signed-off-by: Jeremy Linton <[email protected]>
> ---
> arch/arm64/include/asm/io.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
> index 7fd836bea7eb..3cceda7948a0 100644
> --- a/arch/arm64/include/asm/io.h
> +++ b/arch/arm64/include/asm/io.h
> @@ -24,25 +24,25 @@
> #define __raw_writeb __raw_writeb
> static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
> {
> - asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
> + asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> }
>
> #define __raw_writew __raw_writew
> static inline void __raw_writew(u16 val, volatile void __iomem *addr)
> {
> - asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
> + asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> }
>
> #define __raw_writel __raw_writel
> static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
> {
> - asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
> + asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> }
>
> #define __raw_writeq __raw_writeq
> static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
> {
> - asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
> + asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> }
>
> #define __raw_readb __raw_readb
> --
> 2.35.1
>


2022-04-05 01:48:27

by Andrew Pinski

Subject: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect

On Fri, Apr 1, 2022 at 10:24 AM Mark Rutland via Gcc <[email protected]> wrote:
>
> Hi Jeremy,
>
> Thanks for raising this.
>
> On Fri, Apr 01, 2022 at 11:44:06AM -0500, Jeremy Linton wrote:
> > The relaxed variants of read/write macros are only declared
> > as `asm volatile()` which forces the compiler to generate the
> > instruction in the code path as intended. The only problem
> > is that it doesn't also tell the compiler that there may
> > be memory side effects. Meaning that if a function is comprised
> > entirely of relaxed io operations, the compiler may think that
> > it only has register side effects and doesn't need to be called.
>
> As I mentioned on a private mail, I don't think that reasoning above is
> correct, and I think this is a miscompilation (i.e. a compiler bug).
>
> The important thing is that any `asm volatile` may have a side effects
> generally outside of memory or GPRs, and whether the assembly contains a memory
> load/store is immaterial. We should not need to add a memory clobber in order
> to retain the volatile semantic.
>
> See:
>
> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Volatile
>
> ... and consider the x86 example that reads rdtsc, or an arm64 sequence like:
>
> | void do_sysreg_thing(void)
> | {
> | unsigned long tmp;
> |
> | tmp = read_sysreg(some_reg);
> | tmp |= SOME_BIT;
> | write_sysreg(some_reg);
> | }
>
> ... where there's no memory that we should need to hazard against.
>
> This patch might workaround the issue, but I don't believe it is a correct fix.

It might not be the most narrowly targeted fix, but it is a fix.
The best fix is to tell the compiler that you are writing to that location of memory.
Volatile asm does not do what you think it does.
You didn't read further down, about memory clobbers:
https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Clobbers-and-Scratch-Registers
Specifically this part:

| The "memory" clobber tells the compiler that the assembly code
| performs memory reads or writes to items other than those listed in
| the input and output operands (for example, accessing the memory
| pointed to by one of the input parameters).
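
A minimal sketch of what that alternative might look like (illustrative only,
not the kernel's actual accessor): name the written object as a memory output
operand, so the compiler knows this particular asm writes that particular
location, without clobbering all of memory:

| static inline void raw_writel_sketch(u32 val, volatile u32 *addr)
| {
|         asm volatile("str %w1, [%2]"
|                      : "=m" (*addr)             /* the location written */
|                      : "rZ" (val), "r" (addr));
| }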

>
> > For an example function look at bcmgenet_enable_dma(), before the
> > relaxed variants were removed. When built with gcc12 the code
> > contains the asm blocks as expected, but then the function is
> > never called.
>
> So it sounds like this is a regression in GCC 12, which IIUC isn't released yet
> per:

It is NOT a bug in GCC 12. You just depended on behavior which
accidentally worked in the cases you were looking at. GCC 12 did not
even change in this area.

Thanks,
Andrew Pinski

>
> https://gcc.gnu.org/gcc-12/changes.html
>
> ... which says:
>
> | Note: GCC 12 has not been released yet
>
> Surely we can fix it prior to release?
>
> Thanks,
> Mark.
>
> >
> > Signed-off-by: Jeremy Linton <[email protected]>
> > ---
> > arch/arm64/include/asm/io.h | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
> > index 7fd836bea7eb..3cceda7948a0 100644
> > --- a/arch/arm64/include/asm/io.h
> > +++ b/arch/arm64/include/asm/io.h
> > @@ -24,25 +24,25 @@
> > #define __raw_writeb __raw_writeb
> > static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
> > {
> > - asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
> > + asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> > }
> >
> > #define __raw_writew __raw_writew
> > static inline void __raw_writew(u16 val, volatile void __iomem *addr)
> > {
> > - asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
> > + asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> > }
> >
> > #define __raw_writel __raw_writel
> > static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
> > {
> > - asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
> > + asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> > }
> >
> > #define __raw_writeq __raw_writeq
> > static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
> > {
> > - asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
> > + asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
> > }
> >
> > #define __raw_readb __raw_readb
> > --
> > 2.35.1
> >

2022-04-06 14:41:17

by Mark Rutland

Subject: GCC 12 miscompilation of volatile asm (was: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect)

Hi all,

[adding kernel folk who work on asm stuff]

As a heads-up, GCC 12 (not yet released) appears to erroneously optimize away
calls to functions with volatile asm. Szabolcs has raised an issue on the GCC
bugzilla:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105160

... which is a P1 release blocker, and is currently being investigated.

Jeremy originally reported this as an issue with {readl,writel}_relaxed(), but
the underlying problem doesn't have anything to do with those specifically.

I'm dumping a bunch of info here largely for posterity / archival, and to find
out who (from the kernel side) is willing and able to test proposed compiler
fixes, once those are available.

I'm happy to do so for aarch64; Peter, I assume you'd be happy to look at the
x86 side?

This is a generic issue, and I wrote test cases for aarch64 and x86_64. Those
are inline later in this mail, and currently you can see them on compiler
explorer:

aarch64: https://godbolt.org/z/vMczqjYvs

x86_64: https://godbolt.org/z/cveff9hq5



My aarch64 test case is:

| #define sysreg_read(regname) \
| ({ \
|         unsigned long __sr_val; \
|         asm volatile( \
|                 "mrs %0, " #regname "\n" \
|                 : "=r" (__sr_val)); \
|  \
|         __sr_val; \
| })
|
| #define sysreg_write(regname, __sw_val) \
| do { \
|         asm volatile( \
|                 "msr " #regname ", %0\n" \
|                 : \
|                 : "r" (__sw_val)); \
| } while (0)
|
| #define isb() \
| do { \
|         asm volatile( \
|                 "isb" \
|                 : \
|                 : \
|                 : "memory"); \
| } while (0)
|
| static unsigned long sctlr_read(void)
| {
|         return sysreg_read(sctlr_el1);
| }
|
| static void sctlr_write(unsigned long val)
| {
|         sysreg_write(sctlr_el1, val);
| }
|
| static void sctlr_rmw(void)
| {
|         unsigned long val;
|
|         val = sctlr_read();
|         val |= 1UL << 7;
|         sctlr_write(val);
| }
|
| void sctlr_read_multiple(void)
| {
|         sctlr_read();
|         sctlr_read();
|         sctlr_read();
|         sctlr_read();
| }
|
| void sctlr_write_multiple(void)
| {
|         sctlr_write(0);
|         sctlr_write(0);
|         sctlr_write(0);
|         sctlr_write(0);
|         sctlr_write(0);
| }
|
| void sctlr_rmw_multiple(void)
| {
|         sctlr_rmw();
|         sctlr_rmw();
|         sctlr_rmw();
|         sctlr_rmw();
| }
|
| void function(void)
| {
|         sctlr_read_multiple();
|         sctlr_write_multiple();
|         sctlr_rmw_multiple();
|
|         isb();
| }

Per compiler explorer (https://godbolt.org/z/vMczqjYvs) GCC trunk currently
compiles this as:

| sctlr_rmw:
|         mrs x0, sctlr_el1
|         orr x0, x0, 128
|         msr sctlr_el1, x0
|         ret
| sctlr_read_multiple:
|         mrs x0, sctlr_el1
|         mrs x0, sctlr_el1
|         mrs x0, sctlr_el1
|         mrs x0, sctlr_el1
|         ret
| sctlr_write_multiple:
|         mov x0, 0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         ret
| sctlr_rmw_multiple:
|         ret
| function:
|         isb
|         ret

Whereas GCC 11.2 compiles this as:

| sctlr_rmw:
|         mrs x0, sctlr_el1
|         orr x0, x0, 128
|         msr sctlr_el1, x0
|         ret
| sctlr_read_multiple:
|         mrs x0, sctlr_el1
|         mrs x0, sctlr_el1
|         mrs x0, sctlr_el1
|         mrs x0, sctlr_el1
|         ret
| sctlr_write_multiple:
|         mov x0, 0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         msr sctlr_el1, x0
|         ret
| sctlr_rmw_multiple:
|         stp x29, x30, [sp, -16]!
|         mov x29, sp
|         bl sctlr_rmw
|         bl sctlr_rmw
|         bl sctlr_rmw
|         bl sctlr_rmw
|         ldp x29, x30, [sp], 16
|         ret
| function:
|         stp x29, x30, [sp, -16]!
|         mov x29, sp
|         bl sctlr_read_multiple
|         bl sctlr_write_multiple
|         bl sctlr_rmw_multiple
|         isb
|         ldp x29, x30, [sp], 16
|         ret



My x86_64 test case is:

| unsigned long rdmsr(unsigned long reg)
| {
|         unsigned int lo, hi;
|
|         asm volatile(
|                 "rdmsr"
|                 : "=d" (hi), "=a" (lo)
|                 : "c" (reg)
|         );
|
|         return ((unsigned long)hi << 32) | lo;
| }
|
| void wrmsr(unsigned long reg, unsigned long val)
| {
|         unsigned int lo = val;
|         unsigned int hi = val >> 32;
|
|         asm volatile(
|                 "wrmsr"
|                 :
|                 : "d" (hi), "a" (lo), "c" (reg)
|         );
| }
|
| void msr_rmw_set_bits(unsigned long reg, unsigned long bits)
| {
|         unsigned long val;
|
|         val = rdmsr(reg);
|         val |= bits;
|         wrmsr(reg, val);
| }
|
| void func_with_msr_side_effects(unsigned long reg)
| {
|         msr_rmw_set_bits(reg, 1UL << 0);
|         msr_rmw_set_bits(reg, 1UL << 1);
|         msr_rmw_set_bits(reg, 1UL << 2);
|         msr_rmw_set_bits(reg, 1UL << 3);
| }

Per compiler explorer (https://godbolt.org/z/cveff9hq5) GCC trunk currently
compiles this as:

| msr_rmw_set_bits:
|         mov rcx, rdi
|         rdmsr
|         sal rdx, 32
|         mov eax, eax
|         or rax, rsi
|         or rax, rdx
|         mov rdx, rax
|         shr rdx, 32
|         wrmsr
|         ret
| func_with_msr_side_effects:
|         ret

While GCC 11.2 compiles that as:

| msr_rmw_set_bits:
|         mov rcx, rdi
|         rdmsr
|         sal rdx, 32
|         mov eax, eax
|         or rax, rsi
|         or rax, rdx
|         mov rdx, rax
|         shr rdx, 32
|         wrmsr
|         ret
| func_with_msr_side_effects:
|         push rbp
|         push rbx
|         mov rbx, rdi
|         mov rbp, rsi
|         call msr_rmw_set_bits
|         mov rsi, rbp
|         mov rdi, rbx
|         call msr_rmw_set_bits
|         mov rsi, rbp
|         mov rdi, rbx
|         call msr_rmw_set_bits
|         mov rsi, rbp
|         mov rdi, rbx
|         call msr_rmw_set_bits
|         pop rbx
|         pop rbp
|         ret

Thanks,
Mark.

2022-04-11 23:42:24

by Mark Rutland

Subject: Re: GCC 12 miscompilation of volatile asm (was: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect)

On Tue, Apr 05, 2022 at 01:51:30PM +0100, Mark Rutland wrote:
> Hi all,
>
> [adding kernel folk who work on asm stuff]
>
> As a heads-up, GCC 12 (not yet released) appears to erroneously optimize away
> calls to functions with volatile asm. Szabolcs has raised an issue on the GCC
> bugzilla:
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105160
>
> ... which is a P1 release blocker, and is currently being investigated.

Jan Hubicka fixed this in GCC commit:

aabb9a261ef060cf ("Propagate nondeterministic and side_effects flags in modref summary after inlining")

... and all my local tests look good with that applied.

Compiler explorer's trunk build now has that fix, so the examples from before
now look good:

aarch64: https://godbolt.org/z/vMczqjYvs

x86_64: https://godbolt.org/z/cveff9hq5

Jeremy, now that the real issue has been identified and fixed, I assume you'll
send a revert for commit:

8d3ea3d402db94b6 ("net: bcmgenet: Use stronger register read/writes to assure ordering")

... ?

Thanks,
Mark.

2022-04-12 12:04:17

by Jeremy Linton

Subject: Re: GCC 12 miscompilation of volatile asm (was: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect)

Hi,


On 4/11/22 05:31, Mark Rutland wrote:
> On Tue, Apr 05, 2022 at 01:51:30PM +0100, Mark Rutland wrote:
>> Hi all,
>>
>> [adding kernel folk who work on asm stuff]
>>
>> As a heads-up, GCC 12 (not yet released) appears to erroneously optimize away
>> calls to functions with volatile asm. Szabolcs has raised an issue on the GCC
>> bugzilla:
>>
>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105160
>>
>> ... which is a P1 release blocker, and is currently being investigated.
>
> Jan Hubicka fixed this in GCC commit:
>
> aabb9a261ef060cf ("Propagate nondeterministic and side_effects flags in modref summary after inlining")
>
> ... and all my local tests look good with that applied.
>
> Compiler explorer's trunk build now has that fix, so the examples from before
> now look good:
>
> aarch64: https://godbolt.org/z/vMczqjYvs
>
> x86_64: https://godbolt.org/z/cveff9hq5
>
> Jeremy, now that the real issue has been identified and fixed, I assume you'll
> send a revert for commit:
>
> 8d3ea3d402db94b6 ("net: bcmgenet: Use stronger register read/writes to assure ordering")
>
> ... ?

Yes, that's the plan.

Thanks,
Jeremy