2022-04-04 08:48:57

by Jeremy Linton

Subject: [PATCH] arm64/io: Remind compiler that there is a memory side effect

The relaxed variants of the read/write macros are declared only as
`asm volatile()`, which forces the compiler to emit the instruction in
the code path as intended. The problem is that this does not also tell
the compiler that there may be memory side effects. As a result, if a
function is composed entirely of relaxed I/O operations, the compiler
may conclude that it has only register side effects and does not need
to be called.

For an example, look at bcmgenet_enable_dma() as it was before the
relaxed variants were removed from it. When built with GCC 12 the code
contains the asm blocks as expected, but the function itself is never
called; see the reduced sketch below.
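
A reduced sketch of that pattern looks something like the following
(the helper name, offset, and bit value are made up purely for
illustration; they are not the real bcmgenet definitions):

#include <linux/io.h>

/* Illustration only: the body is nothing but relaxed MMIO accesses, so
 * if the compiler treats the underlying asm blocks as having no memory
 * side effects it may conclude that a call to this function is dead and
 * drop it. */
static void toy_enable_dma(void __iomem *base)
{
	u32 reg;

	reg = readl_relaxed(base + 0x10);	/* hypothetical control register */
	reg |= 0x1;				/* hypothetical enable bit */
	writel_relaxed(reg, base + 0x10);
}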

Signed-off-by: Jeremy Linton <[email protected]>
---
arch/arm64/include/asm/io.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index 7fd836bea7eb..3cceda7948a0 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -24,25 +24,25 @@
#define __raw_writeb __raw_writeb
static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
{
- asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
+ asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
}

#define __raw_writew __raw_writew
static inline void __raw_writew(u16 val, volatile void __iomem *addr)
{
- asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
+ asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
}

#define __raw_writel __raw_writel
static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
{
- asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
+ asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
}

#define __raw_writeq __raw_writeq
static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
{
- asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
+ asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
}

#define __raw_readb __raw_readb
--
2.35.1


2022-04-04 12:02:54

by Doug Berger

Subject: Re: [PATCH] arm64/io: Remind compiler that there is a memory side effect

On 4/3/2022 12:36 AM, Andrew Pinski wrote:
> On Fri, Apr 1, 2022 at 10:24 AM Mark Rutland via Gcc <[email protected]> wrote:
>>
>> Hi Jeremy,
>>
>> Thanks for raising this.
>>
>> On Fri, Apr 01, 2022 at 11:44:06AM -0500, Jeremy Linton wrote:
>>> The relaxed variants of read/write macros are only declared
>>> as `asm volatile()` which forces the compiler to generate the
>>> instruction in the code path as intended. The only problem
>>> is that it doesn't also tell the compiler that there may
>>> be memory side effects. Meaning that if a function is comprised
>>> entirely of relaxed io operations, the compiler may think that
>>> it only has register side effects and doesn't need to be called.
>>
>> As I mentioned on a private mail, I don't think that reasoning above is
>> correct, and I think this is a miscompilation (i.e. a compiler bug).
>>
>> The important thing is that any `asm volatile` may have side effects
>> generally outside of memory or GPRs, and whether the assembly contains a memory
>> load/store is immaterial. We should not need to add a memory clobber in order
>> to retain the volatile semantic.
>>
>> See:
>>
>> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Volatile
>>
>> ... and consider the x86 example that reads rdtsc, or an arm64 sequence like:
>>
>> | void do_sysreg_thing(void)
>> | {
>> | unsigned long tmp;
>> |
>> | tmp = read_sysreg(some_reg);
>> | tmp |= SOME_BIT;
>> | write_sysreg(tmp, some_reg);
>> | }
>>
>> ... where there's no memory that we should need to hazard against.
>>
>> This patch might workaround the issue, but I don't believe it is a correct fix.

I agree with Mark that this patch is an attempt to work around a bug in
the GCC 12 compiler.
>
> It might not be the most restricted fix but it is a fix.
> The best fix is to tell the compiler that you are writing to that location of memory.
> volatile asm does not do what you think it does.
> You didn't read further down about memory clobbers:
> https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Clobbers-and-Scratch-Registers
> Specifically this part:
> The "memory" clobber tells the compiler that the assembly code
> performs memory reads or writes to items other than those listed in
> the input and output operands

I agree that volatile asm does not do what I think it should do in this
case, but it appears to me that it also does not do what the documentation
states it should do, and that is the bug in GCC 12.

My interpretation of the referenced documentation is that the volatile
qualifier of the asm keyword should prevent the GCC optimizer from
performing certain optimizations. Of specific relevance in this scenario
are the optimizers that "sometimes discard asm statements if they
determine there is no need for the output variables."
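
As a concrete illustration of the documented guarantee being relied on
here (the arm64 virtual counter is chosen purely for the example, in the
same spirit as the rdtsc example in the GCC documentation):

#include <linux/types.h>

/* Even if a caller ignores the return value, the volatile qualifier is
 * supposed to keep the optimizer from discarding the asm statement (and,
 * by extension, from discarding calls to a function built around it). */
static inline u64 read_virtual_counter(void)
{
	u64 cnt;

	asm volatile("mrs %0, cntvct_el0" : "=r" (cnt));
	return cnt;
}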

The clobbers tell the compiler about side effects or dependencies of the
asm block that could be relevant to code outside it, so that correct
behavior is preserved while the optimizer can still do a good job. The
functions in this patch do access memory (well, technically device
registers...) and therefore adding the "memory" clobber is not "wrong",
but the read variants of these functions also access memory, so adding
the "memory" clobber to them would be equally appropriate (or
inappropriate). This would not affect functionality, but it is
"heavy-handed" and can have an unnecessary effect on performance.

The "memory" clobber indicates that memory is somehow affected by the
asm block and therefore requires the compiler to flush data in working
registers to memory before the block and reload values from memory after
the block. A better solution is to communicate the side effects more
precisely to avoid operations that can be determined to be unnecessary.
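
For instance (purely as an illustration of the idea, not as the change
proposed in this patch), the byte actually written can be described to
the compiler as a memory output operand instead of clobbering all of
memory:

/* Hypothetical alternative: "Q" is the AArch64 constraint for a memory
 * reference through a single base register with no offset, so the asm
 * tells the compiler exactly which object it stores to instead of
 * claiming it may touch all of memory. */
static inline void __raw_writeb_alt(u8 val, volatile void __iomem *addr)
{
	asm volatile("strb %w1, %0"
		     : "=Q" (*(volatile u8 __force *)addr)
		     : "rZ" (val));
}

The same shape works for the half-word, word, and double-word variants;
the point is only that the side effect can be scoped to the object the
instruction actually touches.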

In the case of these functions, the only address accessed is in device
register space. Accesses to register space can have all kinds of side
effects, but these side effects are communicated to the compiler by
declaring the addr formal parameter with the volatile and __iomem
attributes. In this way it is clear to the compiler that any writes to
addr by code before the start of the asm block must occur before
entering the block, and any accesses to addr by code after the block
must occur after executing the block, so that the use of the "memory"
clobber is unnecessary.

>
>>
>>> For an example function look at bcmgenet_enable_dma(), before the
>>> relaxed variants were removed. When built with gcc12 the code
>>> contains the asm blocks as expected, but then the function is
>>> never called.
>>
>> So it sounds like this is a regression in GCC 12, which IIUC isn't released yet
>> per:
>
> It is NOT a bug in GCC 12. You just depended on behavior which
> accidentally worked in the cases you were looking at. GCC 12 did not
> even change in this area.

GCC 12 should not have changed in this area, but the evidence suggests
that the behavior has in fact changed such that an asm volatile block
can be discarded by the optimizer. This appears unintentional and is
therefore a bug that should be corrected before the toolchain is
released, since it could potentially affect any asm volatile block in
the Linux sources.

Regards,
Doug

>
> Thanks,
> Andrew Pinski
>
>>
>> https://gcc.gnu.org/gcc-12/changes.html
>>
>> ... which says:
>>
>> | Note: GCC 12 has not been released yet
>>
>> Surely we can fix it prior to release?
>>
>> Thanks,
>> Mark.
>>
>>>
>>> Signed-off-by: Jeremy Linton <[email protected]>
>>> ---
>>> arch/arm64/include/asm/io.h | 8 ++++----
>>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
>>> index 7fd836bea7eb..3cceda7948a0 100644
>>> --- a/arch/arm64/include/asm/io.h
>>> +++ b/arch/arm64/include/asm/io.h
>>> @@ -24,25 +24,25 @@
>>> #define __raw_writeb __raw_writeb
>>> static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
>>> {
>>> - asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr));
>>> + asm volatile("strb %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
>>> }
>>>
>>> #define __raw_writew __raw_writew
>>> static inline void __raw_writew(u16 val, volatile void __iomem *addr)
>>> {
>>> - asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr));
>>> + asm volatile("strh %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
>>> }
>>>
>>> #define __raw_writel __raw_writel
>>> static __always_inline void __raw_writel(u32 val, volatile void __iomem *addr)
>>> {
>>> - asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr));
>>> + asm volatile("str %w0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
>>> }
>>>
>>> #define __raw_writeq __raw_writeq
>>> static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
>>> {
>>> - asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
>>> + asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
>>> }
>>>
>>> #define __raw_readb __raw_readb
>>> --
>>> 2.35.1
>>>