2019-12-06 12:47:03

by Michael Ellerman

Subject: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hi Linus,

Please pull another powerpc update for 5.5.

As you'll see from the diffstat this is mostly not powerpc code. In order to do
KASAN instrumentation of bitops we needed to juggle some of the generic bitops
headers.

Because those changes potentially affect several architectures I wasn't
confident putting them directly into my tree, so I've had them sitting in a
topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
month, and I've not had any feedback that it's caused any problems.

So I think this is good to merge, but it's a standalone pull so if anyone does
object it's not a problem.

cheers


The following changes since commit da0c9ea146cbe92b832f1b0f694840ea8eb33cce:

Linux 5.4-rc2 (2019-10-06 14:27:30 -0700)

are available in the git repository at:

https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git tags/powerpc-5.5-2

for you to fetch changes up to 4f4afc2c9599520300b3f2b3666d2034fca03df3:

docs/core-api: Remove possibly confusing sub-headings from Bit Operations (2019-12-04 21:20:28 +1100)

- ------------------------------------------------------------------
powerpc updates for 5.5 #2

A few commits splitting the KASAN instrumented bitops header in
three, to match the split of the asm-generic bitops headers.

This is needed on powerpc because we use asm-generic/bitops/non-atomic.h
for the non-atomic bitops, whereas the existing KASAN instrumented
bitops assume all the underlying operations are provided by the arch
as arch_foo() versions.
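
[ For context, a sketch of the wrapper pattern used by the new
include/asm-generic/bitops/instrumented-atomic.h: each wrapper reports the
access to KASAN and then defers to an arch_*() primitive that the arch is
assumed to provide: ]

static inline void set_bit(long nr, volatile unsigned long *addr)
{
	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
	arch_set_bit(nr, addr);
}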

Thanks to:
Daniel Axtens & Christophe Leroy.

- ------------------------------------------------------------------
Daniel Axtens (2):
kasan: support instrumented bitops combined with generic bitops
powerpc: support KASAN instrumentation of bitops

Michael Ellerman (1):
docs/core-api: Remove possibly confusing sub-headings from Bit Operations


Documentation/core-api/kernel-api.rst | 8 +-
arch/powerpc/include/asm/bitops.h | 51 ++--
arch/s390/include/asm/bitops.h | 4 +-
arch/x86/include/asm/bitops.h | 4 +-
include/asm-generic/bitops-instrumented.h | 263 --------------------
include/asm-generic/bitops/instrumented-atomic.h | 100 ++++++++
include/asm-generic/bitops/instrumented-lock.h | 81 ++++++
include/asm-generic/bitops/instrumented-non-atomic.h | 114 +++++++++
8 files changed, 337 insertions(+), 288 deletions(-)
delete mode 100644 include/asm-generic/bitops-instrumented.h
create mode 100644 include/asm-generic/bitops/instrumented-atomic.h
create mode 100644 include/asm-generic/bitops/instrumented-lock.h
create mode 100644 include/asm-generic/bitops/instrumented-non-atomic.h
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEJFGtCPCthwEv2Y/bUevqPMjhpYAFAl3qSS4ACgkQUevqPMjh
pYCp1Q//TrG2tPMDPHpWqCzNdWoh96zpIo2UsauDcc8l+XT7shkwHcGnpoECgCfK
NjhP77qqXI61E+5qUCfO16/j5g6PbvvG/E/xlQEdgX7lIxBeGs4IkoRU8QjkJ9w5
wAjG/XwaMJ21CQY2F51dn9NPQUvFxKV0o6QJ+/pIFBnv0eeYCtRWno7+tZGIiMhk
ExfJhR0rnBdBc6oonNOTAfWn5u51FRRqUeICeo4iFoICu5v4cTbPiU3/8bZYzhSb
wM9WdG+/IUs02PffIQF4GDyMmzi/Qm3Ujl3tUIEaFHlfN9pF6X7Yog7Co26CShJj
No4wJK5rS3ECXmwo7Yd69sV9FZrMZZvGY9x7p7bEE7mqk1fHMaM3DMXvR8Gx6UGM
NCXX2QIIigz3RUTbj3CW2iZa9R/FTSFXs3Ih4YDDJdPNanYpcX3/wE6mpwsco8do
lxWcN1AMGXLiaNdQ8IkRZ6hOLH/Po34RvDo1P1mS06NzfyyTZW7JNiUtU2HSqPRs
vjIkHDM7585ika6jeDHU4cJaLy7bsCNV2fLsHWDE3Xno43g7qcKGOx+PtO25XubZ
iP1vojR4Qml+e3ySf6dDiOIDltSWZwjCGtbi2gmdErHiLdLeJX2XGjC36Qnep6u6
15HIWzX41tg8y4QRJDmPyeDm3Ccbabz+m4LaccbdObgGWVwxwgA=
=06Wr
-----END PGP SIGNATURE-----


2019-12-06 13:17:56

by Peter Zijlstra

Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)

On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Hi Linus,
>
> Please pull another powerpc update for 5.5.
>
> As you'll see from the diffstat this is mostly not powerpc code. In order to do
> KASAN instrumentation of bitops we needed to juggle some of the generic bitops
> headers.
>
> Because those changes potentially affect several architectures I wasn't
> confident putting them directly into my tree, so I've had them sitting in a
> topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
> month, and I've not had any feedback that it's caused any problems.
>
> So I think this is good to merge, but it's a standalone pull so if anyone does
> object it's not a problem.

No objections, but here:

https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=topic/kasan-bitops&id=81d2c6f81996e01fbcd2b5aeefbb519e21c806e9

you write:

"Currently bitops-instrumented.h assumes that the architecture provides
atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
This is true on x86 and s390, but is not always true: there is a
generic bitops/non-atomic.h header that provides generic non-atomic
operations, and also a generic bitops/lock.h for locking operations."

Is there any actual benefit for PPC in using their own atomic bitops
over bitops/lock.h ? I'm thinking that the generic code is fairly
optimal for most LL/SC architectures.
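
[ For reference, the generic lock.h version is built on
atomic_long_fetch_or_acquire(), with a non-atomic early test; quoted from
include/asm-generic/bitops/lock.h as of v5.5: ]

static inline int test_and_set_bit_lock(unsigned int nr,
					volatile unsigned long *p)
{
	long old;
	unsigned long mask = BIT_MASK(nr);

	p += BIT_WORD(nr);
	if (READ_ONCE(*p) & mask)
		return 1;

	old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
	return !!(old & mask);
}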

I've been meaning to audit the various architectures and move them over,
but alas, it's something I've not yet had time for...

2019-12-06 22:16:09

by pr-tracker-bot

Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)

The pull request you sent on Fri, 06 Dec 2019 23:46:11 +1100:

> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git tags/powerpc-5.5-2

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/43a2898631a8beee66c1d64c1e860f43d96b2e91

Thank you!

--
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker

2019-12-10 05:39:55

by Michael Ellerman

Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)

Peter Zijlstra <[email protected]> writes:
> On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> Hi Linus,
>>
>> Please pull another powerpc update for 5.5.
>>
>> As you'll see from the diffstat this is mostly not powerpc code. In order to do
>> KASAN instrumentation of bitops we needed to juggle some of the generic bitops
>> headers.
>>
>> Because those changes potentially affect several architectures I wasn't
>> confident putting them directly into my tree, so I've had them sitting in a
>> topic branch. That branch (topic/kasan-bitops) has been in linux-next for a
>> month, and I've not had any feedback that it's caused any problems.
>>
>> So I think this is good to merge, but it's a standalone pull so if anyone does
>> object it's not a problem.
>
> No objections, but here:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?h=topic/kasan-bitops&id=81d2c6f81996e01fbcd2b5aeefbb519e21c806e9
>
> you write:
>
> "Currently bitops-instrumented.h assumes that the architecture provides
> atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> This is true on x86 and s390, but is not always true: there is a
> generic bitops/non-atomic.h header that provides generic non-atomic
> operations, and also a generic bitops/lock.h for locking operations."
>
> Is there any actual benefit for PPC in using their own atomic bitops
> over bitops/lock.h ? I'm thinking that the generic code is fairly
> optimal for most LL/SC architectures.

Good question, I'll have a look.

There seems to be confusion about what the type of the bit number is,
which is leading to sign extension in some cases and not others.

eg, comparing the generic clear_bit_unlock() vs ours:

1 c000000000031890 <generic_clear_bit_unlock>: 1 c0000000000319a0 <ppc_clear_bit_unlock>:
2 extsw r3,r3
3 li r10,1
4 srawi r9,r3,6
5 addze r9,r9
6 rlwinm r8,r9,6,0,25
7 extsw r9,r9
8 subf r3,r8,r3
2 rlwinm r9,r3,29,3,28 9 rldicr r9,r9,3,60
10 sld r3,r10,r3
3 add r4,r4,r9 11 add r4,r4,r9
4 lwsync 12 lwsync
5 li r9,-2
6 clrlwi r3,r3,26
7 rotld r3,r9,r3
8 ldarx r9,0,r4 13 ldarx r9,0,r4
9 and r10,r3,r9 14 andc r9,r9,r3
10 stdcx. r10,0,r4 15 stdcx. r9,0,r4
11 bne- <generic_clear_bit_unlock+0x18> 16 bne- <ppc_clear_bit_unlock+0x2c>
12 blr 17 blr

It looks like in actual usage it often doesn't matter, ie. when we pass
a constant bit number it all gets inlined and the compiler works it out.

It looks like the type should be unsigned long?

Documentation/core-api/atomic_ops.rst: void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
arch/mips/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
arch/powerpc/include/asm/bitops.h:static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
arch/riscv/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
arch/s390/include/asm/bitops.h:static inline void arch___clear_bit_unlock(unsigned long nr,
include/asm-generic/bitops/instrumented-lock.h:static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
include/asm-generic/bitops/lock.h:static inline void __clear_bit_unlock(unsigned int nr,

So I guess step one is to convert our versions to use unsigned long, so
we're at least not tripping over that difference when comparing the
assembly.
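
[ A sketch of where the extra sign-extension instructions come from,
assuming the bit number reaches BIT_WORD() as a signed int: ]

#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)

/*
 * With a signed nr the division must round toward zero, so GCC emits the
 * extsw/srawi/addze (sign-extend, arithmetic shift, carry fix-up) sequence
 * seen in the generic version above. With an unsigned nr the same
 * expression compiles to a single logical shift right.
 */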

cheers

2019-12-10 10:16:57

by Peter Zijlstra

Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)

On Tue, Dec 10, 2019 at 04:38:54PM +1100, Michael Ellerman wrote:

> Good question, I'll have a look.
>
> There seems to be confusion about what the type of the bit number is,
> which is leading to sign extension in some cases and not others.

Shiny.

> It looks like the type should be unsigned long?

I'm thinking unsigned makes most sense, I mean, negative bit offsets
should 'work' but that's almost always guaranteed to be an out-of-bound
operation.

As to 'long' vs 'int', I'm not sure, 4G bits is a long bitmap. But I
suppose since the bitmap itself is 'unsigned long', we might as well use
'unsigned long' for the bitnr too.

> Documentation/core-api/atomic_ops.rst: void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
> arch/mips/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
> arch/powerpc/include/asm/bitops.h:static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
> arch/riscv/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
> arch/s390/include/asm/bitops.h:static inline void arch___clear_bit_unlock(unsigned long nr,
> include/asm-generic/bitops/instrumented-lock.h:static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
> include/asm-generic/bitops/lock.h:static inline void __clear_bit_unlock(unsigned int nr,
>
> So I guess step one is to convert our versions to use unsigned long, so
> we're at least not tripping over that difference when comparing the
> assembly.

Yeah, I'll look at fixing the generic code, bitops/atomic.h and
bitops/non-atomic.h don't even agree on the type of bitnr.
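
[ The disagreement being referred to, sketched from the v5.5 asm-generic
headers (signatures from memory, details may differ): ]

static inline void set_bit(unsigned int nr, volatile unsigned long *p);  /* bitops/atomic.h */
static inline void __set_bit(int nr, volatile unsigned long *addr);      /* bitops/non-atomic.h */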

2019-12-11 00:30:23

by Michael Ellerman

Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops)

Peter Zijlstra <[email protected]> writes:
> On Tue, Dec 10, 2019 at 04:38:54PM +1100, Michael Ellerman wrote:
>
>> Good question, I'll have a look.
>>
>> There seems to be confusion about what the type of the bit number is,
>> which is leading to sign extension in some cases and not others.
>
> Shiny.
>
>> It looks like the type should be unsigned long?
>
> I'm thinking unsigned makes most sense, I mean, negative bit offsets
> should 'work' but that's almost always guaranteed to be an out-of-bound
> operation.

Yeah I agree.

> As to 'long' vs 'int', I'm not sure, 4G bits is a long bitmap. But I
> suppose since the bitmap itself is 'unsigned long', we might as well use
> 'unsigned long' for the bitnr too.

4G is a lot of bits, but it's not *that* many.

eg. If we had a bit per 4K page on a 32T machine that would be 8G bits.

So unsigned long seems best.

>> Documentation/core-api/atomic_ops.rst: void __clear_bit_unlock(unsigned long nr, unsigned long *addr);
>> arch/mips/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
>> arch/powerpc/include/asm/bitops.h:static inline void arch___clear_bit_unlock(int nr, volatile unsigned long *addr)
>> arch/riscv/include/asm/bitops.h:static inline void __clear_bit_unlock(unsigned long nr, volatile unsigned long *addr)
>> arch/s390/include/asm/bitops.h:static inline void arch___clear_bit_unlock(unsigned long nr,
>> include/asm-generic/bitops/instrumented-lock.h:static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
>> include/asm-generic/bitops/lock.h:static inline void __clear_bit_unlock(unsigned int nr,
>>
>> So I guess step one is to convert our versions to use unsigned long, so
>> we're at least not tripping over that difference when comparing the
>> assembly.
>
> Yeah, I'll look at fixing the generic code, bitops/atomic.h and
> bitops/non-atomic.h don't even agree on the type of bitnr.

Thanks.

cheers

2019-12-12 05:43:17

by Michael Ellerman

Subject: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

[ trimmed CC a bit ]

Peter Zijlstra <[email protected]> writes:
> On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
...
> you write:
>
> "Currently bitops-instrumented.h assumes that the architecture provides
> atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> This is true on x86 and s390, but is not always true: there is a
> generic bitops/non-atomic.h header that provides generic non-atomic
> operations, and also a generic bitops/lock.h for locking operations."
>
> Is there any actual benefit for PPC in using their own atomic bitops
> over bitops/lock.h ? I'm thinking that the generic code is fairly
> optimal for most LL/SC architectures.

Yes and no :)

Some of the generic versions don't generate good code compared to our
versions, but that's because READ_ONCE() is triggering stack protector
to be enabled.

For example, comparing an out-of-line copy of the generic and ppc
versions of test_and_set_bit_lock():

1 <generic_test_and_set_bit_lock>: 1 <ppc_test_and_set_bit_lock>:
2 addis r2,r12,361
3 addi r2,r2,-4240
4 stdu r1,-48(r1)
5 rlwinm r8,r3,29,3,28
6 clrlwi r10,r3,26 2 rldicl r10,r3,58,6
7 ld r9,3320(r13)
8 std r9,40(r1)
9 li r9,0
10 li r9,1 3 li r9,1
4 clrlwi r3,r3,26
5 rldicr r10,r10,3,60
11 sld r9,r9,r10 6 sld r3,r9,r3
12 add r10,r4,r8 7 add r4,r4,r10
13 ldx r8,r4,r8
14 and. r8,r9,r8
15 bne 34f
16 ldarx r7,0,r10 8 ldarx r9,0,r4,1
17 or r8,r9,r7 9 or r10,r9,r3
18 stdcx. r8,0,r10 10 stdcx. r10,0,r4
19 bne- 16b 11 bne- 8b
20 isync 12 isync
21 and r9,r7,r9 13 and r3,r3,r9
22 addic r7,r9,-1 14 addic r9,r3,-1
23 subfe r7,r7,r9 15 subfe r3,r9,r3
24 ld r9,40(r1)
25 ld r10,3320(r13)
26 xor. r9,r9,r10
27 li r10,0
28 mr r3,r7
29 bne 36f
30 addi r1,r1,48
31 blr 16 blr
32 nop
33 nop
34 li r7,1
35 b 24b
36 mflr r0
37 std r0,64(r1)
38 bl <__stack_chk_fail+0x8>


If you squint, the generated code for the actual logic is pretty similar, but
the stack protector gunk makes a big mess. It's particularly bad here
because the ppc version doesn't even need a stack frame.

I've also confirmed that even when test_and_set_bit_lock() is inlined
into an actual call site the stack protector logic still triggers.

eg, if I make two versions of ext4_resize_begin() which call the generic or ppc
version of test_and_set_bit_lock(), the generic version gets a bunch of extra
stack protector code.

1 c0000000005336e0 <ext4_resize_begin_generic>: 1 c0000000005335b0 <ext4_resize_begin_ppc>:
2 addis r2,r12,281 2 addis r2,r12,281
3 addi r2,r2,-12256 3 addi r2,r2,-11952
4 mflr r0 4 mflr r0
5 bl <_mcount> 5 bl <_mcount>
6 mflr r0 6 mflr r0
7 std r31,-8(r1) 7 std r31,-8(r1)
8 std r30,-16(r1) 8 std r30,-16(r1)
9 mr r31,r3 9 mr r31,r3
10 li r3,24 10 li r3,24
11 std r0,16(r1) 11 std r0,16(r1)
12 stdu r1,-128(r1) 12 stdu r1,-112(r1)
13 ld r9,3320(r13)
14 std r9,104(r1)
15 li r9,0
16 ld r30,920(r31) 13 ld r30,920(r31)
17 bl <capable+0x8> 14 bl <capable+0x8>
18 nop 15 nop
19 cmpdi cr7,r3,0 16 cmpdi cr7,r3,0
20 beq cr7,<ext4_resize_begin_generic+0xf0> 17 beq cr7,<ext4_resize_begin_ppc+0xc0>
21 ld r9,920(r31) 18 ld r9,920(r31)
22 ld r10,96(r30) 19 ld r10,96(r30)
23 lwz r7,84(r30) 20 lwz r7,84(r30)
24 ld r8,104(r9) 21 ld r8,104(r9)
25 ld r10,24(r10) 22 ld r10,24(r10)
26 lwz r8,20(r8) 23 lwz r8,20(r8)
27 srd r10,r10,r7 24 srd r10,r10,r7
28 cmpd cr7,r10,r8 25 cmpd cr7,r10,r8
29 bne cr7,<ext4_resize_begin_generic+0x128> 26 bne cr7,<ext4_resize_begin_ppc+0xf8>
30 lhz r10,160(r9) 27 lhz r10,160(r9)
31 andi. r10,r10,2 28 andi. r10,r10,2
32 bne <ext4_resize_begin_generic+0x100>
33 ld r10,560(r9)
34 andi. r10,r10,1
35 bne <ext4_resize_begin_generic+0xe0> 29 bne <ext4_resize_begin_ppc+0xd0>
36 addi r7,r9,560 30 addi r9,r9,560
37 li r8,1 31 li r10,1
38 ldarx r10,0,r7 32 ldarx r3,0,r9,1
39 or r6,r8,r10 33 or r8,r3,r10
40 stdcx. r6,0,r7 34 stdcx. r8,0,r9
41 bne- <ext4_resize_begin_generic+0x90> 35 bne- <ext4_resize_begin_ppc+0x78>
42 isync 36 isync
37 clrldi r3,r3,63
43 andi. r9,r10,1 38 addi r3,r3,-1
44 li r3,0 39 rlwinm r3,r3,0,27,27
45 bne <ext4_resize_begin_generic+0xe0> 40 addi r3,r3,-16
46 ld r9,104(r1)
47 ld r10,3320(r13)
48 xor. r9,r9,r10
49 li r10,0
50 bne <ext4_resize_begin_generic+0x158>
51 addi r1,r1,128 41 addi r1,r1,112
52 ld r0,16(r1) 42 ld r0,16(r1)
53 ld r30,-16(r1) 43 ld r30,-16(r1)
54 ld r31,-8(r1) 44 ld r31,-8(r1)
55 mtlr r0 45 mtlr r0
56 blr 46 blr
57 nop 47 nop
58 li r3,-16
59 b <ext4_resize_begin_generic+0xb0>
60 nop 48 nop
61 nop 49 nop
62 li r3,-1 50 li r3,-1
63 b <ext4_resize_begin_generic+0xb0> 51 b <ext4_resize_begin_ppc+0x9c>
64 nop 52 nop
65 nop 53 nop
66 addis r6,r2,-118 54 addis r6,r2,-118
67 addis r4,r2,-140 55 addis r4,r2,-140
68 mr r3,r31 56 mr r3,r31
69 li r5,97 57 li r5,46
70 addi r6,r6,30288 58 addi r6,r6,30288
71 addi r4,r4,3064 59 addi r4,r4,3040
72 bl <__ext4_warning+0x8> 60 bl <__ext4_warning+0x8>
73 nop 61 nop
74 li r3,-1 62 li r3,-1
75 b <ext4_resize_begin_generic+0xb0> 63 b <ext4_resize_begin_ppc+0x9c>
76 ld r9,96(r9) 64 ld r9,96(r9)
77 addis r6,r2,-118 65 addis r6,r2,-118
78 addis r4,r2,-140 66 addis r4,r2,-140
79 mr r3,r31 67 mr r3,r31
80 li r5,87 68 li r5,36
81 addi r6,r6,30240 69 addi r6,r6,30240
82 addi r4,r4,3064 70 addi r4,r4,3040
83 ld r7,24(r9) 71 ld r7,24(r9)
84 bl <__ext4_warning+0x8> 72 bl <__ext4_warning+0x8>
85 nop 73 nop
86 li r3,-1 74 li r3,-1
87 b <ext4_resize_begin_generic+0xb0> 75 b <ext4_resize_begin_ppc+0x9c>
88 bl <__stack_chk_fail+0x8>


If I change the READ_ONCE() in test_and_set_bit_lock():

if (READ_ONCE(*p) & mask)
return 1;

to a regular pointer access:

if (*p & mask)
return 1;

Then the generated code looks more or less the same, except for the extra early
return in the generic version of test_and_set_bit_lock(), and different handling
of the return code by the compiler.

1 <ext4_resize_begin_generic>: 1 <ext4_resize_begin_ppc>:
2 addis r2,r12,281 2 addis r2,r12,281
3 addi r2,r2,-12256 3 addi r2,r2,-11952
4 mflr r0 4 mflr r0
5 bl <_mcount> 5 bl <_mcount>
6 mflr r0 6 mflr r0
7 std r31,-8(r1) 7 std r31,-8(r1)
8 std r30,-16(r1) 8 std r30,-16(r1)
9 mr r31,r3 9 mr r31,r3
10 li r3,24 10 li r3,24
11 std r0,16(r1) 11 std r0,16(r1)
12 stdu r1,-112(r1) 12 stdu r1,-112(r1)
13 ld r30,920(r31) 13 ld r30,920(r31)
14 bl <capable+0x8> 14 bl <capable+0x8>
15 nop 15 nop
16 cmpdi cr7,r3,0 16 cmpdi cr7,r3,0
17 beq cr7,<ext4_resize_begin_generic+0xe0> 17 beq cr7,<ext4_resize_begin_ppc+0xc0>
18 ld r9,920(r31) 18 ld r9,920(r31)
19 ld r10,96(r30) 19 ld r10,96(r30)
20 lwz r7,84(r30) 20 lwz r7,84(r30)
21 ld r8,104(r9) 21 ld r8,104(r9)
22 ld r10,24(r10) 22 ld r10,24(r10)
23 lwz r8,20(r8) 23 lwz r8,20(r8)
24 srd r10,r10,r7 24 srd r10,r10,r7
25 cmpd cr7,r10,r8 25 cmpd cr7,r10,r8
26 bne cr7,<ext4_resize_begin_generic+0x118> 26 bne cr7,<ext4_resize_begin_ppc+0xf8>
27 lhz r10,160(r9) 27 lhz r10,160(r9)
28 andi. r10,r10,2 28 andi. r10,r10,2
29 bne <ext4_resize_begin_generic+0xf0> 29 bne <ext4_resize_begin_ppc+0xd0>
30 ld r10,560(r9)
31 andi. r10,r10,1
32 bne <ext4_resize_begin_generic+0xc0>
33 addi r7,r9,560 30 addi r9,r9,560
34 li r8,1 31 li r10,1
35 ldarx r10,0,r7 32 ldarx r3,0,r9,1
36 or r6,r8,r10 33 or r8,r3,r10
37 stdcx. r6,0,r7 34 stdcx. r8,0,r9
38 bne- <ext4_resize_begin_generic+0x84> 35 bne- <ext4_resize_begin_ppc+0x78>
39 isync 36 isync
37 clrldi r3,r3,63
40 andi. r9,r10,1 38 addi r3,r3,-1
41 li r3,0 39 rlwinm r3,r3,0,27,27
42 bne <ext4_resize_begin_generic+0xc0> 40 addi r3,r3,-16
43 addi r1,r1,112 41 addi r1,r1,112
44 ld r0,16(r1) 42 ld r0,16(r1)
45 ld r30,-16(r1) 43 ld r30,-16(r1)
46 ld r31,-8(r1) 44 ld r31,-8(r1)
47 mtlr r0 45 mtlr r0
48 blr 46 blr
49 nop 47 nop
50 addi r1,r1,112 48 nop
51 li r3,-16
52 ld r0,16(r1)
53 ld r30,-16(r1)
54 ld r31,-8(r1)
55 mtlr r0
56 blr
57 nop 49 nop
58 li r3,-1 50 li r3,-1
59 b <ext4_resize_begin_generic+0xa4> 51 b <ext4_resize_begin_ppc+0x9c>
60 nop 52 nop
61 nop 53 nop
62 addis r6,r2,-118 54 addis r6,r2,-118
63 addis r4,r2,-140 55 addis r4,r2,-140
64 mr r3,r31 56 mr r3,r31
65 li r5,97 57 li r5,46
66 addi r6,r6,30288 58 addi r6,r6,30288
67 addi r4,r4,3064 59 addi r4,r4,3040
68 bl <__ext4_warning+0x8> 60 bl <__ext4_warning+0x8>
69 nop 61 nop
70 li r3,-1 62 li r3,-1
71 b <ext4_resize_begin_generic+0xa4> 63 b <ext4_resize_begin_ppc+0x9c>
72 ld r9,96(r9) 64 ld r9,96(r9)
73 addis r6,r2,-118 65 addis r6,r2,-118
74 addis r4,r2,-140 66 addis r4,r2,-140
75 mr r3,r31 67 mr r3,r31
76 li r5,87 68 li r5,36
77 addi r6,r6,30240 69 addi r6,r6,30240
78 addi r4,r4,3064 70 addi r4,r4,3040
79 ld r7,24(r9) 71 ld r7,24(r9)
80 bl <__ext4_warning+0x8> 72 bl <__ext4_warning+0x8>
81 nop 73 nop
82 li r3,-1 74 li r3,-1
83 b <ext4_resize_begin_generic+0xa4> 75 b <ext4_resize_begin_ppc+0x9c>


So READ_ONCE() + STACKPROTECTOR_STRONG is problematic. The root cause is
presumably that READ_ONCE() does an access to an on-stack variable,
which triggers the heuristics in the compiler that the stack needs
protecting.

It seems like a compiler "mis-feature" that a constant-sized access to the stack
triggers the stack protector logic, especially when the access is eventually
optimised away. But I guess that's probably what we get for doing tricks like
READ_ONCE() in the first place :/

I tried going back to the version of READ_ONCE() that doesn't use a
union, ie. effectively reverting dd36929720f4 ("kernel: make READ_ONCE()
valid on const arguments") to get:

#define READ_ONCE(x) \
({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })

But it makes no difference, the stack protector stuff still triggers. So
I guess it's simply taking the address of a stack variable that triggers
it.

There seems to be a function attribute to enable stack protector for a
function, but not one to disable it:
https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/Common-Function-Attributes.html#index-stack_005fprotect-function-attribute

That may not be a good solution even if it did exist, because it would
potentially disable stack protector in places where we do want it
enabled.

cheers

2019-12-12 08:02:23

by Peter Zijlstra

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> [ trimmed CC a bit ]
>
> Peter Zijlstra <[email protected]> writes:
> > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> ...
> > you write:
> >
> > "Currently bitops-instrumented.h assumes that the architecture provides
> > atomic, non-atomic and locking bitops (e.g. both set_bit and __set_bit).
> > This is true on x86 and s390, but is not always true: there is a
> > generic bitops/non-atomic.h header that provides generic non-atomic
> > operations, and also a generic bitops/lock.h for locking operations."
> >
> > Is there any actual benefit for PPC in using their own atomic bitops
> > over bitops/lock.h ? I'm thinking that the generic code is fairly
> > optimal for most LL/SC architectures.
>
> Yes and no :)
>
> Some of the generic versions don't generate good code compared to our
> versions, but that's because READ_ONCE() is triggering stack protector
> to be enabled.

Bah, there's never anything simple, is there :/

> For example, comparing an out-of-line copy of the generic and ppc
> versions of test_and_set_bit_lock():
>
> 1 <generic_test_and_set_bit_lock>: 1 <ppc_test_and_set_bit_lock>:
> 2 addis r2,r12,361
> 3 addi r2,r2,-4240
> 4 stdu r1,-48(r1)
> 5 rlwinm r8,r3,29,3,28
> 6 clrlwi r10,r3,26 2 rldicl r10,r3,58,6
> 7 ld r9,3320(r13)
> 8 std r9,40(r1)
> 9 li r9,0
> 10 li r9,1 3 li r9,1
> 4 clrlwi r3,r3,26
> 5 rldicr r10,r10,3,60
> 11 sld r9,r9,r10 6 sld r3,r9,r3
> 12 add r10,r4,r8 7 add r4,r4,r10
> 13 ldx r8,r4,r8
> 14 and. r8,r9,r8
> 15 bne 34f
> 16 ldarx r7,0,r10 8 ldarx r9,0,r4,1
> 17 or r8,r9,r7 9 or r10,r9,r3
> 18 stdcx. r8,0,r10 10 stdcx. r10,0,r4
> 19 bne- 16b 11 bne- 8b
> 20 isync 12 isync
> 21 and r9,r7,r9 13 and r3,r3,r9
> 22 addic r7,r9,-1 14 addic r9,r3,-1
> 23 subfe r7,r7,r9 15 subfe r3,r9,r3
> 24 ld r9,40(r1)
> 25 ld r10,3320(r13)
> 26 xor. r9,r9,r10
> 27 li r10,0
> 28 mr r3,r7
> 29 bne 36f
> 30 addi r1,r1,48
> 31 blr 16 blr
> 32 nop
> 33 nop
> 34 li r7,1
> 35 b 24b
> 36 mflr r0
> 37 std r0,64(r1)
> 38 bl <__stack_chk_fail+0x8>
>
>
> If you squint, the generated code for the actual logic is pretty similar, but
> the stack protector gunk makes a big mess. It's particularly bad here
> because the ppc version doesn't even need a stack frame.
>
> I've also confirmed that even when test_and_set_bit_lock() is inlined
> into an actual call site the stack protector logic still triggers.

> If I change the READ_ONCE() in test_and_set_bit_lock():
>
> if (READ_ONCE(*p) & mask)
> return 1;
>
> to a regular pointer access:
>
> if (*p & mask)
> return 1;
>
> Then the generated code looks more or less the same, except for the extra early
> return in the generic version of test_and_set_bit_lock(), and different handling
> of the return code by the compiler.

So given that the function signature is:

static inline int test_and_set_bit_lock(unsigned int nr,
volatile unsigned long *p)

@p already carries the required volatile qualifier, so READ_ONCE() does
not add anything here (except for easier to read code and poor code
generation).
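
[ Illustrative sketch of that point: a load through a volatile-qualified
pointer is already an access the compiler must emit exactly once, which is
all READ_ONCE() adds for a scalar type: ]

unsigned long load_via_qualifier(volatile unsigned long *p)
{
	return *p;		/* volatile load via the parameter's qualifier */
}

unsigned long load_via_read_once(unsigned long *p)
{
	return READ_ONCE(*p);	/* <linux/compiler.h>: same effect, the
				 * volatile is bolted on at the use site */
}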

So your proposed change _should_ be fine. Will, I'm assuming you never
saw this on your ARGH64 builds when you did this code ?

---
diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index dd90c9792909..10264e8808f8 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -35,7 +35,7 @@ static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);

p += BIT_WORD(nr);
- if (READ_ONCE(*p) & mask)
+ if (*p & mask)
return 1;

old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
@@ -48,7 +48,7 @@ static inline int test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);

p += BIT_WORD(nr);
- if (!(READ_ONCE(*p) & mask))
+ if (!(*p & mask))
return 0;

old = atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 3ae021368f48..9baf0a0055b8 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -22,7 +22,7 @@ static inline int test_and_set_bit_lock(unsigned int nr,
unsigned long mask = BIT_MASK(nr);

p += BIT_WORD(nr);
- if (READ_ONCE(*p) & mask)
+ if (*p & mask)
return 1;

old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
@@ -60,7 +60,7 @@ static inline void __clear_bit_unlock(unsigned int nr,
unsigned long old;

p += BIT_WORD(nr);
- old = READ_ONCE(*p);
+ old = *p;
old &= ~BIT_MASK(nr);
atomic_long_set_release((atomic_long_t *)p, old);
}

2019-12-12 10:09:03

by Will Deacon

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 09:01:05AM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> > Peter Zijlstra <[email protected]> writes:
> > > On Fri, Dec 06, 2019 at 11:46:11PM +1100, Michael Ellerman wrote:
> > Some of the generic versions don't generate good code compared to our
> > versions, but that's because READ_ONCE() is triggering stack protector
> > to be enabled.
>
> Bah, there's never anything simple, is there :/
>
> > For example, comparing an out-of-line copy of the generic and ppc
> > versions of test_and_set_bit_lock():
> >
> > 1 <generic_test_and_set_bit_lock>: 1 <ppc_test_and_set_bit_lock>:
> > 2 addis r2,r12,361
> > 3 addi r2,r2,-4240
> > 4 stdu r1,-48(r1)
> > 5 rlwinm r8,r3,29,3,28
> > 6 clrlwi r10,r3,26 2 rldicl r10,r3,58,6
> > 7 ld r9,3320(r13)
> > 8 std r9,40(r1)
> > 9 li r9,0
> > 10 li r9,1 3 li r9,1
> > 4 clrlwi r3,r3,26
> > 5 rldicr r10,r10,3,60
> > 11 sld r9,r9,r10 6 sld r3,r9,r3
> > 12 add r10,r4,r8 7 add r4,r4,r10
> > 13 ldx r8,r4,r8
> > 14 and. r8,r9,r8
> > 15 bne 34f
> > 16 ldarx r7,0,r10 8 ldarx r9,0,r4,1
> > 17 or r8,r9,r7 9 or r10,r9,r3
> > 18 stdcx. r8,0,r10 10 stdcx. r10,0,r4
> > 19 bne- 16b 11 bne- 8b
> > 20 isync 12 isync
> > 21 and r9,r7,r9 13 and r3,r3,r9
> > 22 addic r7,r9,-1 14 addic r9,r3,-1
> > 23 subfe r7,r7,r9 15 subfe r3,r9,r3
> > 24 ld r9,40(r1)
> > 25 ld r10,3320(r13)
> > 26 xor. r9,r9,r10
> > 27 li r10,0
> > 28 mr r3,r7
> > 29 bne 36f
> > 30 addi r1,r1,48
> > 31 blr 16 blr
> > 32 nop
> > 33 nop
> > 34 li r7,1
> > 35 b 24b
> > 36 mflr r0
> > 37 std r0,64(r1)
> > 38 bl <__stack_chk_fail+0x8>
> >
> >
> > If you squint, the generated code for the actual logic is pretty similar, but
> > the stack protector gunk makes a big mess. It's particularly bad here
> > because the ppc version doesn't even need a stack frame.
> >
> > I've also confirmed that even when test_and_set_bit_lock() is inlined
> > into an actual call site the stack protector logic still triggers.
>
> > If I change the READ_ONCE() in test_and_set_bit_lock():
> >
> > if (READ_ONCE(*p) & mask)
> > return 1;
> >
> > to a regular pointer access:
> >
> > if (*p & mask)
> > return 1;
> >
> > Then the generated code looks more or less the same, except for the extra early
> > return in the generic version of test_and_set_bit_lock(), and different handling
> > of the return code by the compiler.
>
> So given that the function signature is:
>
> static inline int test_and_set_bit_lock(unsigned int nr,
> volatile unsigned long *p)
>
> @p already carries the required volatile qualifier, so READ_ONCE() does
> not add anything here (except for easier to read code and poor code
> generation).
>
> So your proposed change _should_ be fine. Will, I'm assuming you never
> saw this on your ARGH64 builds when you did this code ?

I did see it, but (a) looking at the code out-of-line makes it look a lot
worse than it actually is (so the ext4 example is really helpful -- thanks
Michael!) and (b) I chalked it up to a crappy compiler.

However, see this comment from Arnd on my READ_ONCE series from the other
day:

https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com

In which case, I'm thinking that we should be doing better in READ_ONCE()
for non-buggy compilers which would also keep the KCSAN folks happy for this
code (and would help with [1] too).

Will

[1] https://lkml.org/lkml/2019/11/12/898

2019-12-12 10:47:46

by Peter Zijlstra

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:

> > So your proposed change _should_ be fine. Will, I'm assuming you never
> > saw this on your ARGH64 builds when you did this code ?
>
> I did see it, but (a) looking at the code out-of-line makes it look a lot
> worse than it actually is (so the ext4 example is really helpful -- thanks
> Michael!) and (b) I chalked it up to a crappy compiler.
>
> However, see this comment from Arnd on my READ_ONCE series from the other
> day:
>
> https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
>
> In which case, I'm thinking that we should be doing better in READ_ONCE()
> for non-buggy compilers which would also keep the KCSAN folks happy for this
> code (and would help with [1] too).

So something like this then? Although I suppose that should be moved
into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

---

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ad8c76144a3c..8326e2cf28b4 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,20 +179,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,

#include <uapi/linux/types.h>
#include <linux/kcsan-checks.h>
-
-#define __READ_ONCE_SIZE \
-({ \
- switch (size) { \
- case 1: *(__u8 *)res = *(volatile __u8 *)p; break; \
- case 2: *(__u16 *)res = *(volatile __u16 *)p; break; \
- case 4: *(__u32 *)res = *(volatile __u32 *)p; break; \
- case 8: *(__u64 *)res = *(volatile __u64 *)p; break; \
- default: \
- barrier(); \
- __builtin_memcpy((void *)res, (const void *)p, size); \
- barrier(); \
- } \
-})
+#include <asm/barrier.h>
+#include <linux/kasan-checks.h>

#ifdef CONFIG_KASAN
/*
@@ -222,6 +210,22 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
#define __no_sanitize_or_inline __always_inline
#endif

+#if GCC_VERSION < 40800
+
+#define __READ_ONCE_SIZE \
+({ \
+ switch (size) { \
+ case 1: *(__u8 *)res = *(volatile __u8 *)p; break; \
+ case 2: *(__u16 *)res = *(volatile __u16 *)p; break; \
+ case 4: *(__u32 *)res = *(volatile __u32 *)p; break; \
+ case 8: *(__u64 *)res = *(volatile __u64 *)p; break; \
+ default: \
+ barrier(); \
+ __builtin_memcpy((void *)res, (const void *)p, size); \
+ barrier(); \
+ } \
+})
+
static __no_kcsan_or_inline
void __read_once_size(const volatile void *p, void *res, int size)
{
@@ -274,9 +278,6 @@ void __write_once_size(volatile void *p, void *res, int size)
* with an explicit memory barrier or atomic instruction that provides the
* required ordering.
*/
-#include <asm/barrier.h>
-#include <linux/kasan-checks.h>
-
#define __READ_ONCE(x, check) \
({ \
union { typeof(x) __val; char __c[1]; } __u; \
@@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
*/
#define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)

+#else /* GCC_VERSION < 40800 */
+
+#define READ_ONCE_NOCHECK(x) \
+({ \
+ typeof(x) __x = *(volatile typeof(x) *)&(x); \
+ smp_read_barrier_depends(); \
+ __x; \
+})
+
+#define READ_ONCE(x) \
+({ \
+ kcsan_check_atomic_read(&(x), sizeof(x)); \
+ READ_ONCE_NOCHECK(x); \
+})
+
+#endif /* GCC_VERSION < 40800 */
+
static __no_kasan_or_inline
unsigned long read_word_at_a_time(const void *addr)
{

2019-12-12 15:12:49

by Segher Boessenkool

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

Hi,

On Thu, Dec 12, 2019 at 04:42:13PM +1100, Michael Ellerman wrote:
> Some of the generic versions don't generate good code compared to our
> versions, but that's because READ_ONCE() is triggering stack protector
> to be enabled.

The *big* difference is that the generic code has a special path that does
not do an atomic access at all. Whether or not that is a good idea, we
probably should not change the behaviour here, not without benchmarking
anyway.

> For example, comparing an out-of-line copy of the generic and ppc
> versions of test_and_set_bit_lock():

(With what GCC version, and what exact flags?)

(A stand-alone testcase would be nice too, btw).

(Michael gave me one, thanks!)

> If you squint, the generated code for the actual logic is pretty similar, but
> the stack protector gunk makes a big mess.

And with stack protector it cannot shrink-wrap the exit, one of the bigger
performance costs of the stack protector. The extra branch in the generic
code isn't fun either (but maybe it is good for performance?)

> It's particularly bad here
> because the ppc version doesn't even need a stack frame.

You are hit by this:

if (... || (RECORD_OR_UNION_TYPE_P (var_type)
&& record_or_union_type_has_array_p (var_type)) ...)

(in the GCC code, stack_protect_decl_p (), cfgexpand.c)

for the variable __u from

#define __READ_ONCE(x, check) \
({ \
union { typeof(x) __val; char __c[1]; } __u; \
__read_once_size(&(x), __u.__c, sizeof(x)); \
smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
__u.__val; \
})

This is all optimised away later, but at the point this decision is made
GCC does not know that.
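
[ A stand-alone reproducer sketch mimicking __READ_ONCE (hypothetical;
whether the protector actually fires depends on GCC version, try
gcc -O2 -fstack-protector-strong): ]

static inline void copy_once(const volatile void *p, void *res)
{
	*(unsigned long *)res = *(const volatile unsigned long *)p;
}

unsigned long f(unsigned long *x)
{
	/* the char array member makes the union "protectable" */
	union { unsigned long v; char c[1]; } u;

	copy_once(x, u.c);	/* u.c has its address taken, like __u.__c */
	return u.v;
}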

> So READ_ONCE() + STACKPROTECTOR_STRONG is problematic. The root cause is
> presumably that READ_ONCE() does an access to an on-stack variable,
> which triggers the heuristics in the compiler that the stack needs
> protecting.

Not exactly, but the problem is READ_ONCE alright.

> It seems like a compiler "mis-feature" that a constant-sized access to the stack
> triggers the stack protector logic, especially when the access is eventually
> optimised away. But I guess that's probably what we get for doing tricks like
> READ_ONCE() in the first place :/

__c is an array. That is all that matters. I don't think it is very
reasonable to fault GCC for this.

> I tried going back to the version of READ_ONCE() that doesn't use a
> union, ie. effectively reverting dd36929720f4 ("kernel: make READ_ONCE()
> valid on const arguments") to get:
>
> #define READ_ONCE(x) \
> ({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })

With that, it is that the address of __val is taken:

...
|| TREE_ADDRESSABLE (var)
...

> But it makes no difference, the stack protector stuff still triggers. So
> I guess it's simply taking the address of a stack variable that triggers
> it.

Not in the earlier testcase. Btw, there is no such thing as a "stack
variable" at that point in the compiler: it just is a local var.

> There seems to be a function attribute to enable stack protector for a
> function, but not one to disable it:
> https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/Common-Function-Attributes.html#index-stack_005fprotect-function-attribute

Yes.

> That may not be a good solution even if it did exist, because it would
> potentially disable stack protector in places where we do want it
> enabled.

Right, I don't think we want that, such an attribute invites people to
write dangerous code. (You already can just put the functions that you
want to be unsafe in a separate source file... It sounds even sillier
that way, heh).


Segher

2019-12-12 17:06:00

by Will Deacon

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 11:46:10AM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:
>
> > > So your proposed change _should_ be fine. Will, I'm assuming you never
> > > saw this on your ARGH64 builds when you did this code ?
> >
> > I did see it, but (a) looking at the code out-of-line makes it look a lot
> > worse than it actually is (so the ext4 example is really helpful -- thanks
> > Michael!) and (b) I chalked it up to a crappy compiler.
> >
> > However, see this comment from Arnd on my READ_ONCE series from the other
> > day:
> >
> > https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
> >
> > In which case, I'm thinking that we should be doing better in READ_ONCE()
> > for non-buggy compilers which would also keep the KCSAN folks happy for this
> > code (and would help with [1] too).
>
> So something like this then? Although I suppose that should be moved
> into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

Ah wait, I think we've been looking at this wrong. The volatile pointer
argument is actually the problem here, not READ_ONCE()! The use of typeof()
means that the temporary variable to which __READ_ONCE_SIZE writes ends up
being a volatile store, so it can't be optimised away. This is why we get
a stack access and why stack protector then wrecks the codegen for us.
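
[ A sketch of the typeof() pitfall being described, illustrative only: ]

void example(volatile unsigned long *ptr, unsigned long val)
{
	typeof(*ptr) tmp = val;	/* tmp is *volatile* unsigned long */

	*ptr = tmp;		/* tmp is itself a volatile object, so its
				 * stack slot and accesses cannot be
				 * optimised away */
}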

I'll cook a patch getting rid of those volatiles.

Will

2019-12-12 17:18:49

by Will Deacon

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 05:04:27PM +0000, Will Deacon wrote:
> On Thu, Dec 12, 2019 at 11:46:10AM +0100, Peter Zijlstra wrote:
> > On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:
> >
> > > > So your proposed change _should_ be fine. Will, I'm assuming you never
> > > > saw this on your ARGH64 builds when you did this code ?
> > >
> > > I did see it, but (a) looking at the code out-of-line makes it look a lot
> > > worse than it actually is (so the ext4 example is really helpful -- thanks
> > > Michael!) and (b) I chalked it up to a crappy compiler.
> > >
> > > However, see this comment from Arnd on my READ_ONCE series from the other
> > > day:
> > >
> > > https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
> > >
> > > In which case, I'm thinking that we should be doing better in READ_ONCE()
> > > for non-buggy compilers which would also keep the KCSAN folks happy for this
> > > code (and would help with [1] too).
> >
> > So something like this then? Although I suppose that should be moved
> > into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.
>
> Ah wait, I think we've been looking at this wrong. The volatile pointer
> argument is actually the problem here, not READ_ONCE()! The use of typeof()
> means that the temporary variable to which __READ_ONCE_SIZE writes ends up
> being a volatile store, so it can't be optimised away. This is why we get
> a stack access and why stack protector then wrecks the codegen for us.

Hmm, it's actually probably the volatile read which is causing the problem,
since __READ_ONCE_SIZE has cast that away and just uses "void *", but you
get the idea.

Will

2019-12-12 17:43:59

by Linus Torvalds

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra <[email protected]> wrote:
>
> +#if GCC_VERSION < 40800

Where does that 4.8 version check come from, and why?

Yeah, I know, but this really wants a comment. Sadly it looks like gcc
bugzilla is down, so

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145

currently gives an "Internal Server Error" for me.

[ Delete the horrid code we have because of gcc bugs ]

> +#else /* GCC_VERSION < 40800 */
> +
> +#define READ_ONCE_NOCHECK(x) \
> +({ \
> + typeof(x) __x = *(volatile typeof(x) *)&(x); \

I think we can/should just do this unconditionally if it helps the issue.

Maybe add a warning about how gcc < 4.8 might mis-compile the kernel -
those versions are getting close to being unacceptable for kernel
builds anyway.

We could also look at being stricter for the normal READ/WRITE_ONCE(),
and require that they are

(a) regular integer types

(b) fit in an atomic word

We actually did (b) for a while, until we noticed that we do it on
loff_t's etc and relaxed the rules. But maybe we could have a
"non-atomic" version of READ/WRITE_ONCE() that is used for the
questionable cases?

Linus

2019-12-12 17:52:16

by Segher Boessenkool

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
> bugzilla is down, so
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
>
> currently gives an "Internal Server Error" for me.

We're being DoSsed again. Reload, it will work after a while :-/


Segher

2019-12-12 18:07:42

by Will Deacon

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra <[email protected]> wrote:
> >
> > +#if GCC_VERSION < 40800
>
> Where does that 4.8 version check come from, and why?
>
> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
> bugzilla is down, so
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
>
> currently gives an "Internal Server Error" for me.
>
> [ Delete the horrid code we have because of gcc bugs ]
>
> > +#else /* GCC_VERSION < 40800 */
> > +
> > +#define READ_ONCE_NOCHECK(x) \
> > +({ \
> > + typeof(x) __x = *(volatile typeof(x) *)&(x); \
>
> I think we can/should just do this unconditionally if it helps the issue.

I'm currently trying to solve the issue by removing volatile from the bitop
function signatures, but it's grotty because there are quite a few callers
to fix up. I'm still trying to do it, because removing volatile fields from
structures is generally a "good thing", but I'd be keen to simplify
READ_ONCE() as you suggest regardless.

> Maybe add a warning about how gcc < 4.8 might mis-compile the kernel -
> those versions are getting close to being unacceptable for kernel
> builds anyway.
>
> We could also look at being stricter for the normal READ/WRITE_ONCE(),
> and require that they are
>
> (a) regular integer types
>
> (b) fit in an atomic word
>
> We actually did (b) for a while, until we noticed that we do it on
> loff_t's etc and relaxed the rules. But maybe we could have a
> "non-atomic" version of READ/WRITE_ONCE() that is used for the
> questionable cases?

That makes a lot of sense to me, and it would allow us to use
compiletime_assert_atomic_type() as we do for the acquire/release accessors.
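
[ For reference, quoting those helpers from the compiler headers circa
v5.5 (from memory): ]

#define __native_word(t) \
	(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
	 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))

#define compiletime_assert_atomic_type(t) \
	compiletime_assert(__native_word(t), \
		"Need native word sized stores/loads for atomicity.")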

Will

2019-12-12 18:31:02

by Christian Borntraeger

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))



On 12.12.19 19:06, Will Deacon wrote:
> On Thu, Dec 12, 2019 at 09:41:32AM -0800, Linus Torvalds wrote:
>> On Thu, Dec 12, 2019 at 2:46 AM Peter Zijlstra <[email protected]> wrote:
>>>
>>> +#if GCC_VERSION < 40800
>>
>> Where does that 4.8 version check come from, and why?
>>
>> Yeah, I know, but this really wants a comment. Sadly it looks like gcc
>> bugzilla is down, so
>>
>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145
>>
>> currently gives an "Internal Server Error" for me.
>>
>> [ Delete the horrid code we have because of gcc bugs ]
>>
>>> +#else /* GCC_VERSION < 40800 */
>>> +
>>> +#define READ_ONCE_NOCHECK(x) \
>>> +({ \
>>> + typeof(x) __x = *(volatile typeof(x) *)&(x); \
>>
>> I think we can/should just do this unconditionally if it helps the issue.
>
> I'm currently trying to solve the issue by removing volatile from the bitop
> function signatures, but it's grotty because there are quite a few callers
> to fix up. I'm still trying to do it, because removing volatile fields from
> structures is generally a "good thing", but I'd be keen to simplify
> READ_ONCE() as you suggest regardless.

As I am the one who added the foundation of READ_ONCE's ugliness, I am now in
favour of re-simplifying it again. At first I was a bit scared about re-introducing
bugs, but the gcc testsuite has this particular case covered, so hopefully we
should not see the issue with volatile and aggregate types again.

Christian

2019-12-12 18:45:06

by Linus Torvalds

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 10:06 AM Will Deacon <[email protected]> wrote:
>
> I'm currently trying to solve the issue by removing volatile from the bitop
> function signatures

I really think that's the wrong thing to do.

The bitop signature really should be "volatile" (and it should be
"const volatile" for test_bit, but I'm not sure anybody cares).

Exactly because it's simply valid to say "hey, my data is volatile,
but do an atomic test of this bit". So it might be volatile in the
caller.

Now, I generally frown on actual volatile data structures - because
the data structure volatility often depends on _context_. The same
data might be volatile in one context (when you do some optimistic
test on it without locking), but 100% stable in another (when you do
have a lock).

So I don't want to see "volatile" on data definitions ("jiffies" being
the one traditional exception), but marking things volatile in code
(because you know you're working with unlocked data) and then passing
them down to various helper functions - including the bitops ones - is
quite traditional and accepted.

In other words, 'volatile" should be treated the same way "const" is
largely treated in C.

A pointer to "const" data doesn't mean that the data is read-only, or
that it cannot be modified _elsewhere_, it means that within this
particular context and this copy of the pointer we promise not to
write to it.

Similarly, a pointer to "volatile" data doesn't mean that the data
might not be stable once you take a lock, for example. So it's ok to
have volatile pointers even if the data declaration itself isn't
volatile - you're stating something about the context, not something
fundamental about the data.

And in the context of the bit operations, "volatile" is the correct thing to do.
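
[ A sketch of that convention, assuming the usual set_bit()/__set_bit()
prototypes: ]

static unsigned long flags[1];		/* the data itself is not volatile */

void under_lock(void)
{
	__set_bit(0, flags);	/* stable context: non-atomic op is fine */
}

void without_lock(void)
{
	set_bit(0, flags);	/* racy context: the volatile in set_bit()'s
				 * prototype says something about this call
				 * site, not about the data */
}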

Linus

2019-12-12 19:34:54

by Will Deacon

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

Hi Linus,

On Thu, Dec 12, 2019 at 10:43:05AM -0800, Linus Torvalds wrote:
> On Thu, Dec 12, 2019 at 10:06 AM Will Deacon <[email protected]> wrote:
> >
> > I'm currently trying to solve the issue by removing volatile from the bitop
> > function signatures
>
> I really think that's the wrong thing to do.
>
> The bitop signature really should be "volatile" (and it should be
> "const volatile" for test_bit, but I'm not sure anybody cares).

Agreed on the "const" part, although I do think the "volatile" aspect has
nasty side-effects despite being a visual indicator that we're eliding
locks. More below.

> Exactly because it's simply valid to say "hey, my data is volatile,
> but do an atomic test of this bit". So it might be volatile in the
> caller.

That's fair, although the cases I've run into so far for the bitops are
usually just that the functions have been wrapped, and volatile could easily
be dropped from the caller as well (e.g. assign_bit(), __node_clear(),
linkmode_test_bit()).

> Now, I generally frown on actual volatile data structures - because
> the data structure volatility often depends on _context_. The same
> data might be volatile in one context (when you do some optimistic
> test on it without locking), but 100% stable in another (when you do
> have a lock).

There are cases in driver code where it looks as though data members are
being declared volatile specifically because of the bitops type signatures
(e.g. 'wrapped' in 'struct mdp5_mdss', 'context_flag' in 'struct
drm_device', 'state' in 'struct s2io_nic'). Yeah, it's bogus, but I think
that having the modifier in the function signature is still leading people
astray.

> So I don't want to see "volatile" on data definitions ("jiffies" being
> the one traditional exception), but marking things volatile in code
> (because you know you're working with unlocked data) and then passing
> them down to various helper functions - including the bitops ones - is
> quite traditional and accepted.
>
> In other words, 'volatile" should be treated the same way "const" is
> largely treated in C.
>
> A pointer to "const" data doesn't mean that the data is read-only, or
> that it cannot be modified _elsewhere_, it means that within this
> particular context and this copy of the pointer we promise not to
> write to it.
>
> Similarly, a pointer to "volatile" data doesn't mean that the data
> might not be stable once you take a lock, for example. So it's ok to
> have volatile pointers even if the data declaration itself isn't
> volatile - you're stating something about the context, not something
> fundamental about the data.
>
> And in the context of the bit operations, "volatile" is the correct thing
> to do.

The root of my concern in all of this, and what started me looking at it in
the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
for a pointer means that local variables in macros declared using typeof()
suddenly start generating *hideous* code, particularly when pointless stack
spills get stackprotector all excited. Even if we simplify READ_ONCE() back
to its old incantation, the acquire/release accessors will have the exact
same issues on architectures that implement them.

For example, consider this code on arm64:

void ool_store_release(unsigned long *ptr, unsigned long val)
{
smp_store_release(ptr, val);
}

This compiles to a single instruction plus return, which is what we want:

0000000000000000 <ool_store_release>:
0: c89ffc01 stlr x1, [x0]
4: d65f03c0 ret

Now, see what happens if we make the 'ptr' argument volatile:

void ool_store_release(volatile unsigned long *ptr, unsigned long val)
{
smp_store_release(ptr, val);
}

0000000000000000 <ool_store_release>:
0: a9be7bfd stp x29, x30, [sp, #-32]!
4: 90000002 adrp x2, 0 <__stack_chk_guard>
8: 91000042 add x2, x2, #0x0
c: 910003fd mov x29, sp
10: f9400043 ldr x3, [x2]
14: f9000fa3 str x3, [x29, #24]
18: d2800003 mov x3, #0x0 // #0
1c: c89ffc01 stlr x1, [x0]
20: f9400fa1 ldr x1, [x29, #24]
24: f9400040 ldr x0, [x2]
28: ca000020 eor x0, x1, x0
2c: b5000060 cbnz x0, 38 <ool_store_release+0x38>
30: a8c27bfd ldp x29, x30, [sp], #32
34: d65f03c0 ret
38: 94000000 bl 0 <__stack_chk_fail>

It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
I was looking at getting rid of volatile where it's not strictly needed.
I'm certainly open to other suggestions, I just haven't managed to think
of anything else.

Will

2019-12-12 20:24:12

by Peter Zijlstra

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 07:34:01PM +0000, Will Deacon wrote:
> void ool_store_release(volatile unsigned long *ptr, unsigned long val)
> {
> smp_store_release(ptr, val);
> }
>
> 0000000000000000 <ool_store_release>:
> 0: a9be7bfd stp x29, x30, [sp, #-32]!
> 4: 90000002 adrp x2, 0 <__stack_chk_guard>
> 8: 91000042 add x2, x2, #0x0
> c: 910003fd mov x29, sp
> 10: f9400043 ldr x3, [x2]
> 14: f9000fa3 str x3, [x29, #24]
> 18: d2800003 mov x3, #0x0 // #0
> 1c: c89ffc01 stlr x1, [x0]
> 20: f9400fa1 ldr x1, [x29, #24]
> 24: f9400040 ldr x0, [x2]
> 28: ca000020 eor x0, x1, x0
> 2c: b5000060 cbnz x0, 38 <ool_store_release+0x38>
> 30: a8c27bfd ldp x29, x30, [sp], #32
> 34: d65f03c0 ret
> 38: 94000000 bl 0 <__stack_chk_fail>
>
> It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
> I was looking at getting rid of volatile where it's not strictly needed.
> I'm certainly open to other suggestions, I just haven't managed to think
> of anything else.

We could move the kernel to C++ and write:

std::remove_volatile<typeof(p)>::type __p = (p);

/me runs like hell...

2019-12-12 20:52:58

by Linus Torvalds

Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <[email protected]> wrote:
>
> The root of my concern in all of this, and what started me looking at it in
> the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> for a pointer means that local variables in macros declared using typeof()
> suddenly start generating *hideous* code, particularly when pointless stack
> spills get stackprotector all excited.

Yeah, removing volatile can be a bit annoying.

For the particular case of the bitops, though, it's not an issue.
Since you know the type there, you can just cast it.

And if we had the rule that READ_ONCE() operated on an arithmetic type, you could do

typeof(0+(*p)) __var;

since you might as well get the integer promotion anyway (on the
non-volatile result).

But that doesn't work with structures or unions, of course.

I'm not entirely sure we have READ_ONCE() with a struct. I do know we
have it with 64-bit entities on 32-bit machines, but that's ok with
the "0+" trick.

Linus

2019-12-12 21:46:29

by Peter Zijlstra

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 09:21:57PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 12, 2019 at 07:34:01PM +0000, Will Deacon wrote:
> > void ool_store_release(volatile unsigned long *ptr, unsigned long val)
> > {
> >         smp_store_release(ptr, val);
> > }
> >
> > 0000000000000000 <ool_store_release>:
> > 0: a9be7bfd stp x29, x30, [sp, #-32]!
> > 4: 90000002 adrp x2, 0 <__stack_chk_guard>
> > 8: 91000042 add x2, x2, #0x0
> > c: 910003fd mov x29, sp
> > 10: f9400043 ldr x3, [x2]
> > 14: f9000fa3 str x3, [x29, #24]
> > 18: d2800003 mov x3, #0x0 // #0
> > 1c: c89ffc01 stlr x1, [x0]
> > 20: f9400fa1 ldr x1, [x29, #24]
> > 24: f9400040 ldr x0, [x2]
> > 28: ca000020 eor x0, x1, x0
> > 2c: b5000060 cbnz x0, 38 <ool_store_release+0x38>
> > 30: a8c27bfd ldp x29, x30, [sp], #32
> > 34: d65f03c0 ret
> > 38: 94000000 bl 0 <__stack_chk_fail>
> >
> > It's a mess, and fixing READ_ONCE() doesn't help this case, which is why
> > I was looking at getting rid of volatile where it's not strictly needed.
> > I'm certainly open to other suggestions, I just haven't managed to think
> > of anything else.
>
> We could move the kernel to C++ and write:
>
> std::remove_volatile<typeof(p)>::type __p = (p);
>
> /me runs like hell...

Also, the GCC __auto_type thing strips _Atomic and const qualifiers but
for some obscure reason forgets to strip volatile :/

https://gcc.gnu.org/ml/gcc-patches/2013-11/msg01378.html

Now, looking at the current GCC source:

https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707

it seems that __typeof__() is supposed to strip all qualifiers from
_Atomic types. That led me to try:

typeof(_Atomic typeof(p)) __p = (p);

But alas, I still get the same junk you got for ool_store_release() :/

2019-12-13 10:48:03

by Luc Van Oostenryck

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 09:53:38PM +0100, Peter Zijlstra wrote:
> Now, looking at the current GCC source:
>
> https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707
>
> it seems that __typeof__() is supposed to strip all qualifiers from
> _Atomic types. That led me to try:
>
> typeof(_Atomic typeof(p)) __p = (p);
>
> But alas, I still get the same junk you got for ool_store_release() :/

I was checking this to see if Sparse was ready to support this.
I was a bit surprised because at first sight GCC was doing as
it claims (typeof stripping const & volatile on _Atomic types)
but your example wasn't working. But it works if an
intermediate var is used:
_Atomic typeof(p) tmp;
typeof(tmp) __p = (p);
or, uglier but probably more practical:
typeof(({_Atomic typeof(p) tmp; })) __p = (p);

Go figure!

OTOH, at least on GCC 8.3, it seems to always do the same with
volatiles as it does with consts.

-- Luc Van Oostenryck

2019-12-13 12:10:12

by Michael Ellerman

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

Peter Zijlstra <[email protected]> writes:
> On Thu, Dec 12, 2019 at 10:07:56AM +0000, Will Deacon wrote:
>
>> > So your proposed change _should_ be fine. Will, I'm assuming you never
>> > saw this on your ARGH64 builds when you did this code?
>>
>> I did see it, but (a) looking at the code out-of-line makes it look a lot
>> worse than it actually is (so the ext4 example is really helpful -- thanks
>> Michael!) and (b) I chalked it up to a crappy compiler.
>>
>> However, see this comment from Arnd on my READ_ONCE series from the other
>> day:
>>
>> https://lore.kernel.org/lkml/CAK8P3a0f=WvSQSBQ4t0FmEkcFE_mC3oARxaeTviTSkSa-D2qhg@mail.gmail.com
>>
>> In which case, I'm thinking that we should be doing better in READ_ONCE()
>> for non-buggy compilers which would also keep the KCSAN folks happy for this
>> code (and would help with [1] too).
>
> So something like this then? Although I suppose that should be moved
> into compiler-gcc.h and then guarded by #ifndef READ_ONCE or so.

I tried this:

> @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
> */
> #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>
> +#else /* GCC_VERSION < 40800 */
> +
> +#define READ_ONCE_NOCHECK(x) \
> +({ \
> + typeof(x) __x = *(volatile typeof(x))&(x); \

Didn't compile, needed:

typeof(x) __x = *(volatile typeof(&x))&(x); \


> + smp_read_barrier_depends(); \
> + __x;
> +})


And that works for me. No extra stack check stuff.

I guess the question is: does that version of READ_ONCE() implement the
read-once semantics? Do we have a good way to test that?

The only differences are because of the early return in the generic
test_and_set_bit_lock():

1 <ext4_resize_begin_generic>: 1 <ext4_resize_begin_ppc>:
2 addis r2,r12,281 2 addis r2,r12,281
3 addi r2,r2,-22368 3 addi r2,r2,-22064
4 mflr r0 4 mflr r0
5 bl <_mcount> 5 bl <_mcount>
6 mflr r0 6 mflr r0
7 std r31,-8(r1) 7 std r31,-8(r1)
8 std r30,-16(r1) 8 std r30,-16(r1)
9 mr r31,r3 9 mr r31,r3
10 li r3,24 10 li r3,24
11 std r0,16(r1) 11 std r0,16(r1)
12 stdu r1,-128(r1) 12 stdu r1,-112(r1)
13 ld r30,920(r31) 13 ld r30,920(r31)
14 bl <capable+0x8> 14 bl <capable+0x8>
15 nop 15 nop
16 cmpdi cr7,r3,0 16 cmpdi cr7,r3,0
17 beq cr7,<ext4_resize_begin_generic+0xf0> 17 beq cr7,<ext4_resize_begin_ppc+0xc0>
18 ld r9,920(r31) 18 ld r9,920(r31)
19 ld r10,96(r30) 19 ld r10,96(r30)
20 lwz r7,84(r30) 20 lwz r7,84(r30)
21 ld r8,104(r9) 21 ld r8,104(r9)
22 ld r10,24(r10) 22 ld r10,24(r10)
23 lwz r8,20(r8) 23 lwz r8,20(r8)
24 srd r10,r10,r7 24 srd r10,r10,r7
25 cmpd cr7,r10,r8 25 cmpd cr7,r10,r8
26 bne cr7,<ext4_resize_begin_generic+0x128> 26 bne cr7,<ext4_resize_begin_ppc+0xf8>
27 lhz r10,160(r9) 27 lhz r10,160(r9)
28 andi. r10,r10,2 28 andi. r10,r10,2
29 bne <ext4_resize_begin_generic+0x100>
30 ld r10,560(r9)
31 std r10,104(r1)
32 ld r10,104(r1)
33 andi. r10,r10,1
34 bne <ext4_resize_begin_generic+0xd0> 29 bne <ext4_resize_begin_ppc+0xd0>
35 addi r7,r9,560 30 addi r9,r9,560
36 li r8,1 31 li r10,1
37 ldarx r10,0,r7 32 ldarx r3,0,r9,1
38 or r6,r8,r10 33 or r8,r3,r10
39 stdcx. r6,0,r7 34 stdcx. r8,0,r9
40 bne- <ext4_resize_begin_generic+0x8c> 35 bne- <ext4_resize_begin_ppc+0x78>
41 isync 36 isync
37 clrldi r3,r3,63
42 andi. r9,r10,1 38 addi r3,r3,-1
43 li r3,0 39 rlwinm r3,r3,0,27,27
44 bne <ext4_resize_begin_generic+0xd0> 40 addi r3,r3,-16
45 addi r1,r1,128 41 addi r1,r1,112
46 ld r0,16(r1) 42 ld r0,16(r1)
47 ld r30,-16(r1) 43 ld r30,-16(r1)
48 ld r31,-8(r1) 44 ld r31,-8(r1)
49 mtlr r0 45 mtlr r0
50 blr 46 blr
51 nop 47 nop
52 nop 48 nop
53 nop 49 nop
54 addi r1,r1,128
55 li r3,-16
56 ld r0,16(r1)
57 ld r30,-16(r1)
58 ld r31,-8(r1)
59 mtlr r0
60 blr
61 nop
62 li r3,-1 50 li r3,-1
63 b <ext4_resize_begin_generic+0xac> 51 b <ext4_resize_begin_ppc+0x9c>
64 nop 52 nop
65 nop 53 nop
66 addis r6,r2,-117 54 addis r6,r2,-117
67 addis r4,r2,-140 55 addis r4,r2,-140
68 mr r3,r31 56 mr r3,r31
69 li r5,146 57 li r5,83
70 addi r6,r6,-32736 58 addi r6,r6,-32736
71 addi r4,r4,3088 59 addi r4,r4,3064
72 bl <__ext4_warning+0x8> 60 bl <__ext4_warning+0x8>
73 nop 61 nop
74 li r3,-1 62 li r3,-1
75 b <ext4_resize_begin_generic+0xac> 63 b <ext4_resize_begin_ppc+0x9c>
76 ld r9,96(r9) 64 ld r9,96(r9)
77 addis r6,r2,-118 65 addis r6,r2,-118
78 addis r4,r2,-140 66 addis r4,r2,-140
79 mr r3,r31 67 mr r3,r31
80 li r5,136 68 li r5,73
81 addi r6,r6,32752 69 addi r6,r6,32752
82 addi r4,r4,3088 70 addi r4,r4,3064
83 ld r7,24(r9) 71 ld r7,24(r9)
84 bl <__ext4_warning+0x8> 72 bl <__ext4_warning+0x8>
85 nop 73 nop
86 li r3,-1 74 li r3,-1
87 b <ext4_resize_begin_generic+0xac> 75 b <ext4_resize_begin_ppc+0x9c>


cheers

2019-12-13 12:57:53

by Peter Zijlstra

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Fri, Dec 13, 2019 at 11:47:06AM +0100, Luc Van Oostenryck wrote:
> On Thu, Dec 12, 2019 at 09:53:38PM +0100, Peter Zijlstra wrote:
> > Now, looking at the current GCC source:
> >
> > https://github.com/gcc-mirror/gcc/blob/97d7270f894395e513667a031a0c309d1819d05e/gcc/c/c-parser.c#L3707
> >
> > it seems that __typeof__() is supposed to strip all qualifiers from
> > _Atomic types. That led me to try:
> >
> > typeof(_Atomic typeof(p)) __p = (p);
> >
> > But alas, I still get the same junk you got for ool_store_release() :/
>
> I was checking this to see if Sparse was ready to support this.
> I was a bit surprised because at first sight GCC was doing as
> it claims (typeof stripping const & volatile on _Atomic types)
> but your example wasn't working. But it works if an
> intermediate var is used:
> _Atomic typeof(p) tmp;
> typeof(tmp) __p = (p);
> or, uglier but probably more practical:
> typeof(({_Atomic typeof(p) tmp; })) __p = (p);
>
> Go figure!

Excellent! I had to change it to something like:

#define unqual_typeof(x) typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))

but that does indeed work!

Now I suppose we should wrap that in a symbol that indicates our
compiler does indeed support _Atomic, otherwise things will come apart.

That is, my gcc-4.6 doesn't seem to have it, while gcc-4.8 does, which
is exactly the range that needs the daft READ_ONCE() construct, how
convenient :/

Something a little like this perhaps?

---

diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
index 7d9cc5ec4971..c389af602da8 100644
--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -75,9 +75,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,

#define __smp_store_release(p, v) \
do { \
- typeof(p) __p = (p); \
- union { typeof(*p) __val; char __c[1]; } __u = \
- { .__val = (__force typeof(*p)) (v) }; \
+ unqual_typeof(p) __p = (p); \
+ union { unqual_typeof(*p) __val; char __c[1]; } __u = \
+ { .__val = (__force unqual_typeof(*p)) (v) }; \
compiletime_assert_atomic_type(*p); \
kasan_check_write(__p, sizeof(*p)); \
switch (sizeof(*p)) { \
@@ -110,8 +110,8 @@ do { \

#define __smp_load_acquire(p) \
({ \
- union { typeof(*p) __val; char __c[1]; } __u; \
- typeof(p) __p = (p); \
+ union { unqual_typeof(*p) __val; char __c[1]; } __u; \
+ unqual_typeof(p) __p = (p); \
compiletime_assert_atomic_type(*p); \
kasan_check_read(__p, sizeof(*p)); \
switch (sizeof(*p)) { \
@@ -141,8 +141,8 @@ do { \

#define smp_cond_load_relaxed(ptr, cond_expr) \
({ \
- typeof(ptr) __PTR = (ptr); \
- typeof(*ptr) VAL; \
+ unqual_typeof(ptr) __PTR = (ptr); \
+ unqual_typeof(*ptr) VAL; \
for (;;) { \
VAL = READ_ONCE(*__PTR); \
if (cond_expr) \
@@ -154,8 +154,8 @@ do { \

#define smp_cond_load_acquire(ptr, cond_expr) \
({ \
- typeof(ptr) __PTR = (ptr); \
- typeof(*ptr) VAL; \
+ unqual_typeof(ptr) __PTR = (ptr); \
+ unqual_typeof(*ptr) VAL; \
for (;;) { \
VAL = smp_load_acquire(__PTR); \
if (cond_expr) \
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 85b28eb80b11..dd5bb055f5ab 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -228,8 +228,8 @@ do { \
*/
#ifndef smp_cond_load_relaxed
#define smp_cond_load_relaxed(ptr, cond_expr) ({ \
- typeof(ptr) __PTR = (ptr); \
- typeof(*ptr) VAL; \
+ unqual_typeof(ptr) __PTR = (ptr); \
+ unqual_typeof(*ptr) VAL; \
for (;;) { \
VAL = READ_ONCE(*__PTR); \
if (cond_expr) \
@@ -250,7 +250,7 @@ do { \
*/
#ifndef smp_cond_load_acquire
#define smp_cond_load_acquire(ptr, cond_expr) ({ \
- typeof(*ptr) _val; \
+ unqual_typeof(*ptr) _val; \
_val = smp_cond_load_relaxed(ptr, cond_expr); \
smp_acquire__after_ctrl_dep(); \
_val; \
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index 0eb2a1cc411d..15fd7ea3882a 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -179,3 +179,10 @@
#endif

#define __no_fgcse __attribute__((optimize("-fno-gcse")))
+
+#if GCC_VERSION < 40800
+/*
+ * GCC-4.6 doesn't support _Atomic, which is required to strip qualifiers.
+ */
+#define unqual_typeof(x) typeof(x)
+#endif
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index ad8c76144a3c..9736993f2ba1 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -279,7 +279,7 @@ void __write_once_size(volatile void *p, void *res, int size)

#define __READ_ONCE(x, check) \
({ \
- union { typeof(x) __val; char __c[1]; } __u; \
+ union { unqual_typeof(x) __val; char __c[1]; } __u; \
if (check) \
__read_once_size(&(x), __u.__c, sizeof(x)); \
else \
@@ -302,12 +302,12 @@ unsigned long read_word_at_a_time(const void *addr)
return *(unsigned long *)addr;
}

-#define WRITE_ONCE(x, val) \
-({ \
- union { typeof(x) __val; char __c[1]; } __u = \
- { .__val = (__force typeof(x)) (val) }; \
- __write_once_size(&(x), __u.__c, sizeof(x)); \
- __u.__val; \
+#define WRITE_ONCE(x, val) \
+({ \
+ union { unqual_typeof(x) __val; char __c[1]; } __u = \
+ { .__val = (__force unqual_typeof(x)) (val) }; \
+ __write_once_size(&(x), __u.__c, sizeof(x)); \
+ __u.__val; \
})

#include <linux/kcsan.h>
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 72393a8c1a6c..fe8012c54251 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -243,4 +243,11 @@ struct ftrace_likely_data {
#define __diag_error(compiler, version, option, comment) \
__diag_ ## compiler(version, error, option)

+#ifndef unqual_typeof
+/*
+ * GCC __typeof__() strips all qualifiers from _Atomic types.
+ */
+#define unqual_typeof(x) typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))
+#endif
+
#endif /* __LINUX_COMPILER_TYPES_H */
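
For reference, the unqual_typeof() trick can be sanity-checked with a
stand-alone user-space program (hypothetical test, not part of the
patch above; build with a reasonably recent gcc). Pointer types are
compared because __builtin_types_compatible_p() ignores top-level
qualifiers:

#include <stdio.h>

#define unqual_typeof(x) \
        __typeof__(({ _Atomic __typeof__(x) ___x __attribute__((unused)); ___x; }))

int main(void)
{
        volatile unsigned long v = 5;

        /* __typeof__(v) is 'volatile unsigned long';
         * unqual_typeof(v) should be plain 'unsigned long'. */
        _Static_assert(__builtin_types_compatible_p(unqual_typeof(v) *,
                                                    unsigned long *),
                       "volatile was not stripped");

        unqual_typeof(v) tmp = v;       /* one volatile read into a plain local */
        printf("%lu\n", tmp);
        return 0;
}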

2019-12-13 13:18:24

by Arnd Bergmann

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds
<[email protected]> wrote:
> On Thu, Dec 12, 2019 at 11:34 AM Will Deacon <[email protected]> wrote:
> > The root of my concern in all of this, and what started me looking at it in
> > the first place, is the interaction with 'typeof()'. Inheriting 'volatile'
> > for a pointer means that local variables in macros declared using typeof()
> > suddenly start generating *hideous* code, particularly when pointless stack
> > spills get stackprotector all excited.
>
> Yeah, removing volatile can be a bit annoying.
>
> For the particular case of the bitops, though, it's not an issue.
> Since you know the type there, you can just cast it.
>
> And if we had the rule that READ_ONCE() operated on an arithmetic type, you could do
>
> typeof(0+(*p)) __var;
>
> since you might as well get the integer promotion anyway (on the
> non-volatile result).
>
> But that doesn't work with structures or unions, of course.
>
> I'm not entirely sure we have READ_ONCE() with a struct. I do know we
> have it with 64-bit entities on 32-bit machines, but that's ok with
> the "0+" trick.

I'll have my randconfig builder look for instances; so far I found one,
see below. My feeling is that it would be better to at least enforce
that the size is 1/2/4/8, to avoid cases where someone thinks
the access is atomic, but it falls back on a memcpy.

Arnd

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 0968859c29d0..adb492c0aa34 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
do {
state_time = get64(&state->state_entry_time);
rmb(); /* Hypervisor might update data. */
- *res = READ_ONCE(*state);
+ memcpy(res, state, sizeof(*res));
rmb(); /* Hypervisor might update data. */
} while (get64(&state->state_entry_time) != state_time ||
(state_time & XEN_RUNSTATE_UPDATE));
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5e88e7e33abe..f4ae360efdba 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -179,6 +179,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,

#include <uapi/linux/types.h>

+extern void __broken_access_once(void *, const void *, unsigned long);
+
#define __READ_ONCE_SIZE \
({ \
switch (size) { \
@@ -187,9 +189,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
case 4: *(__u32 *)res = *(volatile __u32 *)p; break; \
case 8: *(__u64 *)res = *(volatile __u64 *)p; break; \
default: \
- barrier(); \
- __builtin_memcpy((void *)res, (const void *)p, size); \
- barrier(); \
+ __broken_access_once((void *)res, (const void *)p, size); \
} \
})

@@ -225,9 +225,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int size)
case 4: *(volatile __u32 *)p = *(__u32 *)res; break;
case 8: *(volatile __u64 *)p = *(__u64 *)res; break;
default:
- barrier();
- __builtin_memcpy((void *)p, (const void *)res, size);
- barrier();
+ __broken_access_once((void *)p, (const void *)res, size);
}
}
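
A compile-time variant of the same enforcement is also possible; a
hedged sketch using the kernel's existing compiletime_assert() (the
macro name here is made up):

#define ONCE_CHECK_SIZE(x) \
        compiletime_assert(sizeof(x) == 1 || sizeof(x) == 2 || \
                           sizeof(x) == 4 || sizeof(x) == 8, \
                           "Unsupported access size for {READ,WRITE}_ONCE().")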

2019-12-13 13:57:33

by Segher Boessenkool

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

Hi!

On Fri, Dec 13, 2019 at 11:07:55PM +1100, Michael Ellerman wrote:
> I tried this:
>
> > @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
> > */
> > #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
> >
> > +#else /* GCC_VERSION < 40800 */
> > +
> > +#define READ_ONCE_NOCHECK(x) \
> > +({ \
> > + typeof(x) __x = *(volatile typeof(x))&(x); \
>
> Didn't compile, needed:
>
> typeof(x) __x = *(volatile typeof(&x))&(x); \
>
>
> > + smp_read_barrier_depends(); \
> > + __x;
> > +})
>
>
> And that works for me. No extra stack check stuff.
>
> I guess the question is: does that version of READ_ONCE() implement the
> read-once semantics? Do we have a good way to test that?
>
> The only differences are because of the early return in the generic
> test_and_set_bit_lock():

No, there is another difference:

> 30 ld r10,560(r9)
> 31 std r10,104(r1)
> 32 ld r10,104(r1)
> 33 andi. r10,r10,1
> 34 bne <ext4_resize_begin_generic+0xd0> 29 bne <ext4_resize_begin_ppc+0xd0>

The stack var is volatile, so it is read back immediately after writing
it, here. This is a bad idea for performance, in general.
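
A minimal sketch of the pattern (illustrative code, not from the
listing above):

        long f(long x)
        {
                volatile long tmp = x;  /* store to the stack slot ... */
                return tmp & 1;         /* ... then immediately read it back */
        }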


Segher

2019-12-13 14:29:48

by Luc Van Oostenryck

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Fri, Dec 13, 2019 at 01:56:18PM +0100, Peter Zijlstra wrote:
>
> Excellent! I had to change it to something like:
>
> #define unqual_typeof(x) typeof(({_Atomic typeof(x) ___x __maybe_unused; ___x; }))
>
> but that does indeed work!
>
> Now I suppose we should wrap that in a symbol that indicates our
> compiler does indeed support _Atomic, otherwise things will come apart.
>
> That is, my gcc-4.6 doesn't seem to have it, while gcc-4.8 does, which
> is exactly the range that needs the daft READ_ONCE() construct, how
> convenient :/
>
> Something a little like this perhaps?

Yes, this looks good to me.
Just a small nit here below.

> ---
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 7d9cc5ec4971..c389af602da8 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -75,9 +75,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
>
> #define __smp_store_release(p, v) \
> do { \
> - typeof(p) __p = (p); \
> - union { typeof(*p) __val; char __c[1]; } __u = \
> - { .__val = (__force typeof(*p)) (v) }; \
> + unqual_typeof(p) __p = (p); \
> + union { unqual_typeof(*p) __val; char __c[1]; } __u = \
> + { .__val = (__force unqual_typeof(*p)) (v) }; \

The two trailing backslashes are now off by one tab.

-- Luc

2019-12-13 21:08:25

by Michael Ellerman

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

Segher Boessenkool <[email protected]> writes:
> Hi!
>
> On Fri, Dec 13, 2019 at 11:07:55PM +1100, Michael Ellerman wrote:
>> I tried this:
>>
>> > @@ -295,6 +296,23 @@ void __write_once_size(volatile void *p, void *res, int size)
>> > */
>> > #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>> >
>> > +#else /* GCC_VERSION < 40800 */
>> > +
>> > +#define READ_ONCE_NOCHECK(x) \
>> > +({ \
>> > + typeof(x) __x = *(volatile typeof(x))&(x); \
>>
>> Didn't compile, needed:
>>
>> typeof(x) __x = *(volatile typeof(&x))&(x); \
>>
>>
>> > + smp_read_barrier_depends(); \
>> > + __x;
>> > +})
>>
>>
>> And that works for me. No extra stack check stuff.
>>
>> I guess the question is: does that version of READ_ONCE() implement the
>> read-once semantics? Do we have a good way to test that?
>>
>> The only differences are because of the early return in the generic
>> test_and_set_bit_lock():
>
> No, there is another difference:
>
>> 30 ld r10,560(r9)
>> 31 std r10,104(r1)
>> 32 ld r10,104(r1)
>> 33 andi. r10,r10,1
>> 34 bne <ext4_resize_begin_generic+0xd0> 29 bne <ext4_resize_begin_ppc+0xd0>
>
> The stack var is volatile, so it is read back immediately after writing
> it, here. This is a bad idea for performance, in general.

Argh, yuck. Thanks, I shouldn't try to read asm listings at 11pm.

So that just confirms what Will was saying further up the thread about
the volatile pointer, rather than READ_ONCE() per se.

cheers

2019-12-13 21:35:26

by Arnd Bergmann

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Fri, Dec 13, 2019 at 2:17 PM Arnd Bergmann <[email protected]> wrote:
>
> On Thu, Dec 12, 2019 at 9:50 PM Linus Torvalds
> <[email protected]> wrote:
> I'll have my randconfig builder look for instances; so far I found one,
> see below. My feeling is that it would be better to at least enforce
> that the size is 1/2/4/8, to avoid cases where someone thinks
> the access is atomic, but it falls back on a memcpy.
>
> Arnd
>
> diff --git a/drivers/xen/time.c b/drivers/xen/time.c
> index 0968859c29d0..adb492c0aa34 100644
> --- a/drivers/xen/time.c
> +++ b/drivers/xen/time.c
> @@ -64,7 +64,7 @@ static void xen_get_runstate_snapshot_cpu_delta(
> do {
> state_time = get64(&state->state_entry_time);
> rmb(); /* Hypervisor might update data. */
> - *res = READ_ONCE(*state);
> + memcpy(res, state, sizeof(*res));
> rmb(); /* Hypervisor might update data. */
> } while (get64(&state->state_entry_time) != state_time ||
> (state_time & XEN_RUNSTATE_UPDATE));


A few hundred randconfig (x86, arm32 and arm64) builds later I
still only found one other instance:

diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index eddae4688862..1c1f33447e96 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -304,7 +304,9 @@ static inline struct xdp_desc *xskq_validate_desc(struct xsk_queue *q,
struct xdp_rxtx_ring *ring = (struct xdp_rxtx_ring *)q->ring;
unsigned int idx = q->cons_tail & q->ring_mask;

- *desc = READ_ONCE(ring->desc[idx]);
+ barrier();
+ memcpy(desc, &ring->desc[idx], sizeof(*desc));
+ barrier();
if (xskq_is_valid_desc(q, desc, umem))
return desc;

Arnd

2019-12-13 22:04:17

by Linus Torvalds

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Fri, Dec 13, 2019 at 1:33 PM Arnd Bergmann <[email protected]> wrote:
>
> A few hundred randconfig (x86, arm32 and arm64) builds later I
> still only found one other instance:

Just send me the pull request to make READ_ONCE() and WRITE_ONCE()
require arithmetic types, plus your two trivial fixes, and let's get
this over with.

With that, you can remove the 'volatile' with my simple
'typeof(0+*(p))' trick, and we're all good, and we don't need to worry
about compiler versions either.

I'm willing to take that after the merge window as a "sanity fix".

Linus

2019-12-19 12:12:38

by Will Deacon

[permalink] [raw]
Subject: Re: READ_ONCE() + STACKPROTECTOR_STRONG == :/ (was Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.5-2 tag (topic/kasan-bitops))

On Tue, Dec 17, 2019 at 10:32:35AM -0800, Linus Torvalds wrote:
> On Tue, Dec 17, 2019 at 10:04 AM Linus Torvalds
> <[email protected]> wrote:
> >
> > Let me think about it.
>
> How about we just get rid of the union entirely, and just use
> 'unsigned long' or 'unsigned long long' depending on the size.
>
> Something like the attached patch - it still requires that it be an
> arithmetic type, but now because of the final cast.
>
> But it might still be a cast to a volatile type, of course. Then the
> result will be volatile, but at least now READ_ONCE() won't be taking
> the address of a volatile variable on the stack - does that at least
> fix some of the horrible code generation? Hmm?

Sounds like it according to mpe, but I'll confirm too for arm64.

> This is untested, because I obviously still have the cases of
> structures (page table entries) being accessed once...
>
> Linus

> include/linux/compiler.h | 33 +++++++++++++++++----------------
> 1 file changed, 17 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 5e88e7e33abe..8b4282194f16 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -179,18 +179,18 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
>
> #include <uapi/linux/types.h>
>
> -#define __READ_ONCE_SIZE \
> -({ \
> - switch (size) { \
> - case 1: *(__u8 *)res = *(volatile __u8 *)p; break; \
> - case 2: *(__u16 *)res = *(volatile __u16 *)p; break; \
> - case 4: *(__u32 *)res = *(volatile __u32 *)p; break; \
> - case 8: *(__u64 *)res = *(volatile __u64 *)p; break; \
> - default: \
> - barrier(); \
> - __builtin_memcpy((void *)res, (const void *)p, size); \
> - barrier(); \
> - } \
> +/* "unsigned long" or "unsigned long long" - make it fit in a register if possible */
> +#define __READ_ONCE_TYPE(size) \
> + __typeof__(__builtin_choose_expr(size > sizeof(0UL), 0ULL, 0UL))

Ha, I wondered when '__builtin_choose_expr()' would make an appearance in
this thread! Nice trick. I'll try integrating this with what I have and see
what I run into next.

Back down the rabbit hole...
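
For reference, the type-selection trick can be checked stand-alone
(hypothetical test, not from the thread; the asserts hold on both
32-bit and 64-bit targets):

#define __READ_ONCE_TYPE(size) \
        __typeof__(__builtin_choose_expr((size) > sizeof(0UL), 0ULL, 0UL))

/* a 4-byte access fits in 'unsigned long' on any target ... */
_Static_assert(sizeof(__READ_ONCE_TYPE(4)) == sizeof(unsigned long), "");
/* ... and an 8-byte access always gets an 8-byte type */
_Static_assert(sizeof(__READ_ONCE_TYPE(8)) == 8, "");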

Will