2020-03-10 22:18:59

by Jesse Brandeburg

Subject: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

Fix many sparse warnings when building with C=1. These are useless
noise from the bitops.h file and getting rid of them helps devs
make more use of the tools and possibly find real bugs.

When the kernel is compiled with C=1, there are lots of messages like:
arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)

CONST_MASK() is using a signed integer "1" to create the mask which is
later cast to (u8), in order to yield an 8-bit value for the assembly
instructions to use. Simplify the expressions used to clearly indicate
they are working on 8-bit values only, which still keeps sparse happy
without an accidental promotion to a 32 bit integer.

The warning was occurring because certain bitmasks that end with a bit
set next to a natural boundary like 7, 15, 23, 31, end up with a mask
like 0x7f, which then results in sign extension due to the integer
type promotion rules[1]. It was really only "clear_bit" that was
having problems, and it was only on some bit checks that resulted in a
mask like 0xffffff7f being generated after the inversion.
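
To illustrate, here is a standalone userspace sketch (not part of this
patch; it assumes only that CONST_MASK() is defined as in bitops.h):

#include <stdio.h>

#define CONST_MASK(nr) (1 << ((nr) & 7))	/* int-typed "1" */

int main(void)
{
	/* ~CONST_MASK(7) is computed in 32-bit int: ~0x80 = 0xffffff7f.
	 * The old (u8) cast truncated that to 0x7f -- correct, but
	 * exactly what sparse complains about. */
	printf("%x\n", ~CONST_MASK(7));		/* prints ffffff7f */
	/* The new expression yields the same byte without a
	 * truncating cast: */
	printf("%x\n", CONST_MASK(7) ^ 0xff);	/* prints 7f */
	return 0;
}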

Verified with a test module (see next patch) and assembly inspection
that the patch doesn't introduce any change in generated code.

[1] https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules

Signed-off-by: Jesse Brandeburg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
v6: reworded commit message, enhanced explanation
v5: changed code to use simple AND and XOR, updated commit message
v4: reverse argument order as suggested by David Laight, added reviewed-by
v3: Clean up the header file changes as per peterz.
v2: use correct CC: list
---
arch/x86/include/asm/bitops.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 062cdecb2f24..53f246e9df5a 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
+			: "iq" (CONST_MASK(nr) & 0xff)
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" (CONST_MASK(nr) ^ 0xff));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");

base-commit: 8b614cb8f1dcac8ca77cf4dd85f46ef3055f8238
--
2.24.1


2020-03-10 22:19:38

by Jesse Brandeburg

Subject: [PATCH v6 2/2] lib: make a test module with set/clear bit

Test some bit clears/sets to make sure assembly doesn't change, and
that the set_bit and clear_bit functions work and don't cause sparse
warnings.

Instruct Kbuild to build this file with the extra warning level -Wextra
to catch new issues; it also doesn't hurt to build with C=1.

This was used to test changes to arch/x86/include/asm/bitops.h.

In particular, sparse (C=1) was very concerned when the last bit
before a natural boundary, like 7 or 31, was being tested, as this
causes sign extension (0xffffff7f), for instance when clearing bit 7.

Recommended usage:
make defconfig
scripts/config -m CONFIG_TEST_BITOPS
make modules_prepare
make C=1 W=1 lib/test_bitops.ko
objdump -S -d lib/test_bitops.ko
insmod lib/test_bitops.ko
rmmod lib/test_bitops.ko
<check dmesg>: there should be no compiler/sparse warnings and no
error messages in the log.

Signed-off-by: Jesse Brandeburg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
v5: no change
v4: Slight change to bitops_fun last member, as suggested by Andy, added
his reviewed-by too. Added module load/unload to usage. Added copyright.
v3: Update the test to fail if bits aren't cleared, and make the
test reproduce the original issue without patch 1/2, showing that
the issue is fixed in patch 1/2. Thanks PeterZ!
v2: Correct CC: list
---
lib/Kconfig.debug | 13 +++++++
lib/Makefile | 2 +
lib/test_bitops.c | 60 ++++++++++++++++++++++++++++++
tools/testing/selftests/lib/config | 1 +
4 files changed, 76 insertions(+)
create mode 100644 lib/test_bitops.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 69def4a9df00..61a5d00ea064 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1947,6 +1947,19 @@ config TEST_LKM

 	  If unsure, say N.
 
+config TEST_BITOPS
+	tristate "Test module for compilation of clear_bit/set_bit operations"
+	depends on m
+	help
+	  This builds the "test_bitops" module that is much like the
+	  TEST_LKM module except that it does a basic exercise of the
+	  clear_bit and set_bit macros to make sure there are no compiler
+	  warnings from the C=1 sparse checker or -Wextra compilations. It has
+	  no dependencies and doesn't run or load unless explicitly requested
+	  by name. For example: modprobe test_bitops.
+
+	  If unsure, say N.
+
 config TEST_VMALLOC
 	tristate "Test module for stress/performance analysis of vmalloc allocator"
 	default n
diff --git a/lib/Makefile b/lib/Makefile
index 611872c06926..b18db565b355 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -89,6 +89,8 @@ obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o
 obj-$(CONFIG_TEST_STACKINIT) += test_stackinit.o
 obj-$(CONFIG_TEST_BLACKHOLE_DEV) += test_blackhole_dev.o
 obj-$(CONFIG_TEST_MEMINIT) += test_meminit.o
+obj-$(CONFIG_TEST_BITOPS) += test_bitops.o
+CFLAGS_test_bitops.o += -Werror
 
 obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/

diff --git a/lib/test_bitops.c b/lib/test_bitops.c
new file mode 100644
index 000000000000..fd50b3ae4a14
--- /dev/null
+++ b/lib/test_bitops.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+
+/* a tiny module only meant to test set/clear_bit */
+
+/* use an enum because that's the most common BITMAP usage */
+enum bitops_fun {
+	BITOPS_4 = 4,
+	BITOPS_7 = 7,
+	BITOPS_11 = 11,
+	BITOPS_31 = 31,
+	BITOPS_88 = 88,
+	BITOPS_LAST = 255,
+	BITOPS_LENGTH = 256
+};
+
+static DECLARE_BITMAP(g_bitmap, BITOPS_LENGTH);
+
+static int __init test_bitops_startup(void)
+{
+	pr_warn("Loaded test module\n");
+	set_bit(BITOPS_4, g_bitmap);
+	set_bit(BITOPS_7, g_bitmap);
+	set_bit(BITOPS_11, g_bitmap);
+	set_bit(BITOPS_31, g_bitmap);
+	set_bit(BITOPS_88, g_bitmap);
+	return 0;
+}
+
+static void __exit test_bitops_unstartup(void)
+{
+	int bit_set;
+
+	clear_bit(BITOPS_4, g_bitmap);
+	clear_bit(BITOPS_7, g_bitmap);
+	clear_bit(BITOPS_11, g_bitmap);
+	clear_bit(BITOPS_31, g_bitmap);
+	clear_bit(BITOPS_88, g_bitmap);
+
+	bit_set = find_first_bit(g_bitmap, BITOPS_LAST);
+	if (bit_set != BITOPS_LAST)
+		pr_err("ERROR: FOUND SET BIT %d\n", bit_set);
+
+	pr_warn("Unloaded test module\n");
+}
+
+module_init(test_bitops_startup);
+module_exit(test_bitops_unstartup);
+
+MODULE_AUTHOR("Jesse Brandeburg <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Bit testing module");
diff --git a/tools/testing/selftests/lib/config b/tools/testing/selftests/lib/config
index 14a77ea4a8da..b80ee3f6e265 100644
--- a/tools/testing/selftests/lib/config
+++ b/tools/testing/selftests/lib/config
@@ -2,3 +2,4 @@ CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_PRIME_NUMBERS=m
 CONFIG_TEST_STRSCPY=m
+CONFIG_TEST_BITOPS=m
--
2.24.1

2020-03-11 04:39:23

by Luc Van Oostenryck

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Tue, Mar 10, 2020 at 03:17:46PM -0700, Jesse Brandeburg wrote:
> Fix many sparse warnings when building with C=1. These are useless
> noise from the bitops.h file and getting rid of them helps devs
> make more use of the tools and possibly find real bugs.
>
> When the kernel is compiled with C=1, there are lots of messages like:
> arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)

Acked-by: Luc Van Oostenryck <[email protected]>

2020-03-17 19:27:25

by Peter Zijlstra

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Tue, Mar 10, 2020 at 03:17:46PM -0700, Jesse Brandeburg wrote:
> Fix many sparse warnings when building with C=1. These are useless
> noise from the bitops.h file and getting rid of them helps devs
> make more use of the tools and possibly find real bugs.
>
> When the kernel is compiled with C=1, there are lots of messages like:
> arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)
>
> CONST_MASK() is using a signed integer "1" to create the mask which is
> later cast to (u8), in order to yield an 8-bit value for the assembly
> instructions to use. Simplify the expressions used to clearly indicate
> they are working on 8-bit values only, which still keeps sparse happy
> without an accidental promotion to a 32 bit integer.
>
> The warning was occurring because certain bitmasks that end with a bit
> set next to a natural boundary like 7, 15, 23, 31, end up with a mask
> like 0x7f, which then results in sign extension due to the integer
> type promotion rules[1]. It was really only "clear_bit" that was
> having problems, and it was only on some bit checks that resulted in a
> mask like 0xffffff7f being generated after the inversion.
>
> Verified with a test module (see next patch) and assembly inspection
> that the patch doesn't introduce any change in generated code.
>
> [1] https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules
>
> Signed-off-by: Jesse Brandeburg <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>

Acked-by: Peter Zijlstra (Intel) <[email protected]>

Boris, can you make this happen?

> ---
> v6: reworded commit message, enhanced explanation
> v5: changed code to use simple AND and XOR, updated commit message
> v4: reverse argument order as suggested by David Laight, added reviewed-by
> v3: Clean up the header file changes as per peterz.
> v2: use correct CC: list
> ---
> arch/x86/include/asm/bitops.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index 062cdecb2f24..53f246e9df5a 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
>  	if (__builtin_constant_p(nr)) {
>  		asm volatile(LOCK_PREFIX "orb %1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" ((u8)CONST_MASK(nr))
> +			: "iq" (CONST_MASK(nr) & 0xff)
>  			: "memory");
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
>  	if (__builtin_constant_p(nr)) {
>  		asm volatile(LOCK_PREFIX "andb %1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" ((u8)~CONST_MASK(nr)));
> +			: "iq" (CONST_MASK(nr) ^ 0xff));
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
>  			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
>
> base-commit: 8b614cb8f1dcac8ca77cf4dd85f46ef3055f8238
> --
> 2.24.1
>

2020-03-18 22:00:55

by tip-bot2 for Jesse Brandeburg

Subject: [tip: x86/asm] x86: Fix bitops.h warning with a moved cast

The following commit has been merged into the x86/asm branch of tip:

Commit-ID: 1651e700664b4597ddf4f8adfe435252a0d11277
Gitweb: https://git.kernel.org/tip/1651e700664b4597ddf4f8adfe435252a0d11277
Author: Jesse Brandeburg <[email protected]>
AuthorDate: Tue, 10 Mar 2020 15:17:46 -07:00
Committer: Borislav Petkov <[email protected]>
CommitterDate: Wed, 18 Mar 2020 12:30:19 +01:00

x86: Fix bitops.h warning with a moved cast

Fix many sparse warnings when building with C=1. These are useless noise
from the bitops.h file and getting rid of them helps developers make
more use of the tools and possibly find real bugs.

When the kernel is compiled with C=1, there are lots of messages like:

arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)

CONST_MASK() is using a signed integer "1" to create the mask which is
later cast to (u8), in order to yield an 8-bit value for the assembly
instructions to use. Simplify the expressions used to clearly indicate
they are working on 8-bit values only, which still keeps sparse happy
without an accidental promotion to a 32 bit integer.

The warning was occurring because certain bitmasks that end with a bit
set next to a natural boundary like 7, 15, 23, 31, end up with a mask
like 0x7f, which then results in sign extension due to the integer type
promotion rules[1]. It was really only clear_bit() that was having
problems, and it was only on some bit checks that resulted in a mask
like 0xffffff7f being generated after the inversion.

Verify with a test module (see next patch) and assembly inspection that
the fix doesn't introduce any change in generated code.

[ bp: Massage. ]

Signed-off-by: Jesse Brandeburg <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Acked-by: Luc Van Oostenryck <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules [1]
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/include/asm/bitops.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 062cdec..53f246e 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
+			: "iq" (CONST_MASK(nr) & 0xff)
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" (CONST_MASK(nr) ^ 0xff));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");

2020-04-16 00:11:40

by Andy Shevchenko

Subject: Re: [PATCH v6 2/2] lib: make a test module with set/clear bit

On Tue, Mar 10, 2020 at 03:17:47PM -0700, Jesse Brandeburg wrote:
> Test some bit clears/sets to make sure assembly doesn't change, and
> that the set_bit and clear_bit functions work and don't cause sparse
> warnings.
>
> Instruct Kbuild to build this file with the extra warning level -Wextra
> to catch new issues; it also doesn't hurt to build with C=1.
>
> This was used to test changes to arch/x86/include/asm/bitops.h.
>
> In particular, sparse (C=1) was very concerned when the last bit
> before a natural boundary, like 7 or 31, was being tested, as this
> causes sign extension (0xffffff7f), for instance when clearing bit 7.
>
> Recommended usage:
> make defconfig
> scripts/config -m CONFIG_TEST_BITOPS
> make modules_prepare
> make C=1 W=1 lib/test_bitops.ko
> objdump -S -d lib/test_bitops.ko
> insmod lib/test_bitops.ko
> rmmod lib/test_bitops.ko
> <check dmesg>: there should be no compiler/sparse warnings and no
> error messages in the log.

Seems this didn't make the kernel. Perhaps you need to send it to Andrew Morton.

--
With Best Regards,
Andy Shevchenko


2020-05-04 19:49:10

by Nick Desaulniers

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Tue, Mar 10, 2020 at 03:17:46PM -0700, Jesse Brandeburg wrote:
> Fix many sparse warnings when building with C=1. These are useless
> noise from the bitops.h file and getting rid of them helps devs
> make more use of the tools and possibly find real bugs.
>
> When the kernel is compiled with C=1, there are lots of messages like:
> arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)
>
> CONST_MASK() is using a signed integer "1" to create the mask which is
> later cast to (u8), in order to yield an 8-bit value for the assembly
> instructions to use. Simplify the expressions used to clearly indicate
> they are working on 8-bit values only, which still keeps sparse happy
> without an accidental promotion to a 32 bit integer.
>
> The warning was occurring because certain bitmasks that end with a bit
> set next to a natural boundary like 7, 15, 23, 31, end up with a mask
> like 0x7f, which then results in sign extension due to the integer
> type promotion rules[1]. It was really only "clear_bit" that was
> having problems, and it was only on some bit checks that resulted in a
> mask like 0xffffff7f being generated after the inversion.
>
> Verified with a test module (see next patch) and assembly inspection
> that the patch doesn't introduce any change in generated code.
>
> [1] https://stackoverflow.com/questions/46073295/implicit-type-promotion-rules
>
> Signed-off-by: Jesse Brandeburg <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>
> ---
> v6: reworded commit message, enhanced explanation
> v5: changed code to use simple AND and XOR, updated commit message
> v4: reverse argument order as suggested by David Laight, added reviewed-by
> v3: Clean up the header file changes as per peterz.
> v2: use correct CC: list
> ---
> arch/x86/include/asm/bitops.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index 062cdecb2f24..53f246e9df5a 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
>  	if (__builtin_constant_p(nr)) {
>  		asm volatile(LOCK_PREFIX "orb %1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" ((u8)CONST_MASK(nr))
> +			: "iq" (CONST_MASK(nr) & 0xff)

Sorry for the very late report. It turns out that if your config
tickles __builtin_constant_p just right, this now produces invalid
assembly:

$ cat foo.c
long a(long b, long c) {
	asm("orb\t%1, %0" : "+q"(c): "r"(b));
	return c;
}
$ gcc foo.c
foo.c: Assembler messages:
foo.c:2: Error: `%rax' not allowed with `orb'

The "q" constraint only has meanting on -m32 otherwise is treated as
"r".

Since we have the mask (& 0xff), can we drop the `b` suffix from the
instruction? Or is a revert more appropriate? Or maybe another way to
skin this cat?

Link: https://github.com/ClangBuiltLinux/linux/issues/961
This is blowing up our KernelCI reports.
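
For comparison, a byte-typed operand makes the compiler print the byte
subregister (%al, %dl, ...), which is what the removed (u8) cast used
to guarantee. A sketch (hypothetical foo2.c, assuming gcc on x86-64):

$ cat foo2.c
long a(long b, long c) {
	unsigned char lo = (unsigned char)c;
	asm("orb\t%1, %0" : "+q"(lo) : "q"((unsigned char)b));
	return lo;
}
$ gcc -c foo2.c		/* assembles cleanly */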

> : "memory");
> } else {
> asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> if (__builtin_constant_p(nr)) {
> asm volatile(LOCK_PREFIX "andb %1,%0"
> : CONST_MASK_ADDR(nr, addr)
> - : "iq" ((u8)~CONST_MASK(nr)));
> + : "iq" (CONST_MASK(nr) ^ 0xff));
> } else {
> asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
> : : RLONG_ADDR(addr), "Ir" (nr) : "memory");
>
> base-commit: 8b614cb8f1dcac8ca77cf4dd85f46ef3055f8238
> --
> 2.24.1
>

2020-05-05 01:17:53

by Jesse Brandeburg

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Mon, 4 May 2020 12:51:12 -0700
Nick Desaulniers <[email protected]> wrote:

> Sorry for the very late report. It turns out that if your config
> tickles __builtin_constant_p just right, this now produces invalid
> assembly:
>
> $ cat foo.c
> long a(long b, long c) {
> 	asm("orb\t%1, %0" : "+q"(c): "r"(b));
> 	return c;
> }
> $ gcc foo.c
> foo.c: Assembler messages:
> foo.c:2: Error: `%rax' not allowed with `orb'
>
> The "q" constraint only has meanting on -m32 otherwise is treated as
> "r".
>
> Since we have the mask (& 0xff), can we drop the `b` suffix from the
> instruction? Or is a revert more appropriate? Or maybe another way to
> skin this cat?

Figures that such a small change can create problems :-( Sorry for the
thrash!

The patches in the link below basically add back the cast, but I'm
interested to see if any others can come up with a better fix that
a) passes the above code generation test
b) still keeps sparse happy
c) passes the test module and the code inspection

If need be I'm OK with a revert of the original patch to fix the issue
in the short term, but it seems to me there must be a way to satisfy
both tools. We went through several iterations on the way to the final
patch that we might be able to pluck something useful from.

> Link: https://github.com/ClangBuiltLinux/linux/issues/961
> This is blowing up our KernelCI reports.

ASIDE:
Bummer, how come none of those KernelCI reports are part of
zero-day testing at https://01.org/lkp/documentation/0-day-test-service
I'm interested in your answer but don't want to pollute this thread,
feel free to contact me directly for this one or start a new thread?

2020-05-05 15:16:46

by Andy Shevchenko

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Mon, May 04, 2020 at 06:14:43PM -0700, Jesse Brandeburg wrote:
> On Mon, 4 May 2020 12:51:12 -0700
> Nick Desaulniers <[email protected]> wrote:
>
> > Sorry for the very late report. It turns out that if your config
> > tickles __builtin_constant_p just right, this now produces invalid
> > assembly:
> >
> > $ cat foo.c
> > long a(long b, long c) {
> > 	asm("orb\t%1, %0" : "+q"(c): "r"(b));
> > 	return c;
> > }
> > $ gcc foo.c
> > foo.c: Assembler messages:
> > foo.c:2: Error: `%rax' not allowed with `orb'
> >
> > The "q" constraint only has meanting on -m32 otherwise is treated as
> > "r".
> >
> > Since we have the mask (& 0xff), can we drop the `b` suffix from the
> > instruction? Or is a revert more appropriate? Or maybe another way to
> > skin this cat?
>
> Figures that such a small change can create problems :-( Sorry for the
> thrash!
>
> The patches in the link below basically add back the cast, but I'm
> interested to see if any others can come up with a better fix that
> a) passes the above code generation test
> b) still keeps sparse happy
> c) passes the test module and the code inspection
>
> If need be I'm OK with a revert of the original patch to fix the issue
> in the short term, but it seems to me there must be a way to satisfy
> both tools. We went through several iterations on the way to the final
> patch that we might be able to pluck something useful from.

For me the below seems to work:


diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index b392571c1f1d1..139122e5b25b1 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" (CONST_MASK(nr) & 0xff)
+			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" (CONST_MASK(nr) ^ 0xff));
+			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff)));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
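
FWIW, the reason this form should keep both tools happy (a sketch,
assuming gcc on x86-64, with CONST_MASK() as in bitops.h): the
"& 0xff" / "^ 0xff" already confines the value to the low byte, so the
(u8) cast truncates nothing and sparse stays quiet, while the
byte-sized operand lets gcc pick a byte register or 8-bit immediate
for orb/andb:

#define CONST_MASK(nr) (1 << ((nr) & 7))
typedef unsigned char u8;

static void clear_bit7(unsigned char *addr)
{
	/* CONST_MASK(7) ^ 0xff == 0x7f; the (u8) cast drops no set bits */
	asm volatile("andb %1,%0"
		     : "+m" (*addr)
		     : "iq" ((u8)(CONST_MASK(7) ^ 0xff)));
}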


--
With Best Regards,
Andy Shevchenko


2020-05-05 17:31:21

by Nick Desaulniers

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Tue, May 5, 2020 at 8:14 AM Andy Shevchenko
<[email protected]> wrote:
>
> On Mon, May 04, 2020 at 06:14:43PM -0700, Jesse Brandeburg wrote:
> > On Mon, 4 May 2020 12:51:12 -0700
> > Nick Desaulniers <[email protected]> wrote:
> >
> > > Sorry for the very late report. It turns out that if your config
> > > tickles __builtin_constant_p just right, this now produces invalid
> > > assembly:
> > >
> > > $ cat foo.c
> > > long a(long b, long c) {
> > > 	asm("orb\t%1, %0" : "+q"(c): "r"(b));
> > > 	return c;
> > > }
> > > $ gcc foo.c
> > > foo.c: Assembler messages:
> > > foo.c:2: Error: `%rax' not allowed with `orb'
> > >
> > > The "q" constraint only has meanting on -m32 otherwise is treated as
> > > "r".
> > >
> > > Since we have the mask (& 0xff), can we drop the `b` suffix from the
> > > instruction? Or is a revert more appropriate? Or maybe another way to
> > > skin this cat?
> >
> > Figures that such a small change can create problems :-( Sorry for the
> > thrash!
> >
> > The patches in the link below basically add back the cast, but I'm
> > interested to see if any others can come up with a better fix that
> > a) passes the above code generation test
> > b) still keeps sparse happy
> > c) passes the test module and the code inspection
> >
> > If need be I'm OK with a revert of the original patch to fix the issue
> > in the short term, but it seems to me there must be a way to satisfy
> > both tools. We went through several iterations on the way to the final
> > patch that we might be able to pluck something useful from.
>
> For me the below seems to work:

Yep:
https://github.com/ClangBuiltLinux/linux/issues/961#issuecomment-623785987
https://github.com/ClangBuiltLinux/linux/issues/961#issuecomment-624162497
Sedat wrote the same patch 22 days ago; I didn't notice before
starting this thread. I will sign off on his patch, add your
Suggested-by tag, and send shortly.

>
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index b392571c1f1d1..139122e5b25b1 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
>  	if (__builtin_constant_p(nr)) {
>  		asm volatile(LOCK_PREFIX "orb %1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" (CONST_MASK(nr) & 0xff)
> +			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
>  			: "memory");
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
>  	if (__builtin_constant_p(nr)) {
>  		asm volatile(LOCK_PREFIX "andb %1,%0"
>  			: CONST_MASK_ADDR(nr, addr)
> -			: "iq" (CONST_MASK(nr) ^ 0xff));
> +			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff)));
>  	} else {
>  		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
>  			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
>
>
> --
> With Best Regards,
> Andy Shevchenko
>
>


--
Thanks,
~Nick Desaulniers

2020-05-05 17:51:35

by Nick Desaulniers

Subject: Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

On Tue, May 5, 2020 at 10:29 AM Nick Desaulniers
<[email protected]> wrote:
>
> On Tue, May 5, 2020 at 8:14 AM Andy Shevchenko
> <[email protected]> wrote:
> >
> > On Mon, May 04, 2020 at 06:14:43PM -0700, Jesse Brandeburg wrote:
> > > On Mon, 4 May 2020 12:51:12 -0700
> > > Nick Desaulniers <[email protected]> wrote:
> > >
> > > > Sorry for the very late report. It turns out that if your config
> > > > tickles __builtin_constant_p just right, this now produces invalid
> > > > assembly:
> > > >
> > > > $ cat foo.c
> > > > long a(long b, long c) {
> > > > 	asm("orb\t%1, %0" : "+q"(c): "r"(b));
> > > > 	return c;
> > > > }
> > > > $ gcc foo.c
> > > > foo.c: Assembler messages:
> > > > foo.c:2: Error: `%rax' not allowed with `orb'
> > > >
> > > > The "q" constraint only has meanting on -m32 otherwise is treated as
> > > > "r".
> > > >
> > > > Since we have the mask (& 0xff), can we drop the `b` suffix from the
> > > > instruction? Or is a revert more appropriate? Or maybe another way to
> > > > skin this cat?
> > >
> > > Figures that such a small change can create problems :-( Sorry for the
> > > thrash!
> > >
> > > The patches in the link below basically add back the cast, but I'm
> > > interested to see if any others can come up with a better fix that
> > > a) passes the above code generation test
> > > b) still keeps sparse happy
> > > c) passes the test module and the code inspection
> > >
> > > If need be I'm OK with a revert of the original patch to fix the issue
> > > in the short term, but it seems to me there must be a way to satisfy
> > > both tools. We went through several iterations on the way to the final
> > > patch that we might be able to pluck something useful from.
> >
> > For me the below seems to work:
>
> Yep:
> https://github.com/ClangBuiltLinux/linux/issues/961#issuecomment-623785987
> https://github.com/ClangBuiltLinux/linux/issues/961#issuecomment-624162497
> Sedat wrote the same patch 22 days ago; I didn't notice before
> starting this thread. I will sign off on his patch, add your
> Suggested-by tag, and send shortly.

Started a new thread:
https://lore.kernel.org/lkml/[email protected]/T/#u

>
> >
> >
> > diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> > index b392571c1f1d1..139122e5b25b1 100644
> > --- a/arch/x86/include/asm/bitops.h
> > +++ b/arch/x86/include/asm/bitops.h
> > @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
> >  	if (__builtin_constant_p(nr)) {
> >  		asm volatile(LOCK_PREFIX "orb %1,%0"
> >  			: CONST_MASK_ADDR(nr, addr)
> > -			: "iq" (CONST_MASK(nr) & 0xff)
> > +			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
> >  			: "memory");
> >  	} else {
> >  		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> > @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> >  	if (__builtin_constant_p(nr)) {
> >  		asm volatile(LOCK_PREFIX "andb %1,%0"
> >  			: CONST_MASK_ADDR(nr, addr)
> > -			: "iq" (CONST_MASK(nr) ^ 0xff));
> > +			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff)));
> >  	} else {
> >  		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
> >  			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
> >
> >
> > --
> > With Best Regards,
> > Andy Shevchenko
> >
> >
>
>
> --
> Thanks,
> ~Nick Desaulniers



--
Thanks,
~Nick Desaulniers