2020-02-22 00:03:05

by Jesse Brandeburg

Subject: [PATCH v4 1/2] x86: fix bitops.h warning with a moved cast

Fix many sparse warnings when building with C=1.

When the kernel is compiled with C=1, there are lots of messages like:
arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)

CONST_MASK() is using a signed integer "1" to create the mask which
is later cast to (u8) when used. Move the cast to the definition and
clean up the calling sites to prevent sparse from warning.

The warning occurs because bit positions next to a natural boundary
(7, 15, 23, 31, ...) produce an inverted mask like 0xffffff7f, which the
(u8) cast then truncates to 0x7f (but I'm not a compiler expert). It was
really only "clear_bit" that had problems, and only for bits at the top
of a byte (the high bit).
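
To make that concrete, here is a minimal userspace sketch (not kernel
code; it assumes a 32-bit int, and the _OLD/_NEW macro names are just
for illustration):

#include <stdio.h>

typedef unsigned char u8;

#define CONST_MASK_OLD(nr) (1 << ((nr) & 7))     /* signed int mask */
#define CONST_MASK_NEW(nr) ((u8)1 << ((nr) & 7)) /* u8, promoted to int */

int main(void)
{
	/* inverting the int mask sets all of the high bits */
	printf("%x\n", ~CONST_MASK_OLD(7));        /* ffffff7f */
	/* the (u8) cast at the call site then drops them, which is
	 * exactly the truncation sparse warns about */
	printf("%x\n", (u8)~CONST_MASK_OLD(7));    /* 7f */
	/* XOR with 0xff never sets the high bits, so nothing is lost */
	printf("%x\n", CONST_MASK_NEW(7) ^ 0xff);  /* 7f */
	return 0;
}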

Verified with a test module (see next patch) and assembly inspection
that the patch doesn't introduce any change in generated code.

Signed-off-by: Jesse Brandeburg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
v4: reverse argument order as suggested by David Laight, added reviewed-by
v3: Clean up the header file changes as per peterz.
v2: use correct CC: list
---
arch/x86/include/asm/bitops.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 062cdecb2f24..fed152434ed0 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -46,7 +46,7 @@
  * a mask operation on a byte.
  */
 #define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
-#define CONST_MASK(nr)			(1 << ((nr) & 7))
+#define CONST_MASK(nr)			((u8)1 << ((nr) & 7))
 
 static __always_inline void
 arch_set_bit(long nr, volatile unsigned long *addr)
@@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)CONST_MASK(nr))
+			: "iq" (CONST_MASK(nr))
 			: "memory");
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
@@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
 	if (__builtin_constant_p(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
 			: CONST_MASK_ADDR(nr, addr)
-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" (CONST_MASK(nr) ^ 0xff));
 	} else {
 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");

base-commit: ca7e1fd1026c5af6a533b4b5447e1d2f153e28f2
--
2.24.1


2020-02-22 00:03:05

by Jesse Brandeburg

Subject: [PATCH v4 2/2] lib: make a test module with set/clear bit

Test some bit clears/sets to make sure assembly doesn't change, and
that the set_bit and clear_bit functions work and don't cause sparse
warnings.

Instruct Kbuild to build this file with the extra warning level
-Wextra to catch new issues; it also doesn't hurt to build it with C=1.

This was used to test changes to arch/x86/include/asm/bitops.h.

In particular, sparse (C=1) was very concerned when the last bit
before a natural boundary, like 7 or 31, was being tested, since
clearing bit 7, for instance, produces the sign-extended constant
0xffffff7f.

Recommended usage:
make defconfig
scripts/config -m CONFIG_TEST_BITOPS
make modules_prepare
make C=1 W=1 lib/test_bitops.ko
objdump -S -d lib/test_bitops.ko
insmod lib/test_bitops.ko
rmmod lib/test_bitops.ko
<check dmesg>; there should be no compiler/sparse warnings and no
error messages in the log.
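
On success, dmesg should contain only the module's own load/unload
messages, which (given the module's pr_fmt prefix of KBUILD_MODNAME
and its pr_warn() calls) should look like:

test_bitops: Loaded test module
test_bitops: Unloaded test module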

Signed-off-by: Jesse Brandeburg <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
---
v4: Slight change to bitops_fun last member, as suggested by Andy, added
his reviewed-by too. Added module load/unload to usage. Added copyright.
v3: Update the test to fail if bits aren't cleared, and make the
test reproduce the original issue without patch 1/2, showing that
the issue is fixed in patch 1/2. Thanks PeterZ!
v2: Correct CC: list
---
lib/Kconfig.debug | 13 +++++++
lib/Makefile | 2 +
lib/test_bitops.c | 60 ++++++++++++++++++++++++++++++
tools/testing/selftests/lib/config | 1 +
4 files changed, 76 insertions(+)
create mode 100644 lib/test_bitops.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 69def4a9df00..61a5d00ea064 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1947,6 +1947,19 @@ config TEST_LKM
 
 	  If unsure, say N.
 
+config TEST_BITOPS
+	tristate "Test module for compilation of clear_bit/set_bit operations"
+	depends on m
+	help
+	  This builds the "test_bitops" module that is much like the
+	  TEST_LKM module except that it does a basic exercise of the
+	  clear_bit and set_bit macros to make sure there are no compiler
+	  warnings from C=1 sparse checker or -Wextra compilations. It has
+	  no dependencies and doesn't run or load unless explicitly requested
+	  by name. For example: modprobe test_bitops.
+
+	  If unsure, say N.
+
 config TEST_VMALLOC
 	tristate "Test module for stress/performance analysis of vmalloc allocator"
 	default n
diff --git a/lib/Makefile b/lib/Makefile
index 611872c06926..b18db565b355 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -89,6 +89,8 @@ obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o
 obj-$(CONFIG_TEST_STACKINIT) += test_stackinit.o
 obj-$(CONFIG_TEST_BLACKHOLE_DEV) += test_blackhole_dev.o
 obj-$(CONFIG_TEST_MEMINIT) += test_meminit.o
+obj-$(CONFIG_TEST_BITOPS) += test_bitops.o
+CFLAGS_test_bitops.o += -Werror
 
 obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/

diff --git a/lib/test_bitops.c b/lib/test_bitops.c
new file mode 100644
index 000000000000..fd50b3ae4a14
--- /dev/null
+++ b/lib/test_bitops.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/printk.h>
+
+/* a tiny module only meant to test set/clear_bit */
+
+/* use an enum because that's the most common BITMAP usage */
+enum bitops_fun {
+	BITOPS_4 = 4,
+	BITOPS_7 = 7,
+	BITOPS_11 = 11,
+	BITOPS_31 = 31,
+	BITOPS_88 = 88,
+	BITOPS_LAST = 255,
+	BITOPS_LENGTH = 256
+};
+
+static DECLARE_BITMAP(g_bitmap, BITOPS_LENGTH);
+
+static int __init test_bitops_startup(void)
+{
+	pr_warn("Loaded test module\n");
+	set_bit(BITOPS_4, g_bitmap);
+	set_bit(BITOPS_7, g_bitmap);
+	set_bit(BITOPS_11, g_bitmap);
+	set_bit(BITOPS_31, g_bitmap);
+	set_bit(BITOPS_88, g_bitmap);
+	return 0;
+}
+
+static void __exit test_bitops_unstartup(void)
+{
+	int bit_set;
+
+	clear_bit(BITOPS_4, g_bitmap);
+	clear_bit(BITOPS_7, g_bitmap);
+	clear_bit(BITOPS_11, g_bitmap);
+	clear_bit(BITOPS_31, g_bitmap);
+	clear_bit(BITOPS_88, g_bitmap);
+
+	bit_set = find_first_bit(g_bitmap, BITOPS_LAST);
+	if (bit_set != BITOPS_LAST)
+		pr_err("ERROR: FOUND SET BIT %d\n", bit_set);
+
+	pr_warn("Unloaded test module\n");
+}
+
+module_init(test_bitops_startup);
+module_exit(test_bitops_unstartup);
+
+MODULE_AUTHOR("Jesse Brandeburg <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Bit testing module");
diff --git a/tools/testing/selftests/lib/config b/tools/testing/selftests/lib/config
index 14a77ea4a8da..b80ee3f6e265 100644
--- a/tools/testing/selftests/lib/config
+++ b/tools/testing/selftests/lib/config
@@ -2,3 +2,4 @@ CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
 CONFIG_PRIME_NUMBERS=m
 CONFIG_TEST_STRSCPY=m
+CONFIG_TEST_BITOPS=m
--
2.24.1

2020-02-22 09:40:35

by Andy Shevchenko

Subject: Re: [PATCH v4 1/2] x86: fix bitops.h warning with a moved cast

On Sat, Feb 22, 2020 at 2:04 AM Jesse Brandeburg
<[email protected]> wrote:
>
> Fix many sparse warnings when building with C=1.
>
> When the kernel is compiled with C=1, there are lots of messages like:
> arch/x86/include/asm/bitops.h:77:37: warning: cast truncates bits from constant value (ffffff7f becomes 7f)
>
> CONST_MASK() is using a signed integer "1" to create the mask which
> is later cast to (u8) when used. Move the cast to the definition and
> clean up the calling sites to prevent sparse from warning.
>
> The warning occurs because bit positions next to a natural boundary
> (7, 15, 23, 31, ...) produce an inverted mask like 0xffffff7f, which the
> (u8) cast then truncates to 0x7f (but I'm not a compiler expert). It was
> really only "clear_bit" that had problems, and only for bits at the top
> of a byte (the high bit).
>
> Verified with a test module (see next patch) and assembly inspection
> that the patch doesn't introduce any change in generated code.
>
> Signed-off-by: Jesse Brandeburg <[email protected]>
> Reviewed-by: Andy Shevchenko <[email protected]>
> ---
> v4: reverse argument order as suggested by David Laight, added reviewed-by
> v3: Clean up the header file changes as per peterz.
> v2: use correct CC: list
> ---
> arch/x86/include/asm/bitops.h | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index 062cdecb2f24..fed152434ed0 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -46,7 +46,7 @@
> * a mask operation on a byte.
> */
> #define CONST_MASK_ADDR(nr, addr) WBYTE_ADDR((void *)(addr) + ((nr)>>3))
> -#define CONST_MASK(nr) (1 << ((nr) & 7))
> +#define CONST_MASK(nr) ((u8)1 << ((nr) & 7))
>
> static __always_inline void
> arch_set_bit(long nr, volatile unsigned long *addr)
> @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
> if (__builtin_constant_p(nr)) {
> asm volatile(LOCK_PREFIX "orb %1,%0"
> : CONST_MASK_ADDR(nr, addr)
> - : "iq" ((u8)CONST_MASK(nr))
> + : "iq" (CONST_MASK(nr))
> : "memory");
> } else {
> asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> if (__builtin_constant_p(nr)) {
> asm volatile(LOCK_PREFIX "andb %1,%0"
> : CONST_MASK_ADDR(nr, addr)
> - : "iq" ((u8)~CONST_MASK(nr)));
> + : "iq" (CONST_MASK(nr) ^ 0xff));

I'm wondering if the original order, suggested by Peter Z, would allow
us to drop the (u8) cast in CONST_MASK completely.

> } else {
> asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
> : : RLONG_ADDR(addr), "Ir" (nr) : "memory");
>
> base-commit: ca7e1fd1026c5af6a533b4b5447e1d2f153e28f2
> --
> 2.24.1
>


--
With Best Regards,
Andy Shevchenko

2020-02-24 09:55:03

by Peter Zijlstra

Subject: Re: [PATCH v4 1/2] x86: fix bitops.h warning with a moved cast

On Sat, Feb 22, 2020 at 11:39:57AM +0200, Andy Shevchenko wrote:
> On Sat, Feb 22, 2020 at 2:04 AM Jesse Brandeburg

> > -#define CONST_MASK(nr) (1 << ((nr) & 7))
> > +#define CONST_MASK(nr) ((u8)1 << ((nr) & 7))
> >
> > static __always_inline void
> > arch_set_bit(long nr, volatile unsigned long *addr)
> > @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
> > if (__builtin_constant_p(nr)) {
> > asm volatile(LOCK_PREFIX "orb %1,%0"
> > : CONST_MASK_ADDR(nr, addr)
> > - : "iq" ((u8)CONST_MASK(nr))
> > + : "iq" (CONST_MASK(nr))

Note how this is not equivalent: the old code actually handed in a u8,
while the new code hands in an int. By moving the (u8) cast inside the
parens, you cast 1 to u8, which then instantly gets promoted to 'int'
by the '<<' operator.
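
To see the promotion at compile time, a standalone sketch (assumes C11
static_assert; the u8 typedef is local to the example):

#include <assert.h>

typedef unsigned char u8;

int main(void)
{
	/* the u8 operand is promoted to int before << is applied */
	static_assert(sizeof((u8)1 << 7) == sizeof(int),
		      "shift result is an int, not a u8");
	/* only casting the whole expression yields a u8 again */
	static_assert(sizeof((u8)((u8)1 << 7)) == sizeof(u8),
		      "cast after the shift gives a u8");
	return 0;
}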

> > : "memory");
> > } else {
> > asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> > @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> > if (__builtin_constant_p(nr)) {
> > asm volatile(LOCK_PREFIX "andb %1,%0"
> > : CONST_MASK_ADDR(nr, addr)
> > - : "iq" ((u8)~CONST_MASK(nr)));
> > + : "iq" (CONST_MASK(nr) ^ 0xff));
>
> I'm wondering if the original order, suggested by Peter Z, would allow
> us to drop the (u8) cast in CONST_MASK completely.

I'm thinking it's all nonsense anyway :-); the result of either << or ^
is always promoted to int.

The sparse complaint was that ~CONST_MASK(nr) had high bits set which
were lost, which is true, but a completely stupid warning IMO.

By using 0xff ^ CONST_MASK(nr), those bits will not be set and will not
be lost.

None of that has anything to do with where we place a pointless cast
more or less.
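
A quick loop confirms both points (a standalone userspace sketch using
the patched CONST_MASK() definition, not kernel code):

#include <assert.h>
#include <stdio.h>

typedef unsigned char u8;

#define CONST_MASK(nr) ((u8)1 << ((nr) & 7))

int main(void)
{
	for (int nr = 0; nr < 32; nr++) {
		/* both forms reduce to the same byte value ... */
		assert((u8)~CONST_MASK(nr) ==
		       (u8)(CONST_MASK(nr) ^ 0xff));
		/* ... but only ~ sets high bits, which the (u8) cast
		 * then discards -- the truncation sparse flags */
		assert((~CONST_MASK(nr) & ~0xff) != 0);
		assert(((CONST_MASK(nr) ^ 0xff) & ~0xff) == 0);
	}
	printf("ok\n");
	return 0;
}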

2020-02-24 17:00:58

by Jesse Brandeburg

Subject: Re: [PATCH v4 1/2] x86: fix bitops.h warning with a moved cast

On Mon, 24 Feb 2020 10:54:02 +0100 Peter wrote:
> On Sat, Feb 22, 2020 at 11:39:57AM +0200, Andy Shevchenko wrote:
> > On Sat, Feb 22, 2020 at 2:04 AM Jesse Brandeburg
>
> > > -#define CONST_MASK(nr) (1 << ((nr) & 7))
> > > +#define CONST_MASK(nr) ((u8)1 << ((nr) & 7))
> > >
> > > static __always_inline void
> > > arch_set_bit(long nr, volatile unsigned long *addr)
> > > @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
> > > if (__builtin_constant_p(nr)) {
> > > asm volatile(LOCK_PREFIX "orb %1,%0"
> > > : CONST_MASK_ADDR(nr, addr)
> > > - : "iq" ((u8)CONST_MASK(nr))
> > > + : "iq" (CONST_MASK(nr))
>
> Note how this is not equivalent: the old code actually handed in a u8,
> while the new code hands in an int. By moving the (u8) cast inside the
> parens, you cast 1 to u8, which then instantly gets promoted to 'int'
> by the '<<' operator.

True, which is why I had originally used strongly typed local
variables; as I recall you said those were "sad", so I tried to fix it
with simpler changes instead. Everything here is a negotiation with the
compiler and the tools over how clearly you can communicate the code's
intent and functionality while still keeping it performant.

> > > : "memory");
> > > } else {
> > > asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> > > @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> > > if (__builtin_constant_p(nr)) {
> > > asm volatile(LOCK_PREFIX "andb %1,%0"
> > > : CONST_MASK_ADDR(nr, addr)
> > > - : "iq" ((u8)~CONST_MASK(nr)));
> > > + : "iq" (CONST_MASK(nr) ^ 0xff));
> >
> > I'm wondering if the original order, suggested by Peter Z, would allow
> > us to drop the (u8) cast in CONST_MASK completely.
>
> I'm thinking it's all nonsense anyway :-); the result of either << or ^
> is always promoted to int.

Yeah, I realize this is *all* nonsense, but I *do* see value in making
the code not generate sparse warnings, as long as the end result
doesn't change the generated code. It lets you run more tools to find
bugs, with fewer false positives.

> The sparse complaint was that ~CONST_MASK(nr) had high bits set which
> were lost, which is true, but a completely stupid warning IMO.
>
> By using 0xff ^ CONST_MASK(nr), those bits will not be set and will not
> be lost.
>
> None of that has anything to do with where we place a pointless cast
> more or less.

Well, now that we have the test module, I'll check whether the
simplest possible patch, changing just the one ~CONST_MASK(nr) line to
0xFF ^ CONST_MASK(nr), fixes the issue. A sketch of that change
follows.
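
Something like this, against the same arch/x86/include/asm/bitops.h
hunk (a sketch only, not yet tested):

-			: "iq" ((u8)~CONST_MASK(nr)));
+			: "iq" ((u8)(0xFF ^ CONST_MASK(nr))));

with CONST_MASK() left at its original (1 << ((nr) & 7)) definition.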