2022-05-02 23:41:09

by Josh Poimboeuf

Subject: Re: [PATCH 2/3] x86/cpu: Elide KCSAN for cpu_has() and friends

On Mon, May 02, 2022 at 01:07:43PM +0200, Peter Zijlstra wrote:
> vmlinux.o: warning: objtool: enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section
> vmlinux.o: warning: objtool: syscall_enter_from_user_mode+0x28: call to __kcsan_check_access() leaves .noinstr.text section
> vmlinux.o: warning: objtool: syscall_enter_from_user_mode_prepare+0x24: call to __kcsan_check_access() leaves .noinstr.text section
> vmlinux.o: warning: objtool: irqentry_enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section
>
> Reported-by: kernel test robot <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>

An explanation about *why* this fixes it would help, as I have no idea
from looking at the patch.

> ---
> arch/x86/include/asm/cpufeature.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/arch/x86/include/asm/cpufeature.h
> +++ b/arch/x86/include/asm/cpufeature.h
> @@ -51,7 +51,7 @@ extern const char * const x86_power_flag
> extern const char * const x86_bug_flags[NBUGINTS*32];
>
> #define test_cpu_cap(c, bit) \
> - test_bit(bit, (unsigned long *)((c)->x86_capability))
> + arch_test_bit(bit, (unsigned long *)((c)->x86_capability))
>
> /*
> * There are 32 bits/features in each mask word. The high bits
>
>

--
Josh


2022-05-04 08:18:33

by Mark Rutland

Subject: Re: [PATCH 2/3] x86/cpu: Elide KCSAN for cpu_has() and friends

On Mon, May 02, 2022 at 11:47:47AM -0700, Josh Poimboeuf wrote:
> On Mon, May 02, 2022 at 01:07:43PM +0200, Peter Zijlstra wrote:
> > vmlinux.o: warning: objtool: enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section
> > vmlinux.o: warning: objtool: syscall_enter_from_user_mode+0x28: call to __kcsan_check_access() leaves .noinstr.text section
> > vmlinux.o: warning: objtool: syscall_enter_from_user_mode_prepare+0x24: call to __kcsan_check_access() leaves .noinstr.text section
> > vmlinux.o: warning: objtool: irqentry_enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section
> >
> > Reported-by: kernel test robot <[email protected]>
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
>
> An explanation about *why* this fixes it would help, as I have no idea
> from looking at the patch.

How about something like:

| As x86 uses the <asm-generic/bitops/instrumented-*.h> headers, the
| regular forms of all bitops are instrumented with explicit calls to
| KASAN and KCSAN checks. As these are explicit calls, they are not
| suppressed by the noinstr function attribute.
|
| This can result in calls to those check functions in noinstr code, which
| objtool warns about:
|
| vmlinux.o: warning: objtool: enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section
| vmlinux.o: warning: objtool: syscall_enter_from_user_mode+0x28: call to __kcsan_check_access() leaves .noinstr.text section
| vmlinux.o: warning: objtool: syscall_enter_from_user_mode_prepare+0x24: call to __kcsan_check_access() leaves .noinstr.text section
| vmlinux.o: warning: objtool: irqentry_enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section
|
| Prevent this by using the arch_*() bitops, which are the underlying
| bitops without explicit instrumentation.
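
To make that concrete: with the instrumented headers, test_bit() is a
wrapper around arch_test_bit() that emits an explicit check call before
doing the real work. Roughly (a simplified sketch; the exact definition
lives in <asm-generic/bitops/instrumented-non-atomic.h> and may differ
in detail):

  static __always_inline bool
  test_bit(long nr, const volatile unsigned long *addr)
  {
          /* expands to the KASAN/KCSAN checks, e.g. __kcsan_check_access() */
          instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
          return arch_test_bit(nr, addr);
  }

arch_test_bit() is the bare operation with no such call, so using it in
test_cpu_cap() keeps the explicit check calls out of .noinstr.text.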

Thanks,
Mark.

>
> > ---
> > arch/x86/include/asm/cpufeature.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > --- a/arch/x86/include/asm/cpufeature.h
> > +++ b/arch/x86/include/asm/cpufeature.h
> > @@ -51,7 +51,7 @@ extern const char * const x86_power_flag
> > extern const char * const x86_bug_flags[NBUGINTS*32];
> >
> > #define test_cpu_cap(c, bit) \
> > - test_bit(bit, (unsigned long *)((c)->x86_capability))
> > + arch_test_bit(bit, (unsigned long *)((c)->x86_capability))
> >
> > /*
> > * There are 32 bits/features in each mask word. The high bits
> >
> >
>
> --
> Josh
>