2022-10-25 22:27:11

by Alexander Potapenko

Subject: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

Make sure usercopy hooks from linux/instrumented.h are invoked for
copy_from_user_nmi().
This fixes KMSAN false positives reported when dumping opcodes for a
stack trace.

Cc: Andrew Morton <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: [email protected]
Signed-off-by: Alexander Potapenko <[email protected]>
---
arch/x86/lib/usercopy.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index f1bb186171562..24b48af274173 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -6,6 +6,7 @@

#include <linux/uaccess.h>
#include <linux/export.h>
+#include <linux/instrumented.h>

#include <asm/tlbflush.h>

@@ -44,7 +45,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
* called from other contexts.
*/
pagefault_disable();
+ instrument_copy_from_user_before(to, from, n);
ret = raw_copy_from_user(to, from, n);
+ instrument_copy_from_user_after(to, from, n, ret);
pagefault_enable();

return ret;
--
2.38.0.135.g90850a2211-goog



2022-10-26 09:37:56

by Peter Zijlstra

Subject: Re: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

On Wed, Oct 26, 2022 at 12:17:55AM +0200, Alexander Potapenko wrote:
> Make sure usercopy hooks from linux/instrumented.h are invoked for
> copy_from_user_nmi().
> This fixes KMSAN false positives reported when dumping opcodes for a
> stack trace.
>
> Cc: Andrew Morton <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Kees Cook <[email protected]>
> Cc: [email protected]
> Signed-off-by: Alexander Potapenko <[email protected]>
> ---
> arch/x86/lib/usercopy.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
> index f1bb186171562..24b48af274173 100644
> --- a/arch/x86/lib/usercopy.c
> +++ b/arch/x86/lib/usercopy.c
> @@ -6,6 +6,7 @@
>
> #include <linux/uaccess.h>
> #include <linux/export.h>
> +#include <linux/instrumented.h>
>
> #include <asm/tlbflush.h>
>
> @@ -44,7 +45,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
> * called from other contexts.
> */
> pagefault_disable();
> + instrument_copy_from_user_before(to, from, n);
> ret = raw_copy_from_user(to, from, n);
> + instrument_copy_from_user_after(to, from, n, ret);
> pagefault_enable();
>
> return ret;

Is all that instrumentation NMI safe? ISTR having seen locks in some of
that *SAN stuff.

Also did this want:

Fixes: 59298997df89 ("x86/uaccess: avoid check_object_size() in copy_from_user_nmi()")

?

2022-10-27 09:00:15

by Peter Zijlstra

Subject: Re: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> A bigger issue from the NMI perspective is probably
> having __msan_poison_alloca() inserted in every non-noinstr kernel
> function, because that hook may acquire the stackdepot lock.

*urgghhh* that's broken, that must not be. There is a *TON* of NMI
functions that are non-noinstr.

What's worse, it seems to do a memory allocation as well, and that's out
the window with PREEMPT_RT, where you can't even do GFP_ATOMIC allocations
from regular IRQ context.

That function is wholly unacceptable to be added to every kernel
function.


2022-10-27 19:02:10

by Alexander Potapenko

Subject: Re: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

On Thu, Oct 27, 2022 at 1:05 AM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> > A bigger issue from the NMI perspective is probably
> > having __msan_poison_alloca() inserted in every non-noinstr kernel
> > function, because that hook may acquire the stackdepot lock.
>
> *urgghhh* that's broken, that must not be. There is a *TON* of NMI
> functions that are non-noinstr.

__msan_poison_alloca() is guarded by kmsan_in_runtime(), which is
currently implemented as:

static __always_inline bool kmsan_in_runtime(void)
{
	if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
		return true;
	return kmsan_get_context()->kmsan_in_runtime;
}

I think the easiest way to fix the NMI situation would be to add an
"if (in_nmi()) return true" check there?
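
Something like this (untested sketch, just to show what I mean):

static __always_inline bool kmsan_in_runtime(void)
{
	/* Never enter the KMSAN runtime from NMI context. */
	if (in_nmi())
		return true;
	if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
		return true;
	return kmsan_get_context()->kmsan_in_runtime;
}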

That would currently render kmsan_unpoison_memory() useless in NMI
context, but I don't think we need the kmsan_in_runtime() check there,
because unpoisoning is self-contained and normally does not recurse
(I guess we can tolerate a pr_err() on the rare assertion-violation
path?).

> What's worse, it seems to do a memory allocation as well, and that's out
> the window with PREEMPT_RT where you can't do even GFP_ATOMIC from
> regular IRQ context.

Yes, there's a lazy call to alloc_pages() in lib/stackdepot.c that is
made when we run out of storage space.
It would be a pity to ignore everything that happens inside regular
IRQs (by making kmsan_in_runtime() bail out on in_irq()) - we do
occasionally see errors from there, e.g.
https://syzkaller.appspot.com/bug?id=233563e79a8e00f86412eb3d2fb4eb1f425e70c3
We could make stackdepot avoid allocating memory in IRQs/NMIs and hope
that the allocations done from regular contexts keep up with the
storage consumed by __msan_poison_alloca() calls from interrupts.
Another option would be to preallocate a very big chunk of memory for
stackdepot and never allocate again.
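
Roughly like this (purely illustrative - the helper name and the way
lib/stackdepot.c grows its storage are my assumptions, not the actual
code):

/*
 * Hypothetical guard in the stack depot's "grow the storage" path:
 * only allocate from task context, otherwise rely on space set aside
 * earlier and let the save operation fail gracefully.
 */
static void *depot_grow_storage(gfp_t alloc_flags, unsigned int order)
{
	if (in_nmi() || in_irq())
		return NULL;	/* no allocations from IRQ/NMI context */
	return (void *)__get_free_pages(alloc_flags, order);
}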

These tricks, however, won't save us from acquiring depot_lock in
lib/stackdepot.c every time we want to create a new origin.
But that should not be a problem by itself, because we always do
kmsan_enter_runtime() before accessing the stack depot - i.e. the lock
won't be taken recursively.
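
To illustrate the pattern (simplified sketch; kmsan_record_stack() and
its locals are placeholders, not the exact mm/kmsan/ code):

/* Sketch of how the KMSAN hooks serialize access to the stack depot. */
static depot_stack_handle_t kmsan_record_stack(unsigned long *entries,
					       unsigned int nr_entries)
{
	depot_stack_handle_t handle;

	/* Hooks bail out if we are already inside the runtime... */
	if (!kmsan_enabled || kmsan_in_runtime())
		return 0;
	/*
	 * ...so by the time the stack depot is reached, kmsan_in_runtime
	 * is set for this context and depot_lock cannot be taken
	 * recursively from another KMSAN hook.
	 */
	kmsan_enter_runtime();
	handle = stack_depot_save(entries, nr_entries, GFP_ATOMIC);
	kmsan_leave_runtime();
	return handle;
}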

Given that PREEMPT_RT is not the default at the moment, shall we make
KMSAN incompatible with it for the time being?

> That function is wholly unacceptable to be added to every kernel
> function.
>



--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Liana Sebastian
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

2022-10-27 19:12:35

by Kees Cook

Subject: Re: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

On Thu, Oct 27, 2022 at 11:26:50AM -0700, Alexander Potapenko wrote:
> On Thu, Oct 27, 2022 at 1:05 AM Peter Zijlstra <[email protected]> wrote:
> >
> > On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> > > A bigger issue from the NMI perspective is probably
> > > having __msan_poison_alloca() inserted in every non-noinstr kernel
> > > function, because that hook may acquire the stackdepot lock.
> >
> > *urgghhh* that's broken, that must not be. There is a *TON* of NMI
> > functions that are non-noinstr.
>
> __msan_poison_alloca() is guarded by kmsan_in_runtime(), which is
> currently implemented as:
>
> static __always_inline bool kmsan_in_runtime(void)
> {
> 	if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
> 		return true;
> 	return kmsan_get_context()->kmsan_in_runtime;
> }
>
> I think the easiest way to fix the NMI situation would be adding "if
> in_nmi() return true"?

It might help to look through these threads:

https://lore.kernel.org/lkml/[email protected]/
https://lore.kernel.org/all/[email protected]/

I wandered around attempting to deal with in_nmi(), etc., and in
the end just dropped the attempt to cover it. It's worth noting that
copy_from_user_nmi() exists on 1 architecture and has exactly 1
call-site...

--
Kees Cook

2022-10-27 20:37:20

by Peter Zijlstra

Subject: Re: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

On Thu, Oct 27, 2022 at 11:58:35AM -0700, Kees Cook wrote:

> I wandered around attempting to deal with in_nmi(), etc., and in
> the end just dropped the attempt to cover it. It's worth noting that
> copy_from_user_nmi() exists on 1 architecture and has exactly 1
> call-site...

Yeah, back when I wrote it, it was a lot more complicated because we
could not reliably take a #PF from NMI context; it did manual page-walks,
kmap_atomic()s and memcpy(). That's all fixed now and it's really mostly
a rudiment -- except for these instrumentation issues, it seems.

2022-10-28 00:00:27

by Alexander Potapenko

Subject: Re: [PATCH] x86/uaccess: instrument copy_from_user_nmi()

On Thu, Oct 27, 2022 at 11:58 AM Kees Cook <[email protected]> wrote:
>
> On Thu, Oct 27, 2022 at 11:26:50AM -0700, Alexander Potapenko wrote:
> > On Thu, Oct 27, 2022 at 1:05 AM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Wed, Oct 26, 2022 at 11:38:53AM -0700, Alexander Potapenko wrote:
> > > > A bigger issue from the NMI perspective is probably
> > > > having __msan_poison_alloca() inserted in every non-noinstr kernel
> > > > function, because that hook may acquire the stackdepot lock.
> > >
> > > *urgghhh* that's broken, that must not be. There is a *TON* of NMI
> > > functions that are non-noinstr.
> >
> > __msan_poison_alloca() is guarded by kmsan_in_runtime(), which is
> > currently implemented as:
> >
> > static __always_inline bool kmsan_in_runtime(void)
> > {
> > 	if ((hardirq_count() >> HARDIRQ_SHIFT) > 1)
> > 		return true;
> > 	return kmsan_get_context()->kmsan_in_runtime;
> > }
> >
> > I think the easiest way to fix the NMI situation would be adding "if
> > in_nmi() return true"?
>
> It might help to look through these threads:
>
> https://lore.kernel.org/lkml/[email protected]/
> https://lore.kernel.org/all/[email protected]/

Sorry, I missed that email; I should have responded earlier.

> I wandered around attempting to deal with in_nmi(), etc., and in
> the end just dropped the attempt to cover it. It's worth noting that
> copy_from_user_nmi() exists on 1 architecture and has exactly 1
> call-site...

It doesn't really matter that much for KASAN, because a missing
addressability check only means missing some (possibly rare) bugs.
For KMSAN a missing initialization results in false positives, and we
have already started seeing them: show_opcodes() copies data into a
local buffer and prints it, but without a call to
kmsan_unpoison_memory() that results in error reports about opcodes[]
being uninitialized.

So for this particular case I want to ensure kmsan_unpoison_memory()
can be called from NMI context (by removing the kmsan_in_runtime()
check from it), but to be on the safe side we'll also have to make
__msan_poison_alloca() do nothing under in_nmi().
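
Roughly (untested; treat the hook bodies as illustrative of the
direction, not the exact mm/kmsan/ code):

/* Make the alloca hook a no-op in NMI context: we must not touch the
 * stack depot (its lock, a possible allocation) from there. */
void __msan_poison_alloca(void *address, uintptr_t size, char *descr)
{
	if (!kmsan_enabled || in_nmi() || kmsan_in_runtime())
		return;
	/* ...existing shadow poisoning and origin creation... */
}

/* Keep unpoisoning available in NMI context: it is self-contained and
 * does not go through the stack depot, so no kmsan_in_runtime() check. */
void kmsan_unpoison_memory(const void *address, size_t size)
{
	if (!kmsan_enabled)
		return;
	/* ...wipe the shadow for [address, address + size) directly... */
}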


> --
> Kees Cook


--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Liana Sebastian
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg