2018-08-27 23:06:07

by Andy Lutomirski

Subject: [PATCH] x86/nmi: Fix some races in NMI uaccess

In NMI context, we might be in the middle of context switching or in
the middle of switch_mm_irqs_off(). In either case, CR3 might not
match current->mm, which could cause copy_from_user_nmi() and
friends to read the wrong memory.

Fix it by adding a new nmi_uaccess_okay() helper and checking it in
copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.

Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Nadav Amit <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
---

The 0day bot is still chewing on this, but I've tested it a bit locally
and it seems to do the right thing.

I've never observed the bug it fixes, but it does appear to fix a bug
unless I've missed something. It's also a prerequisite for Nadav's
fixmap bugfix.

arch/x86/events/core.c | 2 +-
arch/x86/include/asm/tlbflush.h | 16 ++++++++++++++++
arch/x86/lib/usercopy.c | 5 +++++
arch/x86/mm/tlb.c | 3 +++
4 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 5f4829f10129..dfb2f7c0d019 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs

perf_callchain_store(entry, regs->ip);

- if (!current->mm)
+ if (!nmi_uaccess_okay())
return;

if (perf_callchain_user32(regs, entry))
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 89a73bc31622..b23b2625793b 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -230,6 +230,22 @@ struct tlb_state {
};
DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);

+/*
+ * Blindly accessing user memory from NMI context can be dangerous
+ * if we're in the middle of switching the current user task or
+ * switching the loaded mm. It can also be dangerous if we
+ * interrupted some kernel code that was temporarily using a
+ * different mm.
+ */
+static inline bool nmi_uaccess_okay(void)
+{
+ struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+ struct mm_struct *current_mm = current->mm;
+
+ return current_mm && loaded_mm == current_mm &&
+ loaded_mm->pgd == __va(read_cr3_pa());
+}
+
/* Initialize cr4 shadow for this CPU. */
static inline void cr4_init_shadow(void)
{
diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index c8c6ad0d58b8..3f435d7fca5e 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -7,6 +7,8 @@
#include <linux/uaccess.h>
#include <linux/export.h>

+#include <asm/tlbflush.h>
+
/*
* We rely on the nested NMI work to allow atomic faults from the NMI path; the
* nested NMI paths are careful to preserve CR2.
@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
if (__range_not_ok(from, n, TASK_SIZE))
return n;

+ if (!nmi_uaccess_okay())
+ return n;
+
/*
* Even though this function is typically called from NMI/IRQ context
* disable pagefaults so that its behaviour is consistent even when
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 457b281b9339..f4b41d5a93dd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -345,6 +345,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
*/
trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
} else {
+ /* Let NMI code know that CR3 may not match expectations. */
+ this_cpu_write(cpu_tlbstate.loaded_mm, NULL);
+
/* The new ASID is already up to date. */
load_new_mm_cr3(next->pgd, new_asid, false);

--
2.17.1



2018-08-27 23:13:59

by Jann Horn

Subject: Re: [PATCH] x86/nmi: Fix some races in NMI uaccess

On Tue, Aug 28, 2018 at 1:04 AM Andy Lutomirski <[email protected]> wrote:
>
> In NMI context, we might be in the middle of context switching or in
> the middle of switch_mm_irqs_off(). In either case, CR3 might not
> match current->mm, which could cause copy_from_user_nmi() and
> friends to read the wrong memory.
>
> Fix it by adding a new nmi_uaccess_okay() helper and checking it in
> copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.

What about eBPF probes (which I think can be attached to kprobe points
/ tracepoints / perf events) that perform userspace reads / userspace
writes / kernel reads? Can those run in NMI context, and if so, do
they also need special handling?

2018-08-27 23:27:41

by Andy Lutomirski

Subject: Re: [PATCH] x86/nmi: Fix some races in NMI uaccess

On Mon, Aug 27, 2018 at 4:12 PM, Jann Horn <[email protected]> wrote:
> On Tue, Aug 28, 2018 at 1:04 AM Andy Lutomirski <[email protected]> wrote:
>>
>> In NMI context, we might be in the middle of context switching or in
>> the middle of switch_mm_irqs_off(). In either case, CR3 might not
>> match current->mm, which could cause copy_from_user_nmi() and
>> friends to read the wrong memory.
>>
>> Fix it by adding a new nmi_uaccess_okay() helper and checking it in
>> copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.
>
> What about eBPF probes (which I think can be attached to kprobe points
> / tracepoints / perf events) that perform userspace reads / userspace
> writes / kernel reads? Can those run in NMI context, and if so, do
> they also need special handling?

I assume they can run in NMI context, which might be problematic in
and of itself. For example, does BPF adequately protect against a
BPF program accessing a map while bpf(2) is modifying it? It seems
like bpf_prog_active is intended to serve this purpose.

But I don't see any obvious mechanism for eBPF programs to read user memory.
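
(For reference: the bpf_prog_active guard mentioned above is a per-CPU
recursion counter taken around BPF program invocation. A minimal sketch of
that pattern follows; the wrapper name run_probe_prog is made up for
illustration, and this is not the exact code in kernel/trace/bpf_trace.c.)

/*
 * Minimal sketch of a bpf_prog_active-style guard (illustrative only):
 * bump a per-CPU counter before running a program so that a program
 * fired from IRQ/NMI context cannot nest on top of one (or of a map
 * update) already in flight on this CPU.
 */
static DEFINE_PER_CPU(int, bpf_prog_active);

static unsigned int run_probe_prog(struct bpf_prog *prog, void *ctx)
{
	unsigned int ret = 0;

	preempt_disable();
	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1))
		goto out;	/* nested invocation: refuse to run */

	ret = BPF_PROG_RUN(prog, ctx);
out:
	__this_cpu_dec(bpf_prog_active);
	preempt_enable();
	return ret;
}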

2018-08-27 23:36:21

by Jann Horn

Subject: Re: [PATCH] x86/nmi: Fix some races in NMI uaccess

On Tue, Aug 28, 2018 at 1:26 AM Andy Lutomirski <[email protected]> wrote:
>
> On Mon, Aug 27, 2018 at 4:12 PM, Jann Horn <[email protected]> wrote:
> > On Tue, Aug 28, 2018 at 1:04 AM Andy Lutomirski <[email protected]> wrote:
> >>
> >> In NMI context, we might be in the middle of context switching or in
> >> the middle of switch_mm_irqs_off(). In either case, CR3 might not
> >> match current->mm, which could cause copy_from_user_nmi() and
> >> friends to read the wrong memory.
> >>
> >> Fix it by adding a new nmi_uaccess_okay() helper and checking it in
> >> copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.
> >
> > What about eBPF probes (which I think can be attached to kprobe points
> > / tracepoints / perf events) that perform userspace reads / userspace
> > writes / kernel reads? Can those run in NMI context, and if so, do
> > they also need special handling?
>
> I assume they can run in NMI context, which might be problematic in
> and of itself. For example, does BPF adequately protect against a
> BPF program accessing a map while bpf(2) is modifying it? It seems
> like bpf_prog_active is intended to serve this purpose.
>
> But I don't see any obvious mechanism for eBPF programs to read user memory.

Look in kernel/trace/bpf_trace.c, which defines a bunch of eBPF
helpers that can only be called from privileged eBPF code. Ah, but I
misremembered: the userspace write helper does have a guard against
interrupts, just the arbitrary read helper doesn't.

BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
{
	int ret;

	ret = probe_kernel_read(dst, unsafe_ptr, size);
	if (unlikely(ret < 0))
		memset(dst, 0, size);

	return ret;
}
[...]
BPF_CALL_3(bpf_probe_write_user, void *, unsafe_ptr, const void *, src,
	   u32, size)
{
	/*
	 * Ensure we're in user context which is safe for the helper to
	 * run. This helper has no business in a kthread.
	 *
	 * access_ok() should prevent writing to non-user memory, but in
	 * some situations (nommu, temporary switch, etc) access_ok() does
	 * not provide enough validation, hence the check on KERNEL_DS.
	 */

	if (unlikely(in_interrupt() ||
		     current->flags & (PF_KTHREAD | PF_EXITING)))
		return -EPERM;
	if (unlikely(uaccess_kernel()))
		return -EPERM;
	if (!access_ok(VERIFY_WRITE, unsafe_ptr, size))
		return -EPERM;

	return probe_kernel_write(unsafe_ptr, src, size);
}

2018-08-28 01:33:12

by Rik van Riel

Subject: Re: [PATCH] x86/nmi: Fix some races in NMI uaccess

On Mon, 2018-08-27 at 16:04 -0700, Andy Lutomirski wrote:

> +++ b/arch/x86/mm/tlb.c
> @@ -345,6 +345,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> */
> trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
> } else {
> + /* Let NMI code know that CR3 may not match expectations. */

I don't get it. This is in the "ASID is up to date, do not
need a TLB flush" path.

In what case do we have a TLB that is fully up to date, but
a CR3 that does not match expectations?

Doesn't the CR3 check in nmi_uaccess_ok already catch the
window of time where the CR3 has already been switched over
to that of the next task?

What is special about this path wrt nmi_uaccess_ok that is
not also true for the need_flush branch right above it?

What am I missing?

> + this_cpu_write(cpu_tlbstate.loaded_mm, NULL);
> +
> /* The new ASID is already up to date. */
> load_new_mm_cr3(next->pgd, new_asid, false);



--
All Rights Reversed.



2018-08-28 02:12:18

by Andy Lutomirski

Subject: Re: [PATCH] x86/nmi: Fix some races in NMI uaccess

On Mon, Aug 27, 2018 at 6:31 PM, Rik van Riel <[email protected]> wrote:
> On Mon, 2018-08-27 at 16:04 -0700, Andy Lutomirski wrote:
>
>> +++ b/arch/x86/mm/tlb.c
>> @@ -345,6 +345,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>> */
>> trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
>> } else {
>> + /* Let NMI code know that CR3 may not match expectations. */
>
> I don't get it. This is in the "ASID is up to date, do not
> need a TLB flush" path.
>
> In what case do we have a TLB that is fully up to date, but
> a CR3 that does not match expectations?
>
> Doesn't the CR3 check in nmi_uaccess_ok already catch the
> window of time where the CR3 has already been switched over
> to that of the next task?
>
> What is special about this path wrt nmi_uaccess_ok that is
> not also true for the need_flush branch right above it?
>
> What am I missing?

Nothing. My patch is buggy. ETOLITTLESLEEP.

I could drop this part of the patch entirely. Or I could drop the
loaded_mm->pgd == __va(read_cr3_pa()) check and instead make sure that
loaded_mm is NULL at any point at which loaded_mm might not match CR3.
The latter will be faster in any (hypothetical) virtualization
environment where CR3 reads trap. I don't know if we have any such
cases where perf works and we care about performance, though.

--Andy
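
(A minimal sketch of the second option described above, assuming loaded_mm is
reliably set to NULL across any window where CR3 might not match it;
illustrative only, not a posted patch:)

static inline bool nmi_uaccess_okay(void)
{
	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);

	/* loaded_mm is NULL whenever CR3 may be out of sync with it. */
	return loaded_mm && loaded_mm == current->mm;
}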

2018-08-28 13:52:35

by Rik van Riel

Subject: Re: [PATCH] x86/nmi: Fix some races in NMI uaccess

On Mon, 2018-08-27 at 19:10 -0700, Andy Lutomirski wrote:
> On Mon, Aug 27, 2018 at 6:31 PM, Rik van Riel <[email protected]>
> wrote:
>
> > What is special about this path wrt nmi_uaccess_ok that is
> > not also true for the need_flush branch right above it?
> >
> > What am I missing?
>
> Nothing. My patch is buggy. ETOLITTLESLEEP.
>
> I could drop this part of the patch entirely. Or I could drop the
> loaded_mm->pgd == __va(read_cr3_pa()) check and instead make sure that
> loaded_mm is NULL at any point at which loaded_mm might not match
> CR3.
> The latter will be faster in any (hypothetical) virtualization
> environment where CR3 reads trap. I don't know if we have any such
> cases where perf works and we care about performance, though.

Moving that loaded_mm = NULL assignment up a few
lines, so it comes before the "if (need_flush)"
test and covers both branches, should indeed
take care of that.

--
All Rights Reversed.
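
(Concretely, the placement Rik suggests would look roughly like this inside
switch_mm_irqs_off(), with the write hoisted above the need_flush test; an
untested sketch against the v1 patch, not a posted change:)

	choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);

	/* Let NMI code know that CR3 may not match expectations. */
	this_cpu_write(cpu_tlbstate.loaded_mm, NULL);

	if (need_flush) {
		this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
		this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
		load_new_mm_cr3(next->pgd, new_asid, true);
	} else {
		/* The new ASID is already up to date. */
		load_new_mm_cr3(next->pgd, new_asid, false);
	}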



2018-08-28 18:02:21

by Rik van Riel

Subject: [PATCH v2] x86/nmi: Fix some races in NMI uaccess

On Mon, 27 Aug 2018 16:04:16 -0700
Andy Lutomirski <[email protected]> wrote:

> The 0day bot is still chewing on this, but I've tested it a bit locally
> and it seems to do the right thing.

Hi Andy,

the version of the patch below should fix the bug we talked about
in email yesterday. It should automatically cover kernel threads
in lazy TLB mode, because current->mm will be NULL, while the
cpu_tlbstate.loaded_mm should never be NULL.

---8<---
From: Andy Lutomirski <[email protected]>
Subject: x86/nmi: Fix some races in NMI uaccess

In NMI context, we might be in the middle of context switching or in
the middle of switch_mm_irqs_off(). In either case, CR3 might not
match current->mm, which could cause copy_from_user_nmi() and
friends to read the wrong memory.

Fix it by adding a new nmi_uaccess_okay() helper and checking it in
copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.

Cc: [email protected]
Cc: Peter Zijlstra <[email protected]>
Cc: Nadav Amit <[email protected]>
Signed-off-by: Rik van Riel <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
---
arch/x86/events/core.c | 2 +-
arch/x86/include/asm/tlbflush.h | 15 +++++++++++++++
arch/x86/lib/usercopy.c | 5 +++++
arch/x86/mm/tlb.c | 9 ++++++++-
4 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 5f4829f10129..dfb2f7c0d019 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs

perf_callchain_store(entry, regs->ip);

- if (!current->mm)
+ if (!nmi_uaccess_okay())
return;

if (perf_callchain_user32(regs, entry))
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 29c9da6c62fc..dafe649b18e1 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -246,6 +246,21 @@ struct tlb_state {
};
DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);

+/*
+ * Blindly accessing user memory from NMI context can be dangerous
+ * if we're in the middle of switching the current user task or
+ * switching the loaded mm. It can also be dangerous if we
+ * interrupted some kernel code that was temporarily using a
+ * different mm.
+ */
+static inline bool nmi_uaccess_okay(void)
+{
+ struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+ struct mm_struct *current_mm = current->mm;
+
+ return (loaded_mm == current_mm);
+}
+
/* Initialize cr4 shadow for this CPU. */
static inline void cr4_init_shadow(void)
{
diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index c8c6ad0d58b8..3f435d7fca5e 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -7,6 +7,8 @@
#include <linux/uaccess.h>
#include <linux/export.h>

+#include <asm/tlbflush.h>
+
/*
* We rely on the nested NMI work to allow atomic faults from the NMI path; the
* nested NMI paths are careful to preserve CR2.
@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
if (__range_not_ok(from, n, TASK_SIZE))
return n;

+ if (!nmi_uaccess_okay())
+ return n;
+
/*
* Even though this function is typically called from NMI/IRQ context
* disable pagefaults so that its behaviour is consistent even when
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 9517d1b2a281..5b75e2fed2b6 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -305,6 +305,14 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,

choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);

+ /*
+ * Ensure cpu_tlbstate.loaded_mm differs from current->mm
+ * until the context switch is complete, so NMI handlers
+ * do not try to access userspace. See nmi_uaccess_okay.
+ */
+ this_cpu_write(cpu_tlbstate.loaded_mm, next);
+ smp_wmb();
+
if (need_flush) {
this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
@@ -335,7 +343,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
if (next != &init_mm)
this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);

- this_cpu_write(cpu_tlbstate.loaded_mm, next);
this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
}



2018-08-29 03:48:19

by Andy Lutomirski

Subject: Re: [PATCH v2] x86/nmi: Fix some races in NMI uaccess

On Tue, Aug 28, 2018 at 10:56 AM, Rik van Riel <[email protected]> wrote:
> On Mon, 27 Aug 2018 16:04:16 -0700
> Andy Lutomirski <[email protected]> wrote:
>
>> The 0day bot is still chewing on this, but I've tested it a bit locally
>> and it seems to do the right thing.
>
> Hi Andy,
>
> the version of the patch below should fix the bug we talked about
> in email yesterday. It should automatically cover kernel threads
> in lazy TLB mode, because current->mm will be NULL, while the
> cpu_tlbstate.loaded_mm should never be NULL.
>

That's better than mine. I tweaked it a bit and added some debugging,
and I got this:

https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/fixes&id=dd956eba16646fd0b15c3c0741269dfd84452dac

I made the loaded_mm handling a little more conservative to make it
more obvious that switch_mm_irqs_off() is safe regardless of exactly
when it gets called relative to switching current.
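
(The "more conservative" handling referred to here, which is also the "dance"
Rik questions below, amounts to writing cpu_tlbstate.loaded_mm twice around
the CR3 switch, roughly as in this sketch; illustrative, not the literal
commit:)

	/* Sketch: mark loaded_mm as untrusted for the whole CR3 switch. */
	this_cpu_write(cpu_tlbstate.loaded_mm, NULL);
	barrier();

	/* ... choose the ASID, update tlb_gen, and write CR3 ... */
	load_new_mm_cr3(next->pgd, new_asid, need_flush);

	barrier();
	this_cpu_write(cpu_tlbstate.loaded_mm, next);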

2018-08-29 15:19:08

by Rik van Riel

Subject: Re: [PATCH v2] x86/nmi: Fix some races in NMI uaccess

On Tue, 2018-08-28 at 20:46 -0700, Andy Lutomirski wrote:
> On Tue, Aug 28, 2018 at 10:56 AM, Rik van Riel <[email protected]>
> wrote:
> > On Mon, 27 Aug 2018 16:04:16 -0700
> > Andy Lutomirski <[email protected]> wrote:
> >
> > > The 0day bot is still chewing on this, but I've tested it a bit
> > > locally
> > > and it seems to do the right thing.
> >
> > Hi Andy,
> >
> > the version of the patch below should fix the bug we talked about
> > in email yesterday. It should automatically cover kernel threads
> > in lazy TLB mode, because current->mm will be NULL, while the
> > cpu_tlbstate.loaded_mm should never be NULL.
> >
>
> That's better than mine. I tweaked it a bit and added some
> debugging,
> and I got this:
>
>
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/fixes&id=dd956eba16646fd0b15c3c0741269dfd84452dac
>
> I made the loaded_mm handling a little more conservative to make it
> more obvious that switch_mm_irqs_off() is safe regardless of exactly
> when it gets called relative to switching current.

I am not convinced that the dance of writing
cpu_tlbstate.loaded_mm twice, with a barrier on
each end, is useful or necessary.

At the time switch_mm_irqs_off returns, nmi_uaccess_ok()
will still return false, because we have not switched
"current" to the task that owns the next mm_struct yet.

We just have to make sure to:
1) Change cpu_tlbstate.loaded_mm before we manipulate
CR3, and
2) Change "current" only once enough of the mm stuff has
been switched, __switch_to seems to get that right.

Between the time switch_mm_irqs_off() sets cpu_tlbstate
to the next mm, and __switch_to() moves over current,
nmi_uaccess_ok() will return false.

--
All Rights Reversed.
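
(To spell out the ordering being described, a comment-form timeline;
illustrative only:)

/*
 * Ordering during a normal context switch (sketch):
 *
 *   context_switch()
 *     switch_mm_irqs_off(prev_mm, next_mm, next_task)
 *       cpu_tlbstate.loaded_mm = next_mm;   <-- no longer equals current->mm
 *       write CR3 for next_mm
 *     switch_to()
 *       __switch_to()                       <-- "current" becomes next_task,
 *                                                loaded_mm == current->mm again
 *
 * Any NMI that lands between the two marked points sees
 * loaded_mm != current->mm, so nmi_uaccess_okay() returns false.
 */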



2018-08-29 15:37:46

by Andy Lutomirski

Subject: Re: [PATCH v2] x86/nmi: Fix some races in NMI uaccess

On Wed, Aug 29, 2018 at 8:17 AM, Rik van Riel <[email protected]> wrote:
> On Tue, 2018-08-28 at 20:46 -0700, Andy Lutomirski wrote:
>> On Tue, Aug 28, 2018 at 10:56 AM, Rik van Riel <[email protected]>
>> wrote:
>> > On Mon, 27 Aug 2018 16:04:16 -0700
>> > Andy Lutomirski <[email protected]> wrote:
>> >
>> > > The 0day bot is still chewing on this, but I've tested it a bit
>> > > locally
>> > > and it seems to do the right thing.
>> >
>> > Hi Andy,
>> >
>> > the version of the patch below should fix the bug we talked about
>> > in email yesterday. It should automatically cover kernel threads
>> > in lazy TLB mode, because current->mm will be NULL, while the
>> > cpu_tlbstate.loaded_mm should never be NULL.
>> >
>>
>> That's better than mine. I tweaked it a bit and added some
>> debugging,
>> and I got this:
>>
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/fixes&id=dd956eba16646fd0b15c3c0741269dfd84452dac
>>
>> I made the loaded_mm handling a little more conservative to make it
>> more obvious that switch_mm_irqs_off() is safe regardless of exactly
>> when it gets called relative to switching current.
>
> I am not convinced that the dance of writing
> cpu_tlbstate.loaded_mm twice, with a barrier on
> each end, is useful or necessary.
>
> At the time switch_mm_irqs_off returns, nmi_uaccess_ok()
> will still return false, because we have not switched
> "current" to the task that owns the next mm_struct yet.
>
> We just have to make sure to:
> 1) Change cpu_tlbstate.loaded_mm before we manipulate
> CR3, and
> 2) Change "current" only once enough of the mm stuff has
> been switched, __switch_to seems to get that right.
>
> Between the time switch_mm_irqs_off() sets cpu_tlbstate
> to the next mm, and __switch_to() moves over current,
> nmi_uaccess_ok() will return false.

All true, but I think it stops working as soon as someone starts
calling switch_mm_irqs_off() for some other reason, such as during
text_poke(). And that was the original motivation for this patch.

2018-08-29 15:51:03

by Rik van Riel

Subject: Re: [PATCH v2] x86/nmi: Fix some races in NMI uaccess

On Wed, 2018-08-29 at 08:36 -0700, Andy Lutomirski wrote:
> On Wed, Aug 29, 2018 at 8:17 AM, Rik van Riel <[email protected]>
> wrote:
> > On Tue, 2018-08-28 at 20:46 -0700, Andy Lutomirski wrote:
> > > On Tue, Aug 28, 2018 at 10:56 AM, Rik van Riel <[email protected]>
> > > wrote:
> > > > On Mon, 27 Aug 2018 16:04:16 -0700
> > > > Andy Lutomirski <[email protected]> wrote:
> > > >
> > > > > The 0day bot is still chewing on this, but I've tested it a
> > > > > bit
> > > > > locally
> > > > > and it seems to do the right thing.
> > > >
> > > > Hi Andy,
> > > >
> > > > the version of the patch below should fix the bug we talked
> > > > about
> > > > in email yesterday. It should automatically cover kernel
> > > > threads
> > > > in lazy TLB mode, because current->mm will be NULL, while the
> > > > cpu_tlbstate.loaded_mm should never be NULL.
> > > >
> > >
> > > That's better than mine. I tweaked it a bit and added some
> > > debugging,
> > > and I got this:
> > >
> > >
> >
> >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/fixes&id=dd956eba16646fd0b15c3c0741269dfd84452dac
> > >
> > > I made the loaded_mm handling a little more conservative to make
> > > it
> > > more obvious that switch_mm_irqs_off() is safe regardless of
> > > exactly
> > > when it gets called relative to switching current.
> >
> > I am not convinced that the dance of writing
> > cpu_tlbstate.loaded_mm twice, with a barrier on
> > each end, is useful or necessary.
> >
> > At the time switch_mm_irqs_off returns, nmi_uaccess_ok()
> > will still return false, because we have not switched
> > "current" to the task that owns the next mm_struct yet.
> >
> > We just have to make sure to:
> > 1) Change cpu_tlbstate.loaded_mm before we manipulate
> > CR3, and
> > 2) Change "current" only once enough of the mm stuff has
> > been switched, __switch_to seems to get that right.
> >
> > Between the time switch_mm_irqs_off() sets cpu_tlbstate
> > to the next mm, and __switch_to() moves over current,
> > nmi_uaccess_ok() will return false.
>
> All true, but I think it stops working as soon as someone starts
> calling switch_mm_irqs_off() for some other reason, such as during
> text_poke(). And that was the original motivation for this patch.

How does calling switch_mm_irqs_off() for text_poke()
change around current->mm and cpu_tlbstate.loaded_mm?

Does current->mm stay the same throughout the entire
text_poke() chain, while cpu_tlbstate.loaded_mm is the
only thing that is changed out?

If so, then yes the double assignment is indeed
necessary. Good point.

--
All Rights Reversed.



2018-08-29 16:16:37

by Andy Lutomirski

Subject: Re: [PATCH v2] x86/nmi: Fix some races in NMI uaccess

On Wed, Aug 29, 2018 at 8:49 AM, Rik van Riel <[email protected]> wrote:
> On Wed, 2018-08-29 at 08:36 -0700, Andy Lutomirski wrote:
>> On Wed, Aug 29, 2018 at 8:17 AM, Rik van Riel <[email protected]>
>> wrote:
>> > On Tue, 2018-08-28 at 20:46 -0700, Andy Lutomirski wrote:
>> > > On Tue, Aug 28, 2018 at 10:56 AM, Rik van Riel <[email protected]>
>> > > wrote:
>> > > > On Mon, 27 Aug 2018 16:04:16 -0700
>> > > > Andy Lutomirski <[email protected]> wrote:
>> > > >
>> > > > > The 0day bot is still chewing on this, but I've tested it a
>> > > > > bit
>> > > > > locally
>> > > > > and it seems to do the right thing.
>> > > >
>> > > > Hi Andy,
>> > > >
>> > > > the version of the patch below should fix the bug we talked
>> > > > about
>> > > > in email yesterday. It should automatically cover kernel
>> > > > threads
>> > > > in lazy TLB mode, because current->mm will be NULL, while the
>> > > > cpu_tlbstate.loaded_mm should never be NULL.
>> > > >
>> > >
>> > > That's better than mine. I tweaked it a bit and added some
>> > > debugging,
>> > > and I got this:
>> > >
>> > >
>> >
>> >
>> > > https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/fixes&id=dd956eba16646fd0b15c3c0741269dfd84452dac
>> > >
>> > > I made the loaded_mm handling a little more conservative to make
>> > > it
>> > > more obvious that switch_mm_irqs_off() is safe regardless of
>> > > exactly
>> > > when it gets called relative to switching current.
>> >
>> > I am not convinced that the dance of writing
>> > cpu_tlbstate.loaded_mm twice, with a barrier on
>> > each end, is useful or necessary.
>> >
>> > At the time switch_mm_irqs_off returns, nmi_uaccess_ok()
>> > will still return false, because we have not switched
>> > "current" to the task that owns the next mm_struct yet.
>> >
>> > We just have to make sure to:
>> > 1) Change cpu_tlbstate.loaded_mm before we manipulate
>> > CR3, and
>> > 2) Change "current" only once enough of the mm stuff has
>> > been switched, __switch_to seems to get that right.
>> >
>> > Between the time switch_mm_irqs_off() sets cpu_tlbstate
>> > to the next mm, and __switch_to() moves over current,
>> > nmi_uaccess_ok() will return false.
>>
>> All true, but I think it stops working as soon as someone starts
>> calling switch_mm_irqs_off() for some other reason, such as during
>> text_poke(). And that was the original motivation for this patch.
>
> How does calling switch_mm_irqs_off() for text_poke()
> change around current->mm and cpu_tlbstate.loaded_mm?
>
> Does current->mm stay the same throughout the entire
> text_poke() chain, while cpu_tlbstate.loaded_mm is the
> only thing that is changed out?

This is exactly what happens. It seemed considerably more complicated
and error-prone to fiddle with current->mm. Instead the idea was to
turn off IRQs, get NMI to stay out of the way, and put everything back
the way it was before the scheduler, any syscalls, etc could notice
that we were messing around.

--Andy
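
(A rough sketch of the temporary-mm pattern Andy describes: with IRQs off,
only cpu_tlbstate.loaded_mm and CR3 are redirected at a private mm while
current->mm stays put. The helper names and exact shape below are assumptions
in the spirit of Nadav's text_poke() rework, not the merged code:)

/*
 * Sketch: temporarily run this CPU on a private mm with IRQs disabled.
 * current->mm never changes, so nmi_uaccess_okay() keeps NMI handlers
 * away from user memory while loaded_mm and CR3 point at the private mm.
 */
typedef struct {
	struct mm_struct *mm;
} temp_mm_state_t;

static temp_mm_state_t use_temporary_mm(struct mm_struct *temp_mm)
{
	temp_mm_state_t prev;

	lockdep_assert_irqs_disabled();
	prev.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
	switch_mm_irqs_off(NULL, temp_mm, current);
	return prev;
}

static void unuse_temporary_mm(temp_mm_state_t prev)
{
	lockdep_assert_irqs_disabled();
	switch_mm_irqs_off(NULL, prev.mm, current);
}

A text_poke()-style caller would then do local_irq_save(), switch in the
private mm with use_temporary_mm(), write through the temporary mapping, and
finish with unuse_temporary_mm() and local_irq_restore().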