2022-11-29 20:16:12

by Jann Horn

Subject: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

find_timens_vvar_page() doesn't work when current's timens does not match
the timens associated with current->mm.
v6 of the series adding this code [1] had some complicated code to deal
with this case, but v7 [2] removed that.

Since the vvar region is designed to only be accessed by vDSO code, and
vDSO code can't run in kthread context, it should be fine to error out in
this case.

Backporting note: This commit depends on the preceding refactoring patch.

[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/lkml/[email protected]/

Fixes: ee3cda8e4606 ("arm64/vdso: Handle faults on timens page")
Fixes: 74205b3fc2ef ("powerpc/vdso: Add support for time namespaces")
Fixes: dffe11e280a4 ("riscv/vdso: Add support for time namespaces")
Fixes: eeab78b05d20 ("s390/vdso: implement generic vdso time namespace support")
Fixes: af34ebeb866f ("x86/vdso: Handle faults on timens page")
Cc: [email protected]
Signed-off-by: Jann Horn <[email protected]>
---
kernel/time/namespace.c | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/kernel/time/namespace.c b/kernel/time/namespace.c
index 761c0ada5142a..7315d0aeb1d21 100644
--- a/kernel/time/namespace.c
+++ b/kernel/time/namespace.c
@@ -194,6 +194,17 @@ static void timens_setup_vdso_data(struct vdso_data *vdata,

struct page *find_timens_vvar_page(struct vm_area_struct *vma)
{
+ /*
+ * We can't handle faults where current's timens does not match the
+ * timens associated with the mm_struct. This can happen if a page fault
+ * occurs in a kthread that is using kthread_use_mm().
+ */
+ if (current->flags & PF_KTHREAD) {
+ pr_warn("%s: kthread %s/%d tried to fault in timens page\n",
+ __func__, current->comm, current->pid);
+ return NULL;
+ }
+
if (likely(vma->vm_mm == current->mm))
return current->nsproxy->time_ns->vvar_page;

--
2.38.1.584.g0f3c55d4c2-goog


2022-11-29 22:01:33

by Thomas Gleixner

Subject: Re: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

On Tue, Nov 29 2022 at 20:18, Jann Horn wrote:

> find_timens_vvar_page() doesn't work when current's timens does not match
> the timens associated with current->mm.
> v6 of the series adding this code [1] had some complicated code to deal
> with this case, but v7 [2] removed that.
>
> Since the vvar region is designed to only be accessed by vDSO code, and
> vDSO code can't run in kthread context, it should be fine to error out in
> this case.

Should? Either it is correct or not.

But the way more interesting question is:

> struct page *find_timens_vvar_page(struct vm_area_struct *vma)
> {
> + /*
> + * We can't handle faults where current's timens does not match the
> + * timens associated with the mm_struct. This can happen if a page fault
> + * occurs in a kthread that is using kthread_use_mm().
> + */

How does a kthread, which obviously did kthread_use_mm(), end up trying to
fault in the time namespace vvar page?

It's probably something nasty, but the changelog has a big information
void.

It also doesn't answer the obvious question of why this is a problem of
the time namespace vvar page, and not a general issue with a kthread,
which borrowed a user mm, ending up in vdso_fault() in the first place.

None of those VDSO (user space) addresses are subject to be faulted in
by anything else than the associated user space task(s).

Thanks,

tglx

2022-11-29 22:39:44

by Jann Horn

Subject: Re: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

On Tue, Nov 29, 2022 at 10:18 PM Thomas Gleixner <[email protected]> wrote:
> On Tue, Nov 29 2022 at 20:18, Jann Horn wrote:
>
> > find_timens_vvar_page() doesn't work when current's timens does not match
> > the timens associated with current->mm.
> > v6 of the series adding this code [1] had some complicated code to deal
> > with this case, but v7 [2] removed that.
> >
> > Since the vvar region is designed to only be accessed by vDSO code, and
> > vDSO code can't run in kthread context, it should be fine to error out in
> > this case.
>
> Should? Either it is correct or not.
>
> But the way more interesting question is:
>
> > struct page *find_timens_vvar_page(struct vm_area_struct *vma)
> > {
> > + /*
> > + * We can't handle faults where current's timens does not match the
> > + * timens associated with the mm_struct. This can happen if a page fault
> > + * occurs in a kthread that is using kthread_use_mm().
> > + */
>
> How does a kthread, which obviously did kthread_use_mm(), end up trying to
> fault in the time namespace vvar page?

By doing copy_from_user()? That's what kthread_use_mm() is for, right?
If you look through the users of kthread_use_mm(), most of them use it
to be able to use the normal usercopy functions. See the users in usb
gadget code, and the VFIO code, and the AMD GPU code. And if you're
doing usercopy on userspace addresses, then you can basically always
hit a vvar page - even if you had somehow checked beforehand what the
address points to, userspace could have moved a vvar region into that
spot in the meantime.

That said, I haven't actually tried it. But I don't think there's
anything in the page fault handling path that distinguishes between
copy_from_user() faults in kthread context and other userspace faults
in a relevant way?
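
For reference, the kthread_use_mm() pattern described here looks roughly
like the following (a simplified kernel-context sketch modeled loosely on
the usb gadget / VFIO style of use, not code from any specific driver;
`my_ctx` and its fields are made up for illustration):

```c
/* Sketch: a kthread borrows a user mm and then does a usercopy.
 * Any user address passed in, including one inside a vvar region,
 * can take the page-fault path from copy_from_user() here, reaching
 * vvar_fault() -> find_timens_vvar_page() with current a kthread.
 * Illustrative only; not taken from any specific driver. */
static int my_worker(void *data)
{
	struct my_ctx *ctx = data;	/* hypothetical context struct */
	char buf[64];

	kthread_use_mm(ctx->mm);	/* temporarily adopt the user mm */

	if (copy_from_user(buf, ctx->user_ptr, sizeof(buf)))
		pr_warn("usercopy failed\n");

	kthread_unuse_mm(ctx->mm);
	return 0;
}
```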

> It's probably something nasty, but the changelog has a big information
> void.
>
> It also doesn't answer the obvious question of why this is a problem of
> the time namespace vvar page, and not a general issue with a kthread,
> which borrowed a user mm, ending up in vdso_fault() in the first place.

Is it a problem if a kthread ends up in the other parts of
vdso_fault() or vvar_fault()? From what I can tell, nothing in there
except for the timens stuff is going to care whether it's hit from a
userspace fault or from a kthread.

Though, looking at it again now, I guess the `sym_offset ==
image->sym_vvar_page` path is also going to misbehave, so I guess we
could try to instead make the vdso/vvar fault handlers bail out in
kthread context for all the architectures, since we're only going to
hit that if userspace is deliberately doing something bad...
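
Concretely, the bail-out suggested here would amount to something like the
following at the top of each architecture's vvar/vdso fault handler (an
illustrative fragment only; the placement and the VM_FAULT_SIGBUS return
value are assumptions, not necessarily what would get merged):

```c
/* Illustrative fragment: refuse vvar/vdso faults coming from a kthread
 * that only borrowed the mm via kthread_use_mm(). A kthread has no
 * meaningful time namespace of its own, so handing out a timens page
 * here would be wrong at best. */
if (current->flags & PF_KTHREAD)
	return VM_FAULT_SIGBUS;
```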

> None of those VDSO (user space) addresses are subject to be faulted in
> by anything else than the associated user space task(s).

Are you saying that it's not possible or that it doesn't happen when
userspace is well-behaved?

2022-11-29 22:52:10

by Jann Horn

Subject: Re: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

On Tue, Nov 29, 2022 at 11:28 PM Jann Horn <[email protected]> wrote:
>
> On Tue, Nov 29, 2022 at 10:18 PM Thomas Gleixner <[email protected]> wrote:
> > On Tue, Nov 29 2022 at 20:18, Jann Horn wrote:
> >
> > > find_timens_vvar_page() doesn't work when current's timens does not match
> > > the timens associated with current->mm.
> > > v6 of the series adding this code [1] had some complicated code to deal
> > > with this case, but v7 [2] removed that.
> > >
> > > Since the vvar region is designed to only be accessed by vDSO code, and
> > > vDSO code can't run in kthread context, it should be fine to error out in
> > > this case.
> >
> > Should? Either it is correct or not.
> >
> > But the way more interesting question is:
> >
> > > struct page *find_timens_vvar_page(struct vm_area_struct *vma)
> > > {
> > > + /*
> > > + * We can't handle faults where current's timens does not match the
> > > + * timens associated with the mm_struct. This can happen if a page fault
> > > + * occurs in a kthread that is using kthread_use_mm().
> > > + */
> >
> > How does a kthread, which obviously did kthread_use_mm(), end up trying to
> > fault in the time namespace vvar page?
>
> By doing copy_from_user()? That's what kthread_use_mm() is for, right?
> If you look through the users of kthread_use_mm(), most of them use it
> to be able to use the normal usercopy functions. See the users in usb
> gadget code, and the VFIO code, and the AMD GPU code. And if you're
> doing usercopy on userspace addresses, then you can basically always
> hit a vvar page - even if you had somehow checked beforehand what the
> address points to, userspace could have moved a vvar region into that
> spot in the meantime.
>
> That said, I haven't actually tried it. But I don't think there's
> anything in the page fault handling path that distinguishes between
> copy_from_user() faults in kthread context and other userspace faults
> in a relevant way?

Ah, but I guess even if this can happen, it's not actually as bad as I
thought, since kthreads are in init_time_ns, and init_time_ns doesn't
have a ->vvar_page, so this isn't going to lead to anything terrible
like page UAF, and it's just a garbage-in-garbage-out scenario.

2022-11-30 01:18:11

by Thomas Gleixner

Subject: Re: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

On Tue, Nov 29 2022 at 23:28, Jann Horn wrote:
> On Tue, Nov 29, 2022 at 10:18 PM Thomas Gleixner <[email protected]> wrote:
>> But the way more interesting question is:
>>
>> > struct page *find_timens_vvar_page(struct vm_area_struct *vma)
>> > {
>> > + /*
>> > + * We can't handle faults where current's timens does not match the
>> > + * timens associated with the mm_struct. This can happen if a page fault
>> > + * occurs in a kthread that is using kthread_use_mm().
>> > + */
>>
>> How does a kthread, which obviously did kthread_use_mm(), end up trying to
>> fault in the time namespace vvar page?
>
> By doing copy_from_user()? That's what kthread_use_mm() is for, right?
> If you look through the users of kthread_use_mm(), most of them use it
> to be able to use the normal usercopy functions. See the users in usb
> gadget code, and the VFIO code, and the AMD GPU code. And if you're
> doing usercopy on userspace addresses, then you can basically always
> hit a vvar page - even if you had somehow checked beforehand what the
> address points to, userspace could have moved a vvar region into that
> spot in the meantime.
>
> That said, I haven't actually tried it. But I don't think there's
> anything in the page fault handling path that distinguishes between
> copy_from_user() faults in kthread context and other userspace faults
> in a relevant way?

True.

>> It also doesn't answer the obvious question of why this is a problem of
>> the time namespace vvar page, and not a general issue with a kthread,
>> which borrowed a user mm, ending up in vdso_fault() in the first place.
>
> Is it a problem if a kthread ends up in the other parts of
> vdso_fault() or vvar_fault()? From what I can tell, nothing in there
> except for the timens stuff is going to care whether it's hit from a
> userspace fault or from a kthread.
>
> Though, looking at it again now, I guess the `sym_offset ==
> image->sym_vvar_page` path is also going to misbehave, so I guess we
> could try to instead make the vdso/vvar fault handlers bail out in
> kthread context for all the architectures, since we're only going to
> hit that if userspace is deliberately doing something bad...

Deliberately or stupidly, it does not matter. But squashing the problem
right at the entry point is definitely better than making it a special
case of timens.

>> None of those VDSO (user space) addresses are subject to be faulted in
>> by anything else than the associated user space task(s).
>
> Are you saying that it's not possible or that it doesn't happen when
> userspace is well-behaved?

My subconscious self told me that a kthread won't do that unless it's
buggered, which makes the vdso fault path the least of our problems, but
thinking more about it: you are right that there are ways that the
kthread ends up with a vdso page address.... Bah!

Still my point stands that this is not a timens VDSO issue, but an issue
of: kthread tries to fault in a VDSO page of whatever nature.

Thanks,

tglx


2022-11-30 01:33:32

by Thomas Gleixner

Subject: Re: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

On Tue, Nov 29 2022 at 23:34, Jann Horn wrote:
> On Tue, Nov 29, 2022 at 11:28 PM Jann Horn <[email protected]> wrote:
>> That said, I haven't actually tried it. But I don't think there's
>> anything in the page fault handling path that distinguishes between
>> copy_from_user() faults in kthread context and other userspace faults
>> in a relevant way?
>
> Ah, but I guess even if this can happen, it's not actually as bad as I
> thought, since kthreads are in init_time_ns, and init_time_ns doesn't
> have a ->vvar_page, so this isn't going to lead to anything terrible
> like page UAF, and it's just a garbage-in-garbage-out scenario.

True, but catching the kthread -> fault (vvar/vdso page) scenario
definitely has value.

Thanks,

tglx

2022-11-30 23:09:55

by David Laight

Subject: RE: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

From: Thomas Gleixner
> Sent: 30 November 2022 00:08
....
> >> None of those VDSO (user space) addresses are subject to be faulted in
> >> by anything else than the associated user space task(s).
> >
> > Are you saying that it's not possible or that it doesn't happen when
> > userspace is well-behaved?
>
> My subconscious self told me that a kthread won't do that unless it's
> buggered, which makes the vdso fault path the least of our problems, but
> thinking more about it: you are right that there are ways that the
> kthread ends up with a vdso page address.... Bah!
>
> Still my point stands that this is not a timens VDSO issue, but an issue
> of: kthread tries to fault in a VDSO page of whatever nature.

Isn't there also the kernel code path where one user thread
reads data from another process's address space?
(It does some unusual calls to the iov_import() functions.)
I can't remember whether it is used by strace or gdb.
But there is certainly the option of getting to access
an 'invalid' address in the other process and then faulting.

ISTR not being convinced that there was a correct check
for user/kernel addresses in it either.

David


2022-12-01 10:07:30

by Jann Horn

Subject: Re: [PATCH 2/2] time/namespace: Forbid timens page faults under kthread_use_mm()

On Wed, Nov 30, 2022 at 11:48 PM David Laight <[email protected]> wrote:
> From: Thomas Gleixner
> > Sent: 30 November 2022 00:08
> ....
> > >> None of those VDSO (user space) addresses are subject to be faulted in
> > >> by anything else than the associated user space task(s).
> > >
> > > Are you saying that it's not possible or that it doesn't happen when
> > > userspace is well-behaved?
> >
> > My subconscious self told me that a kthread won't do that unless it's
> > buggered, which makes the vdso fault path the least of our problems, but
> > thinking more about it: you are right that there are ways that the
> > kthread ends up with a vdso page address.... Bah!
> >
> > Still my point stands that this is not a timens VDSO issue, but an issue
> > of: kthread tries to fault in a VDSO page of whatever nature.
>
> Isn't there also the kernel code path where one user thread
> reads data from another process's address space?
> (It does some unusual calls to the iov_import() functions.)
> I can't remember whether it is used by strace or gdb.
> But there is certainly the option of getting to access
> an 'invalid' address in the other process and then faulting.

That's a different mechanism. /proc/$pid/mem and process_vm_readv()
and PTRACE_PEEKDATA and so on go through get_user_pages_remote() or
pin_user_pages_remote(), which bail out on VMAs with VM_IO or
VM_PFNMAP. The ptrace-based access can also fall back to using
vma->vm_ops->access(), but the special_mapping_vmops used by the vvar
VMA explicitly don't have such a handler:

static const struct vm_operations_struct special_mapping_vmops = {
.close = special_mapping_close,
.fault = special_mapping_fault,
.mremap = special_mapping_mremap,
.name = special_mapping_name,
/* vDSO code relies that VVAR can't be accessed remotely */
.access = NULL,
.may_split = special_mapping_split,
};

One path that I'm not sure about is the Intel i915 GPU virtualization
codepath ppgtt_populate_shadow_entry -> intel_gvt_dma_map_guest_page
-> gvt_dma_map_page -> gvt_pin_guest_page -> vfio_pin_pages ->
vfio_iommu_type1_pin_pages -> vfio_pin_page_external -> vaddr_get_pfns
-> follow_fault_pfn -> fixup_user_fault -> handle_mm_fault. That looks
like it might actually be able to trigger pagefault handling on the
vvar mapping from another process.

> ISTR not being convinced that there was a correct check
> for user/kernel addresses in it either.

The get_user_pages_remote() machinery only works on areas that are
mapped by VMAs (__get_user_pages() bails out if find_extend_vma()
fails and the address is not located in the gate area). There are no
VMAs for kernel memory.