2022-09-29 22:49:51

by Edgecombe, Rick P

Subject: [PATCH v2 28/39] x86/cet/shstk: Introduce map_shadow_stack syscall

When operating with shadow stacks enabled, the kernel will automatically
allocate shadow stacks for new threads; however, in some cases userspace
will need additional shadow stacks. The main example of this is the
ucontext family of functions, which require userspace to allocate and
pivot to userspace-managed stacks.

Unlike most other user memory permissions, shadow stacks need to be
provisioned with special data in order to be useful. They need to be set up
with a restore token so that userspace can pivot to them via the RSTORSSP
instruction. But the security design of shadow stacks is that they
should not be written to except in limited circumstances. This presents a
problem for userspace: how can it provision this special data without
allowing the shadow stack to be generally writable?

Previously, a new PROT_SHADOW_STACK was attempted, which could be
mprotect()ed from RW permissions after the data was provisioned. This was
found to not be secure enough, as other threads could write to the
shadow stack during the writable window.

The kernel can use a special instruction, WRUSS, to write directly to
userspace shadow stacks. So the solution can be that memory can be mapped
as shadow stack permissions from the beginning (never generally writable
in userspace), and the kernel itself can write the restore token.

First, a new madvise() flag was explored, which could operate on the
PROT_SHADOW_STACK memory. This had a couple of downsides:
1. Extra checks were needed in mprotect() to prevent writable memory from
ever becoming PROT_SHADOW_STACK.
2. Extra checks/vma state were needed in the new madvise() to prevent
restore tokens being written into the middle of pre-used shadow stacks.
It is ideal to prevent restore tokens being added at arbitrary
locations, so the check was to make sure the shadow stack had never been
written to.
3. It stood out from the rest of the madvise flags, as more of a direct
action than a hint at future desired behavior.

So rather than repurpose two existing syscalls (mmap, madvise) that don't
quite fit, just implement a new map_shadow_stack syscall to allow
userspace to map and set up new shadow stacks in one step. While ucontext
is the primary motivator, userspace may have other unforeseen reasons to
set up its own shadow stacks using the WRSS instruction. Toward this end,
provide a flag so that stacks can optionally be set up securely for the
common case of ucontext without enabling WRSS. Or potentially have the
kernel set up the shadow stack in some new way.

The following example demonstrates how to create a new shadow stack with
map_shadow_stack:
void *shstk = map_shadow_stack(adrr, stack_size, SHADOW_STACK_SET_TOKEN);
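
Since there is no libc wrapper yet, userspace would invoke it via
syscall(2). A minimal sketch of such a wrapper (the wrapper name and the
hardcoded syscall number are illustrative only):

#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 451
#endif

/* Thin raw-syscall wrapper; on failure the kernel returns a small
 * negative errno value in the usual way. */
static void *map_shadow_stack(void *addr, size_t size, unsigned int flags)
{
	return (void *)syscall(__NR_map_shadow_stack, addr, size, flags);
}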

Signed-off-by: Rick Edgecombe <[email protected]>

---

v2:
- Change syscall to take address like mmap() for CRIU's usage

v1:
- New patch (replaces PROT_SHADOW_STACK).

arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/include/uapi/asm/mman.h | 2 ++
arch/x86/kernel/shstk.c | 48 +++++++++++++++++++++-----
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/unistd.h | 2 +-
kernel/sys_ni.c | 1 +
6 files changed, 46 insertions(+), 9 deletions(-)

diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..d9639e3e0a33 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
448 common process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv
450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common map_shadow_stack sys_map_shadow_stack

#
# Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index 775dbd3aff73..c9fc57c88fcc 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -12,6 +12,8 @@
((key) & 0x8 ? VM_PKEY_BIT3 : 0))
#endif

+#define SHADOW_STACK_SET_TOKEN 0x1 /* Set up a restore token in the shadow stack */
+
#include <asm-generic/mman.h>

#endif /* _ASM_X86_MMAN_H */
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 04442134aadd..873830d63adc 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -17,6 +17,7 @@
#include <linux/compat.h>
#include <linux/sizes.h>
#include <linux/user.h>
+#include <linux/syscalls.h>
#include <asm/msr.h>
#include <asm/fpu/xstate.h>
#include <asm/fpu/types.h>
@@ -62,24 +63,34 @@ static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
if (write_user_shstk_64((u64 __user *)addr, (u64)ssp))
return -EFAULT;

- *token_addr = addr;
+ if (token_addr)
+ *token_addr = addr;

return 0;
}

-static unsigned long alloc_shstk(unsigned long size)
+static unsigned long alloc_shstk(unsigned long addr, unsigned long size,
+ unsigned long token_offset, bool set_res_tok)
{
int flags = MAP_ANONYMOUS | MAP_PRIVATE;
struct mm_struct *mm = current->mm;
- unsigned long addr, unused;
+ unsigned long mapped_addr, unused;

mmap_write_lock(mm);
- addr = do_mmap(NULL, addr, size, PROT_READ, flags,
- VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
-
+ mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags,
+ VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
mmap_write_unlock(mm);

- return addr;
+ if (!set_res_tok || IS_ERR_VALUE(mapped_addr))
+ goto out;
+
+ if (create_rstor_token(mapped_addr + token_offset, NULL)) {
+ vm_munmap(mapped_addr, size);
+ return -EINVAL;
+ }
+
+out:
+ return mapped_addr;
}

static void unmap_shadow_stack(u64 base, u64 size)
@@ -122,7 +133,7 @@ int shstk_setup(void)
return -EOPNOTSUPP;

size = PAGE_ALIGN(min_t(unsigned long long, rlimit(RLIMIT_STACK), SZ_4G));
- addr = alloc_shstk(size);
+ addr = alloc_shstk(0, size, size, false);
if (IS_ERR_VALUE(addr))
return PTR_ERR((void *)addr);

@@ -174,6 +185,7 @@ int shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,


stack_size = PAGE_ALIGN(stack_size);
+ addr = alloc_shstk(0, stack_size, 0, false);
if (IS_ERR_VALUE(addr))
return PTR_ERR((void *)addr);

@@ -395,6 +407,26 @@ int shstk_disable(void)
return 0;
}

+
+SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags)
+{
+ unsigned long aligned_size;
+
+ if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+ return -ENOSYS;
+
+ /*
+ * An overflow would result in attempting to write the restore token
+ * to the wrong location. Not catastrophic, but just return the right
+ * error code and block it.
+ */
+ aligned_size = PAGE_ALIGN(size);
+ if (aligned_size < size)
+ return -EOVERFLOW;
+
+ return alloc_shstk(addr, aligned_size, size, flags & SHADOW_STACK_SET_TOKEN);
+}
+
long cet_prctl(struct task_struct *task, int option, unsigned long features)
{
if (option == ARCH_CET_LOCK) {
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index a34b0f9a9972..3ae05cbdea5b 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1056,6 +1056,7 @@ asmlinkage long sys_memfd_secret(unsigned int flags);
asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
unsigned long home_node,
unsigned long flags);
+asmlinkage long sys_map_shadow_stack(unsigned long addr, unsigned long size, unsigned int flags);

/*
* Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..b12940ec5926 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -887,7 +887,7 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
__SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)

#undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452

/*
* 32 bit systems traditionally used different
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..cb9aebd34646 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -381,6 +381,7 @@ COND_SYSCALL(vm86old);
COND_SYSCALL(modify_ldt);
COND_SYSCALL(vm86);
COND_SYSCALL(kexec_file_load);
+COND_SYSCALL(map_shadow_stack);

/* s390 */
COND_SYSCALL(s390_pci_mmio_read);
--
2.17.1


2022-10-03 22:47:39

by Kees Cook

Subject: Re: [PATCH v2 28/39] x86/cet/shstk: Introduce map_shadow_stack syscall

On Thu, Sep 29, 2022 at 03:29:25PM -0700, Rick Edgecombe wrote:
> [...]
> The following example demonstrates how to create a new shadow stack with
> map_shadow_stack:
> void *shstk = map_shadow_stack(adrr, stack_size, SHADOW_STACK_SET_TOKEN);

typo: addr

> [...]
> +451 common map_shadow_stack sys_map_shadow_stack

Isn't this "64", not "common"?

> [...]
> +#define SHADOW_STACK_SET_TOKEN 0x1 /* Set up a restore token in the shadow stack */

I think this should get an intro comment, like:

/* Flags for map_shadow_stack(2) */

Also, as with the other UAPI fields, please use "(1ULL << 0)" here.
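
i.e. something along the lines of:

/* Flags for map_shadow_stack(2) */
#define SHADOW_STACK_SET_TOKEN	(1ULL << 0)	/* Set up a restore token in the shadow stack */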

> @@ -62,24 +63,34 @@ static int create_rstor_token(unsigned long ssp, unsigned long *token_addr)
> if (write_user_shstk_64((u64 __user *)addr, (u64)ssp))
> return -EFAULT;
>
> - *token_addr = addr;
> + if (token_addr)
> + *token_addr = addr;
>
> return 0;
> }
>

Can this just be collapsed into the patch that introduces create_rstor_token()?

> -static unsigned long alloc_shstk(unsigned long size)
> +static unsigned long alloc_shstk(unsigned long addr, unsigned long size,
> + unsigned long token_offset, bool set_res_tok)
> {
> int flags = MAP_ANONYMOUS | MAP_PRIVATE;
> struct mm_struct *mm = current->mm;
> - unsigned long addr, unused;
> + unsigned long mapped_addr, unused;
>
> mmap_write_lock(mm);
> - addr = do_mmap(NULL, addr, size, PROT_READ, flags,

Oops, I missed in the other patch that "addr" was being passed here.
(uninitialized?)

> - VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
> -
> + mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags,
> + VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);

I don't see do_mmap() doing anything here to avoid remapping a prior vma
as shstk. Is the intention to allow userspace to convert existing VMAs?
This has caused pain in the past, perhaps force MAP_FIXED_NOREPLACE ?

> [...]
> @@ -174,6 +185,7 @@ int shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
>
>
> stack_size = PAGE_ALIGN(stack_size);
> + addr = alloc_shstk(0, stack_size, 0, false);
> if (IS_ERR_VALUE(addr))
> return PTR_ERR((void *)addr);
>

As mentioned earlier, I was expecting this patch to replace a (missing)
call to alloc_shstk. i.e. expecting:

- addr = alloc_shstk(stack_size);

> @@ -395,6 +407,26 @@ int shstk_disable(void)
> return 0;
> }
>
> +
> +SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags)

Please add kern-doc for this, with some notes. E.g. at least one thing isn't immediately
obvious, maybe more: "addr" must be a multiple of 8.
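
Something along these lines, maybe (wording is just a sketch):

/**
 * sys_map_shadow_stack - map a new shadow stack
 * @addr: suggested address for the mapping (must be a multiple of 8),
 *        or 0 to let the kernel choose
 * @size: size of the shadow stack to map, in bytes
 * @flags: SHADOW_STACK_SET_TOKEN to have the kernel write a restore
 *         token near the end of the new shadow stack
 *
 * Return: address of the new shadow stack, or a negative errno on
 * failure.
 */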

> +{
> + unsigned long aligned_size;
> +
> + if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
> + return -ENOSYS;

This needs to explicitly reject unknown flags[1], or expanding them in the
future becomes very painful:

if (flags & ~(SHADOW_STACK_SET_TOKEN))
return -EINVAL;


[1] https://docs.kernel.org/process/adding-syscalls.html#designing-the-api-planning-for-extension

> +
> + /*
> + * An overflow would result in attempting to write the restore token
> + * to the wrong location. Not catastrophic, but just return the right
> + * error code and block it.
> + */
> + aligned_size = PAGE_ALIGN(size);
> + if (aligned_size < size)
> + return -EOVERFLOW;

The intention here is to allow userspace to ask for _less_ than a page
size multiple, and to put the restore token there?

Is it worth adding a check for size >= 8 here? Or, I guess it would just
immediately crash on the next call?

> +
> + return alloc_shstk(addr, aligned_size, size, flags & SHADOW_STACK_SET_TOKEN);
> +}

--
Kees Cook

2022-10-04 23:16:47

by Edgecombe, Rick P

Subject: Re: [PATCH v2 28/39] x86/cet/shstk: Introduce map_shadow_stack syscall

On Mon, 2022-10-03 at 15:23 -0700, Kees Cook wrote:
> On Thu, Sep 29, 2022 at 03:29:25PM -0700, Rick Edgecombe wrote:
> > [...]
> > The following example demonstrates how to create a new shadow stack
> > with
> > map_shadow_stack:
> > void *shstk = map_shadow_stack(adrr, stack_size,
> > SHADOW_STACK_SET_TOKEN);
>
> typo: addr

Yep, thanks.


>
> > [...]
> > +451 common map_shadow_stack sys_map_shadow_stack
>
> Isn't this "64", not "common"?

Yes, this should have been changed after dropping 32 bit.

>
> > [...]
> > +#define SHADOW_STACK_SET_TOKEN 0x1 /* Set up a restore token
> > in the shadow stack */
>
> I think this should get an intro comment, like:
>
> /* Flags for map_shadow_stack(2) */
>
> Also, as with the other UAPI fields, please use "(1ULL << 0)" here.

Ok.

>
> > @@ -62,24 +63,34 @@ static int create_rstor_token(unsigned long
> > ssp, unsigned long *token_addr)
> > if (write_user_shstk_64((u64 __user *)addr, (u64)ssp))
> > return -EFAULT;
> >
> > - *token_addr = addr;
> > + if (token_addr)
> > + *token_addr = addr;
> >
> > return 0;
> > }
> >
>
> Can this just be collapsed into the patch that introduces
> create_rstor_token()?

I mean, yea, that would be simpler. Breaking the changes apart was left
over from when the signals placed a token, but didn't need this extra
bit of functionality.

>
> > -static unsigned long alloc_shstk(unsigned long size)
> > +static unsigned long alloc_shstk(unsigned long addr, unsigned long
> > size,
> > + unsigned long token_offset, bool
> > set_res_tok)
> > {
> > int flags = MAP_ANONYMOUS | MAP_PRIVATE;
> > struct mm_struct *mm = current->mm;
> > - unsigned long addr, unused;
> > + unsigned long mapped_addr, unused;
> >
> > mmap_write_lock(mm);
> > - addr = do_mmap(NULL, addr, size, PROT_READ, flags,
>
> Oops, I missed in the other patch that "addr" was being passed here.
> (uninitialized?)

Argh, yes. I'll initialize in that patch and remove it here.

>
> > - VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
> > -
> > + mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags,
> > + VM_SHADOW_STACK | VM_WRITE, 0, &unused,
> > NULL);
>
> I don't see do_mmap() doing anything here to avoid remapping a prior
> vma
> as shstk. Is the intention to allow userspace to convert existing
> VMAs?
> This has caused pain in the past, perhaps force MAP_FIXED_NOREPLACE ?

No that is not the intention. It should fail and MAP_FIXED_NOREPLACE
looks like it will fit the bill. Thanks!
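
Roughly something like this in alloc_shstk(), I think (just a sketch; only
force the flag when the caller actually passed an address, so that 0 still
means "kernel picks"):

	if (addr)
		flags |= MAP_FIXED_NOREPLACE;

	mmap_write_lock(mm);
	mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags,
			      VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);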

>
> > [...]
> > @@ -174,6 +185,7 @@ int shstk_alloc_thread_stack(struct task_struct
> > *tsk, unsigned long clone_flags,
> >
> >
> > stack_size = PAGE_ALIGN(stack_size);
> > + addr = alloc_shstk(0, stack_size, 0, false);
> > if (IS_ERR_VALUE(addr))
> > return PTR_ERR((void *)addr);
> >
>
> As mentioned earlier, I was expecting this patch to replace a
> (missing)
> call to alloc_shstk. i.e. expecting:
>
> - addr = alloc_shstk(stack_size);
>
> > @@ -395,6 +407,26 @@ int shstk_disable(void)
> > return 0;
> > }
> >
> > +
> > +SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned
> > long, size, unsigned int, flags)
>
> Please add kern-doc for this, with some notes. E.g. at least one
> thing isn't immediately
> obvious, maybe more: "addr" must be a multiple of 8.

Ok.

>
> > +{
> > + unsigned long aligned_size;
> > +
> > + if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
> > + return -ENOSYS;
>
> This needs to explicitly reject unknown flags[1], or expanding them
> in the
> future becomes very painful:
>
> if (flags & ~(SHADOW_STACK_SET_TOKEN))
> return -EINVAL;
>
>
> [1]
> https://docs.kernel.org/process/adding-syscalls.html#designing-the-api-planning-for-extension
>

Ok, good idea.

> > +
> > + /*
> > + * An overflow would result in attempting to write the restore
> > token
> > + * to the wrong location. Not catastrophic, but just return the
> > right
> > + * error code and block it.
> > + */
> > + aligned_size = PAGE_ALIGN(size);
> > + if (aligned_size < size)
> > + return -EOVERFLOW;
>
> The intention here is to allow userspace to ask for _less_ than a
> page
> size multiple, and to put the restore token there?
>
> Is it worth adding a check for size >= 8 here? Or, I guess it would
> just
> immediately crash on the next call?

Funny you should ask... The glibc changes were doing this and then
looking for the token at the end of the length that it passed (not the
page aligned length). I had changed the kernel at one point to be page
aligned and then had the fun of debugging the results. I thought, glibc
is just wasting shadow stack. It should ask for page aligned shadow
stacks. But HJ argued that the kernel shouldn't second guess what
userspace is asking for based on HW page size details that don't have
to do with the software interface. I was convinced by that argument,
even though glibc is still wasting space.

I could still be convinced the other way though. Glibc still has time
to (and should) change. But yea, that was actually the intention.

>
> > +
> > + return alloc_shstk(addr, aligned_size, size, flags &
> > SHADOW_STACK_SET_TOKEN);
> > +}
>
>

2022-10-04 23:26:57

by H.J. Lu

Subject: Re: [PATCH v2 28/39] x86/cet/shstk: Introduce map_shadow_stack syscall

On Tue, Oct 4, 2022 at 3:56 PM Edgecombe, Rick P
<[email protected]> wrote:
>
> On Mon, 2022-10-03 at 15:23 -0700, Kees Cook wrote:
> > On Thu, Sep 29, 2022 at 03:29:25PM -0700, Rick Edgecombe wrote:
> > > [...]
> > > The following example demonstrates how to create a new shadow stack
> > > with
> > > map_shadow_stack:
> > > void *shstk = map_shadow_stack(adrr, stack_size,
> > > SHADOW_STACK_SET_TOKEN);
> >
> > typo: addr
>
> Yep, thanks.
>
>
> >
> > > [...]
> > > +451 common map_shadow_stack sys_map_shadow_stack
> >
> > Isn't this "64", not "common"?
>
> Yes, this should have been changed after dropping 32 bit.

We don't support ia32. But this is used for x32 which is supported.

> >
> > > [...]
> > > +#define SHADOW_STACK_SET_TOKEN 0x1 /* Set up a restore token
> > > in the shadow stack */
> >
> > I think this should get an intro comment, like:
> >
> > /* Flags for map_shadow_stack(2) */
> >
> > Also, as with the other UAPI fields, please use "(1ULL << 0)" here.
>
> Ok.
>
> >
> > > @@ -62,24 +63,34 @@ static int create_rstor_token(unsigned long
> > > ssp, unsigned long *token_addr)
> > > if (write_user_shstk_64((u64 __user *)addr, (u64)ssp))
> > > return -EFAULT;
> > >
> > > - *token_addr = addr;
> > > + if (token_addr)
> > > + *token_addr = addr;
> > >
> > > return 0;
> > > }
> > >
> >
> > Can this just be collapsed into the patch that introduces
> > create_rstor_token()?
>
> I mean, yea, that would be simpler. Breaking the changes apart was left
> over from when the signals placed a token, but didn't need this extra
> bit of functionality.
>
> >
> > > -static unsigned long alloc_shstk(unsigned long size)
> > > +static unsigned long alloc_shstk(unsigned long addr, unsigned long
> > > size,
> > > + unsigned long token_offset, bool
> > > set_res_tok)
> > > {
> > > int flags = MAP_ANONYMOUS | MAP_PRIVATE;
> > > struct mm_struct *mm = current->mm;
> > > - unsigned long addr, unused;
> > > + unsigned long mapped_addr, unused;
> > >
> > > mmap_write_lock(mm);
> > > - addr = do_mmap(NULL, addr, size, PROT_READ, flags,
> >
> > Oops, I missed in the other patch that "addr" was being passed here.
> > (uninitialized?)
>
> Argh, yes. I'll initialize in that patch and remove it here.
>
> >
> > > - VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
> > > -
> > > + mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags,
> > > + VM_SHADOW_STACK | VM_WRITE, 0, &unused,
> > > NULL);
> >
> > I don't see do_mmap() doing anything here to avoid remapping a prior
> > vma
> > as shstk. Is the intention to allow userspace to convert existing
> > VMAs?
> > This has caused pain in the past, perhaps force MAP_FIXED_NOREPLACE ?
>
> No that is not the intention. It should fail and MAP_FIXED_NOREPLACE
> looks like it will fit the bill. Thanks!
>
> >
> > > [...]
> > > @@ -174,6 +185,7 @@ int shstk_alloc_thread_stack(struct task_struct
> > > *tsk, unsigned long clone_flags,
> > >
> > >
> > > stack_size = PAGE_ALIGN(stack_size);
> > > + addr = alloc_shstk(0, stack_size, 0, false);
> > > if (IS_ERR_VALUE(addr))
> > > return PTR_ERR((void *)addr);
> > >
> >
> > As mentioned earlier, I was expecting this patch to replace a
> > (missing)
> > call to alloc_shstk. i.e. expecting:
> >
> > - addr = alloc_shstk(stack_size);
> >
> > > @@ -395,6 +407,26 @@ int shstk_disable(void)
> > > return 0;
> > > }
> > >
> > > +
> > > +SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned
> > > long, size, unsigned int, flags)
> >
> > Please add kern-doc for this, with some notes. E.g. at least one
> > thing isn't immediately
> > obvious, maybe more: "addr" must be a multiple of 8.
>
> Ok.
>
> >
> > > +{
> > > + unsigned long aligned_size;
> > > +
> > > + if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
> > > + return -ENOSYS;
> >
> > This needs to explicitly reject unknown flags[1], or expanding them
> > in the
> > future becomes very painful:
> >
> > if (flags & ~(SHADOW_STACK_SET_TOKEN))
> > return -EINVAL;
> >
> >
> > [1]
> > https://docs.kernel.org/process/adding-syscalls.html#designing-the-api-planning-for-extension
> >
>
> Ok, good idea.
>
> > > +
> > > + /*
> > > + * An overflow would result in attempting to write the restore
> > > token
> > > + * to the wrong location. Not catastrophic, but just return the
> > > right
> > > + * error code and block it.
> > > + */
> > > + aligned_size = PAGE_ALIGN(size);
> > > + if (aligned_size < size)
> > > + return -EOVERFLOW;
> >
> > The intention here is to allow userspace to ask for _less_ than a
> > page
> > size multiple, and to put the restore token there?
> >
> > Is it worth adding a check for size >= 8 here? Or, I guess it would
> > just
> > immediately crash on the next call?
>
> Funny you should ask... The glibc changes were doing this and then
> looking for the token at the end of the length that it passed (not the
> page aligned length). I had changed the kernel at one point to be page
> aligned and then had the fun of debugging the results. I thought, glibc
> is just wasting shadow stack. It should ask for page aligned shadow
> stacks. But HJ argued that the kernel shouldn't second guess what
> userspace is asking for based on HW page size details that don't have
> to do with the software interface. I was convinced by that argument,
> even though glibc is still wasting space.
>
> I could still be convinced the other way though. Glibc still has time
> to (and should) change. But yea, that was actually the intention.

Glibc requests a shadow stack of a given size and expects the restore
token at the specific location. This is how glibc uses the restore token
to switch to the new shadow stack.
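
Concretely, for something like (a sketch, with an arbitrary non-page-aligned
size and the raw syscall wrapped as map_shadow_stack()):

	size_t size = 0x20800;	/* not a multiple of the page size */
	void *shstk = map_shadow_stack(0, size, SHADOW_STACK_SET_TOKEN);
	/* glibc expects the restore token in the last 8 bytes of the
	 * *requested* size, not of the page-aligned mapping: */
	void *token = (char *)shstk + size - 8;

and that token address is what gets handed to RSTORSSP when pivoting, so
page-aligning the size inside the kernel would move the token out from
under glibc.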

> >
> > > +
> > > + return alloc_shstk(addr, aligned_size, size, flags &
> > > SHADOW_STACK_SET_TOKEN);
> > > +}
> >
> >



--
H.J.

2022-10-10 11:48:41

by Florian Weimer

Subject: Re: [PATCH v2 28/39] x86/cet/shstk: Introduce map_shadow_stack syscall

* Rick Edgecombe:

> When operating with shadow stacks enabled, the kernel will automatically
> allocate shadow stacks for new threads; however, in some cases userspace
> will need additional shadow stacks. The main example of this is the
> ucontext family of functions, which require userspace to allocate and
> pivot to userspace-managed stacks.
>
> Unlike most other user memory permissions, shadow stacks need to be
> provisioned with special data in order to be useful. They need to be set up
> with a restore token so that userspace can pivot to them via the RSTORSSP
> instruction. But the security design of shadow stacks is that they
> should not be written to except in limited circumstances. This presents a
> problem for userspace: how can it provision this special data without
> allowing the shadow stack to be generally writable?
>
> Previously, a new PROT_SHADOW_STACK was attempted, which could be
> mprotect()ed from RW permissions after the data was provisioned. This was
> found to not be secure enough, as other threads could write to the
> shadow stack during the writable window.
>
> The kernel can use a special instruction, WRUSS, to write directly to
> userspace shadow stacks. So the solution can be that memory can be mapped
> as shadow stack permissions from the beginning (never generally writable
> in userspace), and the kernel itself can write the restore token.
>
> First, a new madvise() flag was explored, which could operate on the
> PROT_SHADOW_STACK memory. This had a couple of downsides:
> 1. Extra checks were needed in mprotect() to prevent writable memory from
> ever becoming PROT_SHADOW_STACK.
> 2. Extra checks/vma state were needed in the new madvise() to prevent
> restore tokens being written into the middle of pre-used shadow stacks.
> It is ideal to prevent restore tokens being added at arbitrary
> locations, so the check was to make sure the shadow stack had never been
> written to.
> 3. It stood out from the rest of the madvise flags, as more of a direct
> action than a hint at future desired behavior.
>
> So rather than repurpose two existing syscalls (mmap, madvise) that don't
> quite fit, just implement a new map_shadow_stack syscall to allow
> userspace to map and set up new shadow stacks in one step. While ucontext
> is the primary motivator, userspace may have other unforeseen reasons to
> set up its own shadow stacks using the WRSS instruction. Toward this end,
> provide a flag so that stacks can optionally be set up securely for the
> common case of ucontext without enabling WRSS. Or potentially have the
> kernel set up the shadow stack in some new way.
>
> The following example demonstrates how to create a new shadow stack with
> map_shadow_stack:
> void *shstk = map_shadow_stack(adrr, stack_size, SHADOW_STACK_SET_TOKEN);

Jason has recently been working on vDSO-based getrandom acceleration.
It needs a way for a userspace thread to allocate userspace memory in a
specific way. Jason proposed to use a vDSO call as the interface, not a
system call.

Maybe this approach is applicable here as well? Or we can come up with
a more general interface for such per-thread allocations?

Thanks,
Florian