The kernel has recently gained support for shadow stacks, currently on
x86 only using its CET feature, but both arm64 and RISC-V have
equivalent features (GCS and Zicfiss respectively); I am actively
working on GCS[1]. With shadow stacks the hardware maintains an
additional stack containing only the return addresses recorded by
function calls. This stack is not generally writeable by userspace,
and the hardware ensures that any return goes to the recorded address.
This provides some protection against ROP attacks and makes it easier
to collect call stacks. These shadow stacks are allocated in the
address space of the userspace process.
Our API for shadow stacks does not currently offer userspace any
flexibility in managing the allocation of shadow stacks for newly
created threads; instead the kernel allocates a new shadow stack with
the same size as the normal stack whenever a thread is created with the
feature enabled. Stacks allocated in this way are freed by the
kernel when the thread exits or when shadow stacks are disabled for the
thread. This lack of flexibility and control isn't ideal: in the vast
majority of cases the shadow stack will be over-allocated, and the
implicit allocation and deallocation is not consistent with other
interfaces. As far as I can tell the interface was done in this manner
mainly because the shadow stack patches were in development since before
clone3() was implemented.
Since clone3() is readily extensible, let's add support for specifying a
shadow stack when creating a new thread or process, in a similar manner
to how the normal stack is specified, keeping the current implicit
allocation behaviour if one is not specified either with clone3() or
through the use of clone(). Unlike normal stacks, only the shadow stack
size is specified; similar issues to those that led to the creation of
map_shadow_stack() apply.
Please note that the x86 portions of this code are build tested only; I
don't appear to have a system that can run CET available to me, though I
have done testing with an integration into my pending work for GCS. There
is some possibility that the arm64 implementation may require the use of
clone3() and explicit userspace allocation of shadow stacks; this is
still under discussion.
A new architecture feature Kconfig option for shadow stacks is added
here; this was suggested as part of the review comments for the arm64
GCS series, and since we need to detect if shadow stacks are supported it
seemed sensible to roll it in here.
[1] https://lore.kernel.org/r/[email protected]/
Signed-off-by: Mark Brown <[email protected]>
---
Changes in v4:
- Formatting changes.
- Use a define for minimum shadow stack size and move some basic
validation to fork.c.
- Link to v3: https://lore.kernel.org/r/[email protected]
Changes in v3:
- Rebase onto v6.7-rc2.
- Remove stale shadow_stack in internal kargs.
- If a shadow stack is specified unconditionally use it regardless of
CLONE_ parameters.
- Force enable shadow stacks in the selftest.
- Update changelogs for RISC-V feature rename.
- Link to v2: https://lore.kernel.org/r/[email protected]
Changes in v2:
- Rebase onto v6.7-rc1.
- Remove ability to provide preallocated shadow stack, just specify the
desired size.
- Link to v1: https://lore.kernel.org/r/[email protected]
---
Mark Brown (5):
mm: Introduce ARCH_HAS_USER_SHADOW_STACK
fork: Add shadow stack support to clone3()
selftests/clone3: Factor more of main loop into test_clone3()
selftests/clone3: Allow tests to flag if -E2BIG is a valid error code
kselftest/clone3: Test shadow stack support
arch/x86/Kconfig | 1 +
arch/x86/include/asm/shstk.h | 11 +-
arch/x86/kernel/process.c | 2 +-
arch/x86/kernel/shstk.c | 56 ++++--
fs/proc/task_mmu.c | 2 +-
include/linux/mm.h | 2 +-
include/linux/sched/task.h | 1 +
include/uapi/linux/sched.h | 4 +
kernel/fork.c | 53 ++++--
mm/Kconfig | 6 +
tools/testing/selftests/clone3/clone3.c | 200 +++++++++++++++++-----
tools/testing/selftests/clone3/clone3_selftests.h | 7 +
12 files changed, 268 insertions(+), 77 deletions(-)
---
base-commit: 98b1cc82c4affc16f5598d4fa14b1858671b2263
change-id: 20231019-clone3-shadow-stack-15d40d2bf536
Best regards,
--
Mark Brown <[email protected]>
Since multiple architectures have support for shadow stacks, and we need
to select support for this feature in several places in the generic code,
provide a generic config option that the architectures can select.
Suggested-by: David Hildenbrand <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Signed-off-by: Mark Brown <[email protected]>
---
arch/x86/Kconfig | 1 +
fs/proc/task_mmu.c | 2 +-
include/linux/mm.h | 2 +-
mm/Kconfig | 6 ++++++
4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..14b7703a9a2b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1952,6 +1952,7 @@ config X86_USER_SHADOW_STACK
depends on AS_WRUSS
depends on X86_64
select ARCH_USES_HIGH_VMA_FLAGS
+ select ARCH_HAS_USER_SHADOW_STACK
select X86_CET
help
Shadow stack protection is a hardware feature that detects function
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index ef2eb12906da..f0a904aeee8e 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -699,7 +699,7 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
[ilog2(VM_UFFD_MINOR)] = "ui",
#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
-#ifdef CONFIG_X86_USER_SHADOW_STACK
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
[ilog2(VM_SHADOW_STACK)] = "ss",
#endif
};
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 418d26608ece..10462f354614 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -341,7 +341,7 @@ extern unsigned int kobjsize(const void *objp);
#endif
#endif /* CONFIG_ARCH_HAS_PKEYS */
-#ifdef CONFIG_X86_USER_SHADOW_STACK
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
/*
* VM_SHADOW_STACK should not be set with VM_SHARED because of lack of
* support core mm.
diff --git a/mm/Kconfig b/mm/Kconfig
index 89971a894b60..6713bb3b0b48 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1270,6 +1270,12 @@ config LOCK_MM_AND_FIND_VMA
bool
depends on !STACK_GROWSUP
+config ARCH_HAS_USER_SHADOW_STACK
+ bool
+ help
+	  The architecture has hardware support for userspace shadow
+	  stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
+
source "mm/damon/Kconfig"
endmenu
--
2.30.2
In order to make it easier to add more configuration for the tests, and
to better support runtime detection of when tests can be run, pass the
structure describing the tests into test_clone3() rather than picking
the arguments out of it, and have that function do all the per-test work.
No functional change.
Signed-off-by: Mark Brown <[email protected]>
---
tools/testing/selftests/clone3/clone3.c | 77 ++++++++++++++++-----------------
1 file changed, 37 insertions(+), 40 deletions(-)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index 3c9bf0cd82a8..1108bd8e36d6 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -30,6 +30,19 @@ enum test_mode {
CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG,
};
+typedef bool (*filter_function)(void);
+typedef size_t (*size_function)(void);
+
+struct test {
+ const char *name;
+ uint64_t flags;
+ size_t size;
+ size_function size_function;
+ int expected;
+ enum test_mode test_mode;
+ filter_function filter;
+};
+
static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
{
struct __clone_args args = {
@@ -104,30 +117,40 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
return 0;
}
-static bool test_clone3(uint64_t flags, size_t size, int expected,
- enum test_mode test_mode)
+static void test_clone3(const struct test *test)
{
+ size_t size;
int ret;
+ if (test->filter && test->filter()) {
+ ksft_test_result_skip("%s\n", test->name);
+ return;
+ }
+
+ if (test->size_function)
+ size = test->size_function();
+ else
+ size = test->size;
+
+ ksft_print_msg("Running test '%s'\n", test->name);
+
ksft_print_msg(
"[%d] Trying clone3() with flags %#" PRIx64 " (size %zu)\n",
- getpid(), flags, size);
- ret = call_clone3(flags, size, test_mode);
+ getpid(), test->flags, size);
+ ret = call_clone3(test->flags, size, test->test_mode);
ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n",
- getpid(), ret, expected);
- if (ret != expected) {
+ getpid(), ret, test->expected);
+ if (ret != test->expected) {
ksft_print_msg(
"[%d] Result (%d) is different than expected (%d)\n",
- getpid(), ret, expected);
- return false;
+ getpid(), ret, test->expected);
+ ksft_test_result_fail("%s\n", test->name);
+ return;
}
- return true;
+ ksft_test_result_pass("%s\n", test->name);
}
-typedef bool (*filter_function)(void);
-typedef size_t (*size_function)(void);
-
static bool not_root(void)
{
if (getuid() != 0) {
@@ -155,16 +178,6 @@ static size_t page_size_plus_8(void)
return getpagesize() + 8;
}
-struct test {
- const char *name;
- uint64_t flags;
- size_t size;
- size_function size_function;
- int expected;
- enum test_mode test_mode;
- filter_function filter;
-};
-
static const struct test tests[] = {
{
.name = "simple clone3()",
@@ -314,24 +327,8 @@ int main(int argc, char *argv[])
ksft_set_plan(ARRAY_SIZE(tests));
test_clone3_supported();
- for (i = 0; i < ARRAY_SIZE(tests); i++) {
- if (tests[i].filter && tests[i].filter()) {
- ksft_test_result_skip("%s\n", tests[i].name);
- continue;
- }
-
- if (tests[i].size_function)
- size = tests[i].size_function();
- else
- size = tests[i].size;
-
- ksft_print_msg("Running test '%s'\n", tests[i].name);
-
- ksft_test_result(test_clone3(tests[i].flags, size,
- tests[i].expected,
- tests[i].test_mode),
- "%s\n", tests[i].name);
- }
+ for (i = 0; i < ARRAY_SIZE(tests); i++)
+ test_clone3(&tests[i]);
ksft_finished();
}
--
2.30.2
Unlike with the normal stack, there is no API for configuring the shadow
stack for a new thread; instead the kernel will dynamically allocate a new
shadow stack with the same size as the normal stack. This appears to be
due to the shadow stack series having been in development since before the
more extensible clone3() was added, rather than anything more deliberate.
Add a parameter to clone3() specifying the size of a shadow stack for
the newly created process. If no shadow stack is specified then the
existing implicit allocation behaviour is maintained.
If the architecture does not support shadow stacks, the shadow stack size
parameter must be zero; architectures that do support the feature are
expected to enforce the same requirement on individual systems that lack
shadow stack support.
Update the existing x86 implementation to pay attention to the newly added
arguments; in order to maintain compatibility we use the existing behaviour
if no shadow stack is specified. Minimal validation is done of the supplied
parameters; detailed enforcement is left to when the thread is executed.
Since we are now using more fields from the kernel_clone_args we pass that
into the shadow stack code rather than individual fields.
Signed-off-by: Mark Brown <[email protected]>
---
arch/x86/include/asm/shstk.h | 11 +++++----
arch/x86/kernel/process.c | 2 +-
arch/x86/kernel/shstk.c | 56 ++++++++++++++++++++++++++++++--------------
include/linux/sched/task.h | 1 +
include/uapi/linux/sched.h | 4 ++++
kernel/fork.c | 53 +++++++++++++++++++++++++++++++----------
6 files changed, 92 insertions(+), 35 deletions(-)
diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
index 42fee8959df7..8be7b0a909c3 100644
--- a/arch/x86/include/asm/shstk.h
+++ b/arch/x86/include/asm/shstk.h
@@ -6,6 +6,7 @@
#include <linux/types.h>
struct task_struct;
+struct kernel_clone_args;
struct ksignal;
#ifdef CONFIG_X86_USER_SHADOW_STACK
@@ -16,8 +17,8 @@ struct thread_shstk {
long shstk_prctl(struct task_struct *task, int option, unsigned long arg2);
void reset_thread_features(void);
-unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags,
- unsigned long stack_size);
+unsigned long shstk_alloc_thread_stack(struct task_struct *p,
+ const struct kernel_clone_args *args);
void shstk_free(struct task_struct *p);
int setup_signal_shadow_stack(struct ksignal *ksig);
int restore_signal_shadow_stack(void);
@@ -26,8 +27,10 @@ static inline long shstk_prctl(struct task_struct *task, int option,
unsigned long arg2) { return -EINVAL; }
static inline void reset_thread_features(void) {}
static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p,
- unsigned long clone_flags,
- unsigned long stack_size) { return 0; }
+ const struct kernel_clone_args *args)
+{
+ return 0;
+}
static inline void shstk_free(struct task_struct *p) {}
static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; }
static inline int restore_signal_shadow_stack(void) { return 0; }
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b6f4e8399fca..a9ca80ea5056 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -207,7 +207,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
* is disabled, new_ssp will remain 0, and fpu_clone() will know not to
* update it.
*/
- new_ssp = shstk_alloc_thread_stack(p, clone_flags, args->stack_size);
+ new_ssp = shstk_alloc_thread_stack(p, args);
if (IS_ERR_VALUE(new_ssp))
return PTR_ERR((void *)new_ssp);
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 59e15dd8d0f8..0d1325d2d94a 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -191,38 +191,58 @@ void reset_thread_features(void)
current->thread.features_locked = 0;
}
-unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
- unsigned long stack_size)
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+ const struct kernel_clone_args *args)
{
struct thread_shstk *shstk = &tsk->thread.shstk;
+ unsigned long clone_flags = args->flags;
unsigned long addr, size;
/*
* If shadow stack is not enabled on the new thread, skip any
- * switch to a new shadow stack.
+ * implicit switch to a new shadow stack and reject attempts to
+	 * explicitly specify one.
*/
- if (!features_enabled(ARCH_SHSTK_SHSTK))
- return 0;
+ if (!features_enabled(ARCH_SHSTK_SHSTK)) {
+ if (args->shadow_stack_size)
+ return (unsigned long)ERR_PTR(-EINVAL);
- /*
- * For CLONE_VFORK the child will share the parents shadow stack.
- * Make sure to clear the internal tracking of the thread shadow
- * stack so the freeing logic run for child knows to leave it alone.
- */
- if (clone_flags & CLONE_VFORK) {
- shstk->base = 0;
- shstk->size = 0;
return 0;
}
/*
- * For !CLONE_VM the child will use a copy of the parents shadow
- * stack.
+ * If the user specified a shadow stack then do some basic
+ * validation and use it, otherwise fall back to a default
+ * shadow stack size if the clone_flags don't indicate an
+ * allocation is unneeded.
*/
- if (!(clone_flags & CLONE_VM))
- return 0;
+ if (args->shadow_stack_size) {
+ size = args->shadow_stack_size;
+ } else {
+ /*
+		 * For CLONE_VFORK the child will share the parent's
+		 * shadow stack. Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for the child knows to leave it alone.
+ */
+ if (clone_flags & CLONE_VFORK) {
+ shstk->base = 0;
+ shstk->size = 0;
+ return 0;
+ }
+
+ /*
+ * For !CLONE_VM the child will use a copy of the
+		 * parent's shadow stack.
+ */
+ if (!(clone_flags & CLONE_VM))
+ return 0;
+
+ size = args->stack_size;
+
+ }
- size = adjust_shstk_size(stack_size);
+ size = adjust_shstk_size(size);
addr = alloc_shstk(0, size, 0, false);
if (IS_ERR_VALUE(addr))
return addr;
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index a23af225c898..e86a09cfccd8 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -41,6 +41,7 @@ struct kernel_clone_args {
void *fn_arg;
struct cgroup *cgrp;
struct css_set *cset;
+ unsigned long shadow_stack_size;
};
/*
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..a998b6d0c897 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -84,6 +84,8 @@
* kernel's limit of nested PID namespaces.
* @cgroup: If CLONE_INTO_CGROUP is specified set this to
* a file descriptor for the cgroup.
+ * @shadow_stack_size: Specify the size of the shadow stack to allocate
+ * for the child process.
*
* The structure is versioned by size and thus extensible.
* New struct members must go at the end of the struct and
@@ -101,12 +103,14 @@ struct clone_args {
__aligned_u64 set_tid;
__aligned_u64 set_tid_size;
__aligned_u64 cgroup;
+ __aligned_u64 shadow_stack_size;
};
#endif
#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
#define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
+#define CLONE_ARGS_SIZE_VER3 96 /* sizeof fourth published struct */
/*
* Scheduling policies
diff --git a/kernel/fork.c b/kernel/fork.c
index 10917c3e1f03..35131acd43d2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -121,6 +121,11 @@
*/
#define MAX_THREADS FUTEX_TID_MASK
+/*
+ * Require that shadow stacks can store at least one element
+ */
+#define SHADOW_STACK_SIZE_MIN 8
+
/*
* Protected counters by write_lock_irq(&tasklist_lock)
*/
@@ -3067,7 +3072,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
CLONE_ARGS_SIZE_VER1);
BUILD_BUG_ON(offsetofend(struct clone_args, cgroup) !=
CLONE_ARGS_SIZE_VER2);
- BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER2);
+ BUILD_BUG_ON(offsetofend(struct clone_args, shadow_stack_size) !=
+ CLONE_ARGS_SIZE_VER3);
+ BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER3);
if (unlikely(usize > PAGE_SIZE))
return -E2BIG;
@@ -3100,16 +3107,17 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
return -EINVAL;
*kargs = (struct kernel_clone_args){
- .flags = args.flags,
- .pidfd = u64_to_user_ptr(args.pidfd),
- .child_tid = u64_to_user_ptr(args.child_tid),
- .parent_tid = u64_to_user_ptr(args.parent_tid),
- .exit_signal = args.exit_signal,
- .stack = args.stack,
- .stack_size = args.stack_size,
- .tls = args.tls,
- .set_tid_size = args.set_tid_size,
- .cgroup = args.cgroup,
+ .flags = args.flags,
+ .pidfd = u64_to_user_ptr(args.pidfd),
+ .child_tid = u64_to_user_ptr(args.child_tid),
+ .parent_tid = u64_to_user_ptr(args.parent_tid),
+ .exit_signal = args.exit_signal,
+ .stack = args.stack,
+ .stack_size = args.stack_size,
+ .tls = args.tls,
+ .set_tid_size = args.set_tid_size,
+ .cgroup = args.cgroup,
+ .shadow_stack_size = args.shadow_stack_size,
};
if (args.set_tid &&
@@ -3150,6 +3158,27 @@ static inline bool clone3_stack_valid(struct kernel_clone_args *kargs)
return true;
}
+/**
+ * clone3_shadow_stack_valid - check and prepare shadow stack
+ * @kargs: kernel clone args
+ *
+ * Verify that shadow stacks are only enabled if supported.
+ */
+static inline bool clone3_shadow_stack_valid(struct kernel_clone_args *kargs)
+{
+ if (!kargs->shadow_stack_size)
+ return true;
+
+ if (kargs->shadow_stack_size < SHADOW_STACK_SIZE_MIN)
+ return false;
+
+ if (kargs->shadow_stack_size > rlimit(RLIMIT_STACK))
+ return false;
+
+ /* The architecture must check support on the specific machine */
+ return IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK);
+}
+
static bool clone3_args_valid(struct kernel_clone_args *kargs)
{
/* Verify that no unknown flags are passed along. */
@@ -3172,7 +3201,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs)
kargs->exit_signal)
return false;
- if (!clone3_stack_valid(kargs))
+ if (!clone3_stack_valid(kargs) || !clone3_shadow_stack_valid(kargs))
return false;
return true;
--
2.30.2
The clone_args structure is extensible, with the syscall passing in the
length of the structure. Inside the kernel we use copy_struct_from_user()
to read the struct, but this has the unfortunate side effect of silently
accepting some overrun in the structure size provided the extra data is
all zeros. This means that we can't discover the clone3() features that
the running kernel supports by simply probing with various struct sizes.
We need to check this for the benefit of test systems which run newer
kselftests on old kernels.
Add a flag which can be set on a test to indicate that clone3() may return
-E2BIG due to the use of newer struct versions. Currently no tests need
this, but it will become an issue for testing clone3() support for shadow
stacks; the support for shadow stacks is already present on x86.
Signed-off-by: Mark Brown <[email protected]>
---
tools/testing/selftests/clone3/clone3.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index 1108bd8e36d6..6adbfd14c841 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -39,6 +39,7 @@ struct test {
size_t size;
size_function size_function;
int expected;
+ bool e2big_valid;
enum test_mode test_mode;
filter_function filter;
};
@@ -141,6 +142,11 @@ static void test_clone3(const struct test *test)
ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n",
getpid(), ret, test->expected);
if (ret != test->expected) {
+ if (test->e2big_valid && ret == -E2BIG) {
+ ksft_print_msg("Test reported -E2BIG\n");
+ ksft_test_result_skip("%s\n", test->name);
+ return;
+ }
ksft_print_msg(
"[%d] Result (%d) is different than expected (%d)\n",
getpid(), ret, test->expected);
--
2.30.2
Add basic test coverage for specifying the shadow stack for a newly
created thread via clone3(), including coverage of the newly extended
argument structure.
In order to facilitate testing on systems where userspace shadow stack
support is not enabled by default, we manually enable shadow stacks on
startup; this is architecture specific due to the use of an arch_prctl()
on x86. Due to interactions with potential userspace locking of features,
we actually detect support for shadow stacks on the running system by
attempting to allocate a shadow stack page during initialisation using
map_shadow_stack(), warning if this succeeds when the enable failed.
Signed-off-by: Mark Brown <[email protected]>
---
tools/testing/selftests/clone3/clone3.c | 117 ++++++++++++++++++++++
tools/testing/selftests/clone3/clone3_selftests.h | 7 ++
2 files changed, 124 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index 6adbfd14c841..dbe52582573c 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -11,6 +11,7 @@
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
+#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/un.h>
@@ -21,6 +22,10 @@
#include "../kselftest.h"
#include "clone3_selftests.h"
+static bool shadow_stack_enabled;
+static bool shadow_stack_supported;
+static size_t max_supported_args_size;
+
enum test_mode {
CLONE3_ARGS_NO_TEST,
CLONE3_ARGS_ALL_0,
@@ -28,6 +33,7 @@ enum test_mode {
CLONE3_ARGS_INVAL_EXIT_SIGNAL_NEG,
CLONE3_ARGS_INVAL_EXIT_SIGNAL_CSIG,
CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG,
+ CLONE3_ARGS_SHADOW_STACK,
};
typedef bool (*filter_function)(void);
@@ -44,6 +50,36 @@ struct test {
filter_function filter;
};
+#ifndef __NR_map_shadow_stack
+#define __NR_map_shadow_stack 453
+#endif
+
+/*
+ * We check for shadow stack support by attempting to use
+ * map_shadow_stack() since features may have been locked by the
+ * dynamic linker resulting in spurious errors when we attempt to
+ * enable on startup. We warn if the enable failed.
+ */
+static void test_shadow_stack_supported(void)
+{
+ long shadow_stack;
+
+ shadow_stack = syscall(__NR_map_shadow_stack, 0, getpagesize(), 0);
+ if (shadow_stack == -1) {
+ ksft_print_msg("map_shadow_stack() not supported\n");
+ } else if ((void *)shadow_stack == MAP_FAILED) {
+ ksft_print_msg("Failed to map shadow stack\n");
+ } else {
+		ksft_print_msg("Shadow stack supported\n");
+ shadow_stack_supported = true;
+
+ if (!shadow_stack_enabled)
+ ksft_print_msg("Mapped but did not enable shadow stack\n");
+
+ munmap((void *)shadow_stack, getpagesize());
+ }
+}
+
static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
{
struct __clone_args args = {
@@ -89,6 +125,9 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
case CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG:
args.exit_signal = 0x00000000000000f0ULL;
break;
+ case CLONE3_ARGS_SHADOW_STACK:
+ args.shadow_stack_size = getpagesize();
+ break;
}
memcpy(&args_ext.args, &args, sizeof(struct __clone_args));
@@ -179,6 +218,26 @@ static bool no_timenamespace(void)
return true;
}
+static bool have_shadow_stack(void)
+{
+ if (shadow_stack_supported) {
+ ksft_print_msg("Shadow stack supported\n");
+ return true;
+ }
+
+ return false;
+}
+
+static bool no_shadow_stack(void)
+{
+ if (!shadow_stack_supported) {
+ ksft_print_msg("Shadow stack not supported\n");
+ return true;
+ }
+
+ return false;
+}
+
static size_t page_size_plus_8(void)
{
return getpagesize() + 8;
@@ -322,16 +381,74 @@ static const struct test tests[] = {
.expected = -EINVAL,
.test_mode = CLONE3_ARGS_NO_TEST,
},
+ {
+ .name = "Shadow stack on system with shadow stack",
+ .flags = CLONE_VM,
+ .size = 0,
+ .expected = 0,
+ .e2big_valid = true,
+ .test_mode = CLONE3_ARGS_SHADOW_STACK,
+ .filter = no_shadow_stack,
+ },
+ {
+ .name = "Shadow stack on system without shadow stack",
+ .flags = CLONE_VM,
+ .size = 0,
+ .expected = -EINVAL,
+ .e2big_valid = true,
+ .test_mode = CLONE3_ARGS_SHADOW_STACK,
+ .filter = have_shadow_stack,
+ },
};
+#ifdef __x86_64__
+#define ARCH_SHSTK_ENABLE 0x5001
+#define ARCH_SHSTK_SHSTK (1ULL << 0)
+
+#define ARCH_PRCTL(arg1, arg2) \
+({ \
+ long _ret; \
+ register long _num asm("eax") = __NR_arch_prctl; \
+ register long _arg1 asm("rdi") = (long)(arg1); \
+ register long _arg2 asm("rsi") = (long)(arg2); \
+ \
+ asm volatile ( \
+ "syscall\n" \
+ : "=a"(_ret) \
+ : "r"(_arg1), "r"(_arg2), \
+ "0"(_num) \
+ : "rcx", "r11", "memory", "cc" \
+ ); \
+ _ret; \
+})
+
+#define ENABLE_SHADOW_STACK
+static inline void enable_shadow_stack(void)
+{
+ int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK);
+ if (ret == 0)
+ shadow_stack_enabled = true;
+}
+
+#endif
+
+#ifndef ENABLE_SHADOW_STACK
+static void enable_shadow_stack(void)
+{
+}
+#endif
+
int main(int argc, char *argv[])
{
size_t size;
int i;
+ enable_shadow_stack();
+
ksft_print_header();
ksft_set_plan(ARRAY_SIZE(tests));
test_clone3_supported();
+ test_shadow_stack_supported();
for (i = 0; i < ARRAY_SIZE(tests); i++)
test_clone3(&tests[i]);
diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h
index 3d2663fe50ba..2e06127091f5 100644
--- a/tools/testing/selftests/clone3/clone3_selftests.h
+++ b/tools/testing/selftests/clone3/clone3_selftests.h
@@ -31,6 +31,13 @@ struct __clone_args {
__aligned_u64 set_tid;
__aligned_u64 set_tid_size;
__aligned_u64 cgroup;
+#ifndef CLONE_ARGS_SIZE_VER2
+#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
+#endif
+ __aligned_u64 shadow_stack_size;
+#ifndef CLONE_ARGS_SIZE_VER3
+#define CLONE_ARGS_SIZE_VER3 96 /* sizeof fourth published struct */
+#endif
};
static pid_t sys_clone3(struct __clone_args *args, size_t size)
--
2.30.2
>--- a/include/uapi/linux/sched.h
>+++ b/include/uapi/linux/sched.h
>@@ -84,6 +84,8 @@
> * kernel's limit of nested PID namespaces.
> * @cgroup: If CLONE_INTO_CGROUP is specified set this to
> * a file descriptor for the cgroup.
>+ * @shadow_stack_size: Specify the size of the shadow stack to allocate
>+ * for the child process.
> *
> * The structure is versioned by size and thus extensible.
> * New struct members must go at the end of the struct and
>@@ -101,12 +103,14 @@ struct clone_args {
> __aligned_u64 set_tid;
> __aligned_u64 set_tid_size;
> __aligned_u64 cgroup;
>+ __aligned_u64 shadow_stack_size;
> };
> #endif
>
> #define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
> #define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
> #define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
>+#define CLONE_ARGS_SIZE_VER3 96 /* sizeof fourth published struct */
>
> /*
> * Scheduling policies
>diff --git a/kernel/fork.c b/kernel/fork.c
>index 10917c3e1f03..35131acd43d2 100644
>--- a/kernel/fork.c
>+++ b/kernel/fork.c
>@@ -121,6 +121,11 @@
> */
> #define MAX_THREADS FUTEX_TID_MASK
>
>+/*
>+ * Require that shadow stacks can store at least one element
>+ */
>+#define SHADOW_STACK_SIZE_MIN 8
nit:
Sorry, should've mentioned it earlier.
Can this be "#define SHADOW_STACK_SIZE_MIN sizeof(unsigned long)"
>+
> /*
> * Protected counters by write_lock_irq(&tasklist_lock)
> */
>@@ -3067,7 +3072,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
> CLONE_ARGS_SIZE_VER1);
> BUILD_BUG_ON(offsetofend(struct clone_args, cgroup) !=
> CLONE_ARGS_SIZE_VER2);
>- BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER2);
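[Editor's note: for reference, the uapi change quoted above takes struct clone_args from 88 bytes (VER2) to 96 bytes (VER3). A quick userspace mirror of the assumed layout (field names from the patch, __aligned_u64 modelled as uint64_t) makes the size arithmetic checkable; this is a sketch, not the real uapi header:]

```c
#include <stdint.h>
#include <stddef.h>

/* Mirror of struct clone_args as extended by the patch above; the
 * shadow_stack_size member is the addition under discussion. */
struct clone_args_v3 {
	uint64_t flags;
	uint64_t pidfd;
	uint64_t child_tid;
	uint64_t parent_tid;
	uint64_t exit_signal;
	uint64_t stack;
	uint64_t stack_size;
	uint64_t tls;
	uint64_t set_tid;
	uint64_t set_tid_size;
	uint64_t cgroup;
	uint64_t shadow_stack_size;	/* new in CLONE_ARGS_SIZE_VER3 */
};

/* 11 x 8 bytes of existing members = 88 = CLONE_ARGS_SIZE_VER2 */
size_t clone_args_v3_size(void)
{
	return sizeof(struct clone_args_v3);
}

size_t shadow_stack_size_offset(void)
{
	return offsetof(struct clone_args_v3, shadow_stack_size);
}
```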
Hi Mark,
Thanks for putting this together and sorry it took me some time to catch
up (well, still not fully, so rather more questions below).
On Tue, Nov 28, 2023 at 06:22:38PM +0000, Mark Brown wrote:
> Since clone3() is readily extensible let's add support for specifying a
> shadow stack when creating a new thread or process in a similar manner
> to how the normal stack is specified, keeping the current implicit
> allocation behaviour if one is not specified either with clone3() or
> through the use of clone(). Unlike normal stacks only the shadow stack
> size is specified, similar issues to those that lead to the creation of
> map_shadow_stack() apply.
My hope when looking at the arm64 patches was that we can completely
avoid the kernel allocation/deallocation of the shadow stack since it
doesn't need to do this for the normal stack either. Could someone
please summarise why we dropped the shadow stack pointer after v1? IIUC
there was a potential security argument but I don't think it was a very
strong one. Also what's the threat model for this feature? I thought
it's mainly mitigating stack corruption. If some rogue code can do
syscalls, we have bigger problems than clone3() taking a shadow stack
pointer.
My (probably wrong) mental model was that libc can do an mmap() for
normal stack, a map_shadow_stack() for the shadow one and invoke
clone3() with both these pointers and sizes. There is an overhead of an
additional syscall but if some high-performance app needs to spawn
threads quickly, it would most likely do some pooling.
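[Editor's note: a minimal sketch of that mental model. The __NR_map_shadow_stack number (453) and a flags value of 0 are assumptions taken from the in-flight patches; on kernels without the syscall the helper simply falls back to no shadow stack:]

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453	/* assumed syscall number */
#endif

/* Sketch of the hypothetical libc flow: allocate the normal stack with
 * mmap(), the shadow stack with map_shadow_stack(), then (not shown)
 * hand both to clone3().  Returns -1 only if the normal stack mmap()
 * fails; *shstk is NULL when shadow stacks are unsupported. */
int alloc_thread_stacks(size_t stack_size, void **stack, void **shstk)
{
	*stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
	if (*stack == MAP_FAILED)
		return -1;

	/* flags = 0: no switch token, no top-of-stack marker */
	long ret = syscall(__NR_map_shadow_stack, 0, stack_size, 0);
	if (ret == -1) {
		*shstk = NULL;	/* no kernel/HW support: fall back */
		return 0;
	}
	*shstk = (void *)ret;
	return 0;
}
```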
I'm not against clone3() getting a shadow_stack_size argument but asking
some more questions. If we won't pass a pointer as well, is there any
advantage in expanding this syscall vs a specific prctl() option? Do we
need a different size per thread or do all threads have the same shadow
stack size? A new RLIMIT doesn't seem to map well though, it is more
like an upper limit rather than a fixed/default size (glibc I think uses
it for thread stacks but bionic or musl don't AFAIK).
Another dumb question on arm64 - is GCSPR_EL0 writeable by the user? If
yes, can the libc wrapper for threads allocate a shadow stack via
map_shadow_stack() and set it up in the thread initialisation handler
before invoking the thread function?
Thanks.
--
Catalin
On Thu, Nov 30, 2023 at 07:00:58PM +0000, Catalin Marinas wrote:
> My hope when looking at the arm64 patches was that we can completely
> avoid the kernel allocation/deallocation of the shadow stack since it
> doesn't need to do this for the normal stack either. Could someone
> please summarise why we dropped the shadow stack pointer after v1? IIUC
> there was a potential security argument but I don't think it was a very
> strong one. Also what's the threat model for this feature? I thought
> it's mainly mitigating stack corruption. If some rogue code can do
> syscalls, we have bigger problems than clone3() taking a shadow stack
> pointer.
As well as preventing/detecting corruption of the in-memory stack,
shadow stacks also ensure that any return instruction unwinds a prior
call instruction, and that the returns are done in the opposite order
to the calls. This forces usage of the stack: any value we attempt to
RET to is going to be checked against the top of the shadow stack, which
makes chaining returns together as a substitute for branches harder.
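[Editor's note: a toy userspace model of that checking, purely illustrative and not the hardware mechanism: calls record the return address on a side stack, and a return is accepted only if it matches, and pops, the most recent entry, which enforces LIFO call/return pairing:]

```c
#include <stdbool.h>
#include <stddef.h>

#define SHSTK_DEPTH 64

/* Toy model: a side stack of recorded return addresses. */
static unsigned long shstk[SHSTK_DEPTH];
static size_t shstk_top;

/* On call: record the return address on the shadow stack. */
bool model_call(unsigned long ret_addr)
{
	if (shstk_top == SHSTK_DEPTH)
		return false;	/* overflow would fault on real hardware */
	shstk[shstk_top++] = ret_addr;
	return true;
}

/* On return: only the most recently recorded address is acceptable,
 * so returns must pair with calls in opposite order - chaining returns
 * to arbitrary gadgets fails the comparison. */
bool model_ret(unsigned long target)
{
	if (shstk_top == 0 || shstk[shstk_top - 1] != target)
		return false;	/* control protection fault */
	shstk_top--;
	return true;
}
```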
The concern Rick raised was that allowing user to pick the exact shadow
stack pointer would allow userspace to corrupt or reuse the stack of an
existing thread by starting a new thread with the shadow stack pointing
into the existing shadow stack of that thread. While in isolation
that's not too much more than what userspace could just do directly
anyway it might compose with other issues to something more "interesting"
(eg, I'd be a bit concerned about overlap with pkeys/POE though I've not
thought through potential uses in detail).
> I'm not against clone3() getting a shadow_stack_size argument but asking
> some more questions. If we won't pass a pointer as well, is there any
> advantage in expanding this syscall vs a specific prctl() option? Do we
> need a different size per thread or do all threads have the same shadow
> stack size? A new RLIMIT doesn't seem to map well though, it is more
> like an upper limit rather than a fixed/default size (glibc I think uses
> it for thread stacks but bionic or musl don't AFAIK).
I don't know what the userspace patterns are likely to be here, it's
possible a single value for each process might be fine but I couldn't
say that confidently. I agree that a RLIMIT does seem like a poor fit.
As well as the actual configuration of the size, the other thing that we
gain is that instead of relying purely on heuristics to determine if we
need to allocate a new shadow stack for the new thread, we allow
userspace to explicitly request a new shadow stack. There was some
corner case with IIRC posix_spawn() mentioned where the heuristics
aren't what we want, for example.
> Another dumb question on arm64 - is GCSPR_EL0 writeable by the user? If
> yes, can the libc wrapper for threads allocate a shadow stack via
> map_shadow_stack() and set it up in the thread initialisation handler
> before invoking the thread function?
No, GCSPR_EL0 can only be changed by EL0 through BL, RET and the
new GCS instructions (push/pop and stack switch). Push is optional -
userspace has to explicitly request that it be enabled and this could be
prevented through seccomp or some other LSM. The stack switch
instructions require a token at the destination address which must
either be written by a higher EL or will be written in the process of
switching away from a stack so you can switch back. Unless I've missed
one every mechanism for userspace to update GCSPR_EL0 will do a GCS
memory access so providing guard pages have been allocated wrapping to a
different stack will be prevented.
We would need a syscall to allow GCSPR_EL0 to be written.
On Thu, 2023-11-30 at 21:51 +0000, Mark Brown wrote:
> On Thu, Nov 30, 2023 at 07:00:58PM +0000, Catalin Marinas wrote:
>
> > My hope when looking at the arm64 patches was that we can
> > completely
> > avoid the kernel allocation/deallocation of the shadow stack since
> > it
> > doesn't need to do this for the normal stack either. Could someone
> > please summarise why we dropped the shadow stack pointer after v1?
> > IIUC
> > there was a potential security argument but I don't think it was a
> > very
> > strong one. Also what's the threat model for this feature? I
> > thought
> > it's mainly mitigating stack corruption. If some rogue code can do
> > syscalls, we have bigger problems than clone3() taking a shadow
> > stack
> > pointer.
>
> As well as preventing/detecting corruption of the in memory stack
> shadow
> stacks are also ensuring that any return instructions are unwinding a
> prior call instruction, and that the returns are done in opposite
> order
> to the calls. This forces usage of the stack - any value we attempt
> to
> RET to is going to be checked against the top of the shadow stack
> which
> makes chaining returns together as a substitute for branches harder.
>
> The concern Rick raised was that allowing user to pick the exact
> shadow
> stack pointer would allow userspace to corrupt or reuse the stack of
> an
> existing thread by starting a new thread with the shadow stack
> pointing
> into the existing shadow stack of that thread. While in isolation
> that's not too much more than what userspace could just do directly
> anyway it might compose with other issues to something more
> "interesting"
> (eg, I'd be a bit concerned about overlap with pkeys/POE though I've
> not
> thought through potential uses in detail).
I think it is open for userspace customization. The kernel tries to
leave the option to lock things down as much as it can (partly because
it's not clear how all the userspace tradeoffs will shake out).
In the past, we had talked about allowing a set SSP (GCSPR) prctl() to
help with some of the compatibility gaps (longjmp() between stacks,
etc). If we loosened things up a bit this could help there, but it kind
of defeats the purpose a little, of the token checking stuff built into
these features at the HW level. A super-stack-canary mode might be nice
for people who just want to flip a switch on existing apps without
checking them, or people who want to do tracing and don't care about
security. But, I also wouldn't be surprised if some high security
applications decide to block map_shadow_stack all together to lock
threads to their own shadow stacks.
So I kind of like leaning towards leaving the option to lock things
down more when we can. Like Mark was getting at, we don't know all the
ways shadow stacks will get attacked yet. So turning it around, why not
let the shadow stack get allocated by the kernel? It makes the kernel
code/complexity smaller, are there any other benefits?
>
> > I'm not against clone3() getting a shadow_stack_size argument but
> > asking
> > some more questions. If we won't pass a pointer as well, is there
> > any
> > advantage in expanding this syscall vs a specific prctl() option?
> > Do we
> > need a different size per thread or do all threads have the same
> > shadow
> > stack size? A new RLIMIT doesn't seem to map well though, it is
> > more
> > like an upper limit rather than a fixed/default size (glibc I think
> > uses
> > it for thread stacks but bionic or musl don't AFAIK).
>
> I don't know what the userspace patterns are likely to be here, it's
> possible a single value for each process might be fine but I couldn't
> say that confidently. I agree that a RLIMIT does seem like a poor
> fit.
>
> As well as the actual configuration of the size the other thing that
> we
> gain is that as well as relying on heuristics to determine if we need
> to
> allocate a new shadow stack for the new thread we allow userspace to
> explicitly request a new shadow stack. There was some corner case
> with
> IIRC posix_nspawn() mentioned where the heuristics aren't what we
> want
> for example.
Can't posix_spawn() pass in a shadow stack size into clone3 to get a
new shadow stack after this series?
>
> > Another dumb question on arm64 - is GCSPR_EL0 writeable by the
> > user? If
> > yes, can the libc wrapper for threads allocate a shadow stack via
> > map_shadow_stack() and set it up in the thread initialisation
> > handler
> > before invoking the thread function?
>
> No, GCSPR_EL0 can only be changed by EL0 through BL, RET and the
> new GCS instructions (push/pop and stack switch). Push is optional -
> userspace has to explicitly request that it be enabled and this could
> be
> prevented through seccomp or some other LSM. The stack switch
> instructions require a token at the destination address which must
> either be written by a higher EL or will be written in the process of
> switching away from a stack so you can switch back. Unless I've
> missed
> one every mechanism for userspace to update GCSPR_EL0 will do a GCS
> memory access so providing guard pages have been allocated wrapping
> to a
> different stack will be prevented.
>
> We would need a syscall to allow GCSPR_EL0 to be written.
I think the problem with doing this is signals. If a signal is
delivered to the new thread, then it could push to the old shadow stack
before userspace gets a chance to switch. So the thread needs to start
on a new shadow/stack.
The 11/30/2023 21:51, Mark Brown wrote:
> The concern Rick raised was that allowing user to pick the exact shadow
> stack pointer would allow userspace to corrupt or reuse the stack of an
> existing thread by starting a new thread with the shadow stack pointing
> into the existing shadow stack of that thread. While in isolation
note that this can be prevented by map_shadow_stack adding
a token that clone3 verifies.
> that's not too much more than what userspace could just do directly
> anyway it might compose with other issues to something more "interesting"
> (eg, I'd be a bit concerned about overlap with pkeys/POE though I've not
> thought through potential uses in detail).
>
> > I'm not against clone3() getting a shadow_stack_size argument but asking
> > some more questions. If we won't pass a pointer as well, is there any
> > advantage in expanding this syscall vs a specific prctl() option? Do we
> > need a different size per thread or do all threads have the same shadow
> > stack size? A new RLIMIT doesn't seem to map well though, it is more
> > like an upper limit rather than a fixed/default size (glibc I think uses
> > it for thread stacks but bionic or musl don't AFAIK).
>
> I don't know what the userspace patterns are likely to be here, it's
> possible a single value for each process might be fine but I couldn't
> say that confidently. I agree that a RLIMIT does seem like a poor fit.
user code can control the thread stack size per thread
and different size per thread happens in practice (even
in the libc e.g. timer_create with SIGEV_THREAD uses
different stack size than the DNS resolver helper thread).
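[Editor's note: the per-thread sizing Szabolcs describes is already routine for normal stacks via pthread attributes; a per-thread shadow stack size would presumably follow the same shape. A sketch of the existing mechanism:]

```c
#include <pthread.h>
#include <stddef.h>

/* Trivial thread body used by the example below. */
void *thread_noop(void *p)
{
	return p;
}

/* Spawn a thread with a caller-chosen stack size, as libc does today
 * for normal stacks.  Different callers pass different sizes, e.g. a
 * small stack for a timer_create(SIGEV_THREAD) helper. */
int spawn_with_stack_size(pthread_t *tid, size_t stack_size,
			  void *(*fn)(void *), void *arg)
{
	pthread_attr_t attr;
	int ret;

	ret = pthread_attr_init(&attr);
	if (ret)
		return ret;
	ret = pthread_attr_setstacksize(&attr, stack_size);
	if (!ret)
		ret = pthread_create(tid, &attr, fn, arg);
	pthread_attr_destroy(&attr);
	return ret;
}
```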
On Fri, Dec 01, 2023 at 11:50:25AM +0000, Szabolcs Nagy wrote:
> The 11/30/2023 21:51, Mark Brown wrote:
> > The concern Rick raised was that allowing user to pick the exact shadow
> > stack pointer would allow userspace to corrupt or reuse the stack of an
> > existing thread by starting a new thread with the shadow stack pointing
> > into the existing shadow stack of that thread. While in isolation
> note that this can be prevented by map_shadow_stack adding
> a token that clone3 verifies.
That would make it impossible to reuse the shadow stack once the token
is overwritten which does move the needle more towards making doing the
mapping separately pure overhead.
On Thu, Nov 30, 2023 at 11:37:42PM +0000, Edgecombe, Rick P wrote:
> On Thu, 2023-11-30 at 21:51 +0000, Mark Brown wrote:
> > On Thu, Nov 30, 2023 at 07:00:58PM +0000, Catalin Marinas wrote:
> > explicitly request a new shadow stack. There was some corner case
> > with
> > IIRC posix_spawn() mentioned where the heuristics aren't what we
> > want
> > for example.
> Can't posix_spawn() pass in a shadow stack size into clone3 to get a
> new shadow stack after this series?
Yes, the above was addressing Catalin's suggestion that we add stack
size control separately to clone3() instead - doing that would remove
the ability to explicitly request a new stack unless we add a flag to
clone3() at which point we're back to modifying clone3() anyway.
> > > Another dumb question on arm64 - is GCSPR_EL0 writeable by the
> > > user? If
> > > yes, can the libc wrapper for threads allocate a shadow stack via
> > > map_shadow_stack() and set it up in the thread initialisation
> > > handler
> > > before invoking the thread function?
> > We would need a syscall to allow GCSPR_EL0 to be written.
> I think the problem with doing this is signals. If a signal is
> delivered to the new thread, then it could push to the old shadow stack
> before userspace gets a chance to switch. So the thread needs to start
> on a new shadow/stack.
That's an issue, plus using a syscall just wouldn't work with a security
model that locked down writes to the pointer, which does seem like
something people would reasonably want to deploy.
Thanks all for the clarification.
On Thu, Nov 30, 2023 at 09:51:04PM +0000, Mark Brown wrote:
> On Thu, Nov 30, 2023 at 07:00:58PM +0000, Catalin Marinas wrote:
> > My hope when looking at the arm64 patches was that we can completely
> > avoid the kernel allocation/deallocation of the shadow stack since it
> > doesn't need to do this for the normal stack either. Could someone
> > please summarise why we dropped the shadow stack pointer after v1? IIUC
> > there was a potential security argument but I don't think it was a very
> > strong one. Also what's the threat model for this feature? I thought
> > it's mainly mitigating stack corruption. If some rogue code can do
> > syscalls, we have bigger problems than clone3() taking a shadow stack
> > pointer.
>
> As well as preventing/detecting corruption of the in memory stack shadow
> stacks are also ensuring that any return instructions are unwinding a
> prior call instruction, and that the returns are done in opposite order
> to the calls. This forces usage of the stack - any value we attempt to
> RET to is going to be checked against the top of the shadow stack which
> makes chaining returns together as a substitute for branches harder.
>
> The concern Rick raised was that allowing user to pick the exact shadow
> stack pointer would allow userspace to corrupt or reuse the stack of an
> existing thread by starting a new thread with the shadow stack pointing
> into the existing shadow stack of that thread. While in isolation
> that's not too much more than what userspace could just do directly
> anyway it might compose with other issues to something more "interesting"
> (eg, I'd be a bit concerned about overlap with pkeys/POE though I've not
> thought through potential uses in detail).
Another concern I had was that map_shadow_stack() currently takes
a flags arg (though only one flag) while the clone/clone3() allocate the
shadow stack with an implicit configuration (other than size). Would
map_shadow_stack() ever get new flags that we may also need to set on
the default thread shadow stack (e.g. a new permission type)? At that
point it would be better if clone3() allowed a shadow stack pointer so
that any specific attributes would be limited to map_shadow_stack().
If that's only theoretical, I'm fine to go ahead with a size-only
argument for clone3(). We could also add the pointer now and allocate
the stack if NULL or reuse it if not, maybe with some prctl to allow
this. It might be overengineering and we'd never use such feature
though.
> > I'm not against clone3() getting a shadow_stack_size argument but asking
> > some more questions. If we won't pass a pointer as well, is there any
> > advantage in expanding this syscall vs a specific prctl() option? Do we
> > need a different size per thread or do all threads have the same shadow
> > stack size? A new RLIMIT doesn't seem to map well though, it is more
> > like an upper limit rather than a fixed/default size (glibc I think uses
> > it for thread stacks but bionic or musl don't AFAIK).
>
> I don't know what the userspace patterns are likely to be here, it's
> possible a single value for each process might be fine but I couldn't
> say that confidently. I agree that a RLIMIT does seem like a poor fit.
Szabolcs clarified that there are cases where we need the size per
thread.
> As well as the actual configuration of the size the other thing that we
> gain is that as well as relying on heuristics to determine if we need to
> allocate a new shadow stack for the new thread we allow userspace to
> explicitly request a new shadow stack.
But the reverse is not true - we can't use clone3() to create a thread
without a shadow stack AFAICT.
> > Another dumb question on arm64 - is GCSPR_EL0 writeable by the user? If
> > yes, can the libc wrapper for threads allocate a shadow stack via
> > map_shadow_stack() and set it up in the thread initialisation handler
> > before invoking the thread function?
>
> No, GCSPR_EL0 can only be changed by EL0 through BL, RET and the
> new GCS instructions (push/pop and stack switch). Push is optional -
> userspace has to explicitly request that it be enabled and this could be
> prevented through seccomp or some other LSM. The stack switch
> instructions require a token at the destination address which must
> either be written by a higher EL or will be written in the process of
> switching away from a stack so you can switch back. Unless I've missed
> one every mechanism for userspace to update GCSPR_EL0 will do a GCS
> memory access so providing guard pages have been allocated wrapping to a
> different stack will be prevented.
>
> We would need a syscall to allow GCSPR_EL0 to be written.
Good point, I thought I must be missing something.
--
Catalin
On Fri, Dec 01, 2023 at 05:30:22PM +0000, Catalin Marinas wrote:
> Another concern I had was that map_shadow_stack() currently takes
> a flags arg (though only one flag) while the clone/clone3() allocate the
> shadow stack with an implicit configuration (other than size). Would
> map_shadow_stack() ever get new flags that we may also need to set on
> the default thread shadow stack (e.g. a new permission type)? At that
> point it would be better if clone3() allowed a shadow stack pointer so
> that any specific attributes would be limited to map_shadow_stack().
The flags argument currently only lets you specify if a stack switch
token should be written (which is not relevant for the clone3() case)
and if a top of stack marker should be included (which since the top of
stack marker is NULL for arm64 only has perceptible effect if a token is
being written). I'm not particularly anticipating any further additions,
though never say never.
> If that's only theoretical, I'm fine to go ahead with a size-only
> argument for clone3(). We could also add the pointer now and allocate
> the stack if NULL or reuse it if not, maybe with some prctl to allow
> this. It might be overengineering and we'd never use such feature
> though.
Yeah, it seems like a bunch of work and interface to test that I'm not
convinced anyone would actually use.
> > As well as the actual configuration of the size the other thing that we
> > gain is that as well as relying on heuristics to determine if we need to
> > allocate a new shadow stack for the new thread we allow userspace to
> > explicitly request a new shadow stack.
> But the reverse is not true - we can't use clone3() to create a thread
> without a shadow stack AFAICT.
Right. Given the existing implicit-allocation-only x86 ABI we'd need to
retrofit that by adding an explicit "no shadow stack" flag. That is
possible, though I'm having a hard time seeing the use case for it.
On Tue, 2023-11-28 at 18:22 +0000, Mark Brown wrote:
> +
> +#define ENABLE_SHADOW_STACK
> +static inline void enable_shadow_stack(void)
> +{
> + int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK);
> + if (ret == 0)
> + shadow_stack_enabled = true;
> +}
> +
> +#endif
> +
> +#ifndef ENABLE_SHADOW_STACK
> +static void enable_shadow_stack(void)
> +{
> +}
> +#endif
Without this diff, the test crashed for me on a shadow stack system:
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index dbe52582573c..3236d97ed261 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -423,7 +423,7 @@ static const struct test tests[] = {
})
#define ENABLE_SHADOW_STACK
-static inline void enable_shadow_stack(void)
+static inline __attribute__((always_inline)) void enable_shadow_stack(void)
{
int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK);
if (ret == 0)
The fix works by making sure control flow never returns to before the
point shadow stack was enabled. Otherwise it will underflow the shadow
stack.
But I wonder if the clone3 test should get its shadow stack enabled the
conventional elf bit way. So if it's all there (HW, kernel, glibc) then
the test will run with shadow stack. Otherwise the test will run
without shadow stack.
The other reason is that the manual enabling in the x86 selftests'
shadow stack test is designed to work without a shadow-stack-enabled
glibc and has to be specially crafted to work around the missing
support. I'm not sure the more generic selftests should have to know
how to do this. So what about something like this instead:
diff --git a/tools/testing/selftests/clone3/Makefile b/tools/testing/selftests/clone3/Makefile
index 84832c369a2e..792bc9685c82 100644
--- a/tools/testing/selftests/clone3/Makefile
+++ b/tools/testing/selftests/clone3/Makefile
@@ -2,6 +2,13 @@
CFLAGS += -g -std=gnu99 $(KHDR_INCLUDES)
LDLIBS += -lcap
+ifeq ($(shell uname -m),x86_64)
+CAN_BUILD_WITH_SHSTK := $(shell ../x86/check_cc.sh gcc ../x86/trivial_program.c -mshstk -fcf-protection)
+ifeq ($(CAN_BUILD_WITH_SHSTK),1)
+CFLAGS += -mshstk -fcf-protection=return
+endif
+endif
+
TEST_GEN_PROGS := clone3 clone3_clear_sighand clone3_set_tid \
clone3_cap_checkpoint_restore
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index dbe52582573c..eff5e8d5a5a6 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -23,7 +23,6 @@
#include "clone3_selftests.h"
static bool shadow_stack_enabled;
-static bool shadow_stack_supported;
static size_t max_supported_args_size;
enum test_mode {
@@ -50,36 +49,6 @@ struct test {
filter_function filter;
};
-#ifndef __NR_map_shadow_stack
-#define __NR_map_shadow_stack 453
-#endif
-
-/*
- * We check for shadow stack support by attempting to use
- * map_shadow_stack() since features may have been locked by the
- * dynamic linker resulting in spurious errors when we attempt to
- * enable on startup. We warn if the enable failed.
- */
-static void test_shadow_stack_supported(void)
-{
- long shadow_stack;
-
- shadow_stack = syscall(__NR_map_shadow_stack, 0, getpagesize(), 0);
- if (shadow_stack == -1) {
- ksft_print_msg("map_shadow_stack() not supported\n");
- } else if ((void *)shadow_stack == MAP_FAILED) {
- ksft_print_msg("Failed to map shadow stack\n");
- } else {
- ksft_print_msg("Shadow stack supportd\n");
- shadow_stack_supported = true;
-
- if (!shadow_stack_enabled)
- ksft_print_msg("Mapped but did not enable shadow stack\n");
-
- munmap((void *)shadow_stack, getpagesize());
- }
-}
-
static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
{
struct __clone_args args = {
@@ -220,7 +189,7 @@ static bool no_timenamespace(void)
static bool have_shadow_stack(void)
{
- if (shadow_stack_supported) {
+ if (shadow_stack_enabled) {
ksft_print_msg("Shadow stack supported\n");
return true;
}
@@ -230,7 +199,7 @@ static bool have_shadow_stack(void)
static bool no_shadow_stack(void)
{
- if (!shadow_stack_supported) {
+ if (!shadow_stack_enabled) {
ksft_print_msg("Shadow stack not supported\n");
return true;
}
@@ -402,38 +371,18 @@ static const struct test tests[] = {
};
#ifdef __x86_64__
-#define ARCH_SHSTK_ENABLE 0x5001
+#define ARCH_SHSTK_STATUS 0x5005
#define ARCH_SHSTK_SHSTK (1ULL << 0)
-#define ARCH_PRCTL(arg1, arg2) \
-({ \
- long _ret; \
- register long _num asm("eax") = __NR_arch_prctl; \
- register long _arg1 asm("rdi") = (long)(arg1); \
- register long _arg2 asm("rsi") = (long)(arg2); \
- \
- asm volatile ( \
- "syscall\n" \
- : "=a"(_ret) \
- : "r"(_arg1), "r"(_arg2), \
- "0"(_num) \
- : "rcx", "r11", "memory", "cc" \
- ); \
- _ret; \
-})
-
-#define ENABLE_SHADOW_STACK
-static inline void enable_shadow_stack(void)
+static inline __attribute__((always_inline)) void check_shadow_stack(void)
{
- int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK);
- if (ret == 0)
- shadow_stack_enabled = true;
+ unsigned long status = 0;
+
+ syscall(SYS_arch_prctl, ARCH_SHSTK_STATUS, &status);
+ shadow_stack_enabled = status & ARCH_SHSTK_SHSTK;
}
-
-#endif
-
-#ifndef ENABLE_SHADOW_STACK
-static void enable_shadow_stack(void)
+#else /* __x86_64__ */
+static void check_shadow_stack(void)
{
}
#endif
@@ -443,12 +392,11 @@ int main(int argc, char *argv[])
size_t size;
int i;
- enable_shadow_stack();
+ check_shadow_stack();
ksft_print_header();
ksft_set_plan(ARRAY_SIZE(tests));
test_clone3_supported();
- test_shadow_stack_supported();
for (i = 0; i < ARRAY_SIZE(tests); i++)
test_clone3(&tests[i]);
On Tue, 2023-11-28 at 18:22 +0000, Mark Brown wrote:
> -unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
> unsigned long clone_flags,
> - unsigned long stack_size)
> +unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
> + const struct kernel_clone_args
> *args)
> {
> struct thread_shstk *shstk = &tsk->thread.shstk;
> + unsigned long clone_flags = args->flags;
> unsigned long addr, size;
>
> /*
> * If shadow stack is not enabled on the new thread, skip any
> - * switch to a new shadow stack.
> + * implicit switch to a new shadow stack and reject attempts to
> + * explicitly specify one.
> */
> - if (!features_enabled(ARCH_SHSTK_SHSTK))
> - return 0;
> + if (!features_enabled(ARCH_SHSTK_SHSTK)) {
> + if (args->shadow_stack_size)
> + return (unsigned long)ERR_PTR(-EINVAL);
>
> - /*
> - * For CLONE_VFORK the child will share the parents shadow stack.
> - * Make sure to clear the internal tracking of the thread shadow
> - * stack so the freeing logic run for child knows to leave it alone.
> - */
> - if (clone_flags & CLONE_VFORK) {
> - shstk->base = 0;
> - shstk->size = 0;
> return 0;
> }
>
> /*
> - * For !CLONE_VM the child will use a copy of the parents shadow
> - * stack.
> + * If the user specified a shadow stack then do some basic
> + * validation and use it, otherwise fall back to a default
> + * shadow stack size if the clone_flags don't indicate an
> + * allocation is unneeded.
> */
> - if (!(clone_flags & CLONE_VM))
> - return 0;
> + if (args->shadow_stack_size) {
> + size = args->shadow_stack_size;
> + } else {
> + /*
> + * For CLONE_VFORK the child will share the parents
> + * shadow stack. Make sure to clear the internal
> + * tracking of the thread shadow stack so the freeing
> + * logic run for child knows to leave it alone.
> + */
> + if (clone_flags & CLONE_VFORK) {
> + shstk->base = 0;
> + shstk->size = 0;
> + return 0;
> + }
> +
> + /*
> + * For !CLONE_VM the child will use a copy of the
> + * parents shadow stack.
> + */
> + if (!(clone_flags & CLONE_VM))
> + return 0;
> +
> + size = args->stack_size;
> +
> + }
>
> - size = adjust_shstk_size(stack_size);
> + size = adjust_shstk_size(size);
> addr = alloc_shstk(0, size, 0, false);
Hmm. I didn't test this, but in the copy_process(), copy_mm() happens
before this point. So the shadow stack would get mapped in current's MM
(i.e. the parent). So in the !CLONE_VM case with shadow_stack_size!=0
the SSP in the child will be updated to an area that is not mapped in
the child. I think we need to pass tsk->mm into alloc_shstk(). But such
an exotic clone usage does give me pause, regarding whether all of this
is premature.
Otherwise it looked ok from the x86/shstk perspective.
> if (IS_ERR_VALUE(addr))
> return addr;
On Tue, Dec 05, 2023 at 12:10:20AM +0000, Edgecombe, Rick P wrote:
> Without this diff, the test crashed for me on a shadow stack system:
> -static inline void enable_shadow_stack(void)
> +static inline __attribute__((always_inline)) void
doh.
> But I wonder if the clone3 test should get its shadow stack enabled the
> conventional elf bit way. So if it's all there (HW, kernel, glibc) then
> the test will run with shadow stack. Otherwise the test will run
> without shadow stack.
This creates bootstrapping issues if we do it for arm64 where nothing is
merged yet except for the model and EL3 support - in order to get any
test coverage you need to be using an OS with the libc and toolchain
support available and that's not going to be something we can rely on
for a while (and even when things are merged a lot of the CI systems use
Debian). There is a small risk that the toolchain will generate
incompatible code if it doesn't know it's specifically targeting shadow
stacks but the toolchain people didn't seem concerned about that risk
and we've not been running into problems.
It looks like x86 is in better shape here with the userspace having run
ahead of the kernel support, though I'm not 100% clear if everything is
fully lined up? -mshstk -fcf-protection appears to build fine with gcc 8
but I'm a bit less clear on glibc and any ABI variations.
> The other reason is that the shadow stack test in the x86 selftest
> manual enabling is designed to work without a shadow stack enabled
> glibc and has to be specially crafted to work around the missing
> support. I'm not sure the more generic selftests should have to know
> how to do this. So what about something like this instead:
What's the issue with working around the missing support? My
understanding was that there should be no ill effects from repeated
attempts to enable. We could add a check for things already being
enabled
On Tue, Dec 05, 2023 at 12:26:57AM +0000, Edgecombe, Rick P wrote:
> On Tue, 2023-11-28 at 18:22 +0000, Mark Brown wrote:
> > -	size = adjust_shstk_size(stack_size);
> > +	size = adjust_shstk_size(size);
> > 	addr = alloc_shstk(0, size, 0, false);
> Hmm. I didn't test this, but in the copy_process(), copy_mm() happens
> before this point. So the shadow stack would get mapped in current's MM
> (i.e. the parent). So in the !CLONE_VM case with shadow_stack_size!=0
> the SSP in the child will be updated to an area that is not mapped in
> the child. I think we need to pass tsk->mm into alloc_shstk(). But such
> an exotic clone usage does give me pause, regarding whether all of this
> is premature.
Hrm, right. And we then can't use do_mmap() either. I'd be somewhat
tempted to disallow that specific case for now rather than deal with it
though that's not really in the spirit of just always following what the
user asked for.
On Tue, 2023-12-05 at 15:05 +0000, Mark Brown wrote:
> > But I wonder if the clone3 test should get its shadow stack enabled
> > the conventional elf bit way. So if it's all there (HW, kernel,
> > glibc) then the test will run with shadow stack. Otherwise the test
> > will run without shadow stack.
>
> This creates bootstrapping issues if we do it for arm64 where nothing
> is merged yet except for the model and EL3 support - in order to get
> any test coverage you need to be using an OS with the libc and
> toolchain support available and that's not going to be something we
> can rely on for a while (and even when things are merged a lot of the
> CI systems use Debian). There is a small risk that the toolchain will
> generate incompatible code if it doesn't know it's specifically
> targeting shadow stacks but the toolchain people didn't seem
> concerned about that risk and we've not been running into problems.
>
> It looks like x86 is in better shape here with the userspace having
> run ahead of the kernel support, though I'm not 100% clear if
> everything is fully lined up? -mshstk -fcf-protection appears to
> build fine with gcc 8 but I'm a bit less clear on glibc and any ABI
> variations.
Right, you would need a shadow stack enabled compiler too. The
check_cc.sh piece in the Makefile will detect that.
Hmm, I didn't realize you were planning to have the kernel support
upstream before the libc support was in testable shape.
>
> > The other reason is that the shadow stack test in the x86 selftest
> > manual enabling is designed to work without a shadow stack enabled
> > glibc and has to be specially crafted to work around the missing
> > support. I'm not sure the more generic selftests should have to
> > know how to do this. So what about something like this instead:
>
> What's the issue with working around the missing support? My
> understanding was that there should be no ill effects from repeated
> attempts to enable. We could add a check for things already being
> enabled
Normally the loader enables shadow stack and glibc then knows to do
things in special ways when it is successful. If it instead manually
enables in the app:
- The app can't return from main() without disabling shadow stack
beforehand. Luckily this test directly calls exit()
- The app can't do longjmp()
- The app can't do ucontext stuff
- The enabling code needs to be carefully crafted (the inline problem
you hit)
I guess it's not a huge list, and mostly tests will run ok. But it
doesn't seem right to add somewhat hacky shadow stack crud into generic
tests.
So you were planning to enable GCS in this test manually as well? How
many tests were you planning to add it like this?
On Tue, Dec 05, 2023 at 04:01:50PM +0000, Edgecombe, Rick P wrote:
> Hmm, I didn't realize you were planning to have the kernel support
> upstream before the libc support was in testable shape.
It's not a "could someone run it" thing - it's about trying to ensure that
we get coverage from people who are just running the selftests as part
of general testing coverage rather than with the specific goal of
testing this one feature. Even when things start to land there will be
a considerable delay before they filter out so that all the enablement
is in CI systems off the shelf and it'd be good to have coverage in that
interval.
> > What's the issue with working around the missing support? My
> > understanding was that there should be no ill effects from repeated
> > attempts to enable. We could add a check for things already being
> > enabled
> Normally the loader enables shadow stack and glibc then knows to do
> things in special ways when it is successful. If it instead manually
> enables in the app:
> - The app can't return from main() without disabling shadow stack
> beforehand. Luckily this test directly calls exit()
> - The app can't do longjmp()
> - The app can't do ucontext stuff
> - The enabling code needs to be carefully crafted (the inline problem
> you hit)
> I guess it's not a huge list, and mostly tests will run ok. But it
> doesn't seem right to add somewhat hacky shadow stack crud into generic
> tests.
Right, it's a small and fairly easily auditable list - it's more about
the app than the double enable which was what I thought your concern
was. It's a bit annoying definitely and not something we want to do in
general but for something like this where we're adding specific coverage
for API extensions for the feature it seems like a reasonable tradeoff.
If the x86 toolchain/libc support is widely enough deployed (or you just
don't mind any missing coverage) we could use the toolchain support
there and only have the manual enable for arm64, it'd be inconsistent
but not wildly so.
> So you were planning to enable GCS in this test manually as well? How
> many tests were you planning to add it like this?
Yes, the current version of the arm64 series has the equivalent support
for GCS. I was only planning to do this along with adding specific
coverage for shadow stacks/GCS, general stuff that doesn't have any
specific support can get covered as part of system testing with the
toolchain and libc support.
The only case beyond that I've done is some arm64 specific stress tests
which are written as standalone assembler programs, those wouldn't get
enabled by the toolchain anyway and have some chance of catching context
switch or signal handling issues should they occur. It seemed worth it
for the few lines of assembly it takes.
On Tue, 2023-12-05 at 15:51 +0000, Mark Brown wrote:
> On Tue, Dec 05, 2023 at 12:26:57AM +0000, Edgecombe, Rick P wrote:
> > On Tue, 2023-11-28 at 18:22 +0000, Mark Brown wrote:
>
> > > - size = adjust_shstk_size(stack_size);
> > > + size = adjust_shstk_size(size);
> > > addr = alloc_shstk(0, size, 0, false);
>
> > Hmm. I didn't test this, but in the copy_process(), copy_mm()
> > happens before this point. So the shadow stack would get mapped in
> > current's MM (i.e. the parent). So in the !CLONE_VM case with
> > shadow_stack_size!=0 the SSP in the child will be updated to an
> > area that is not mapped in the child. I think we need to pass
> > tsk->mm into alloc_shstk(). But such an exotic clone usage does
> > give me pause, regarding whether all of this is premature.
>
> Hrm, right. And we then can't use do_mmap() either. I'd be somewhat
> tempted to disallow that specific case for now rather than deal with
> it though that's not really in the spirit of just always following
> what the user asked for.
Oh, yea. What a pain. It doesn't seem like we could easily even add a
do_mmap() variant that takes an mm either.
I did a quick logging test on a Fedora userspace. systemd (I think)
appears to do a clone(!CLONE_VM) with a stack passed. So maybe the
combo might actually get used with a shadow_stack_size if it used
clone3 some day. At the same time, fixing clone to mmap() in the child
doesn't seem straightforward at all. Checking with some of our MM
folks, the suggestion was to look at doing the child's shadow stack
mapping in dup_mm() to avoid tripping over complications that happen
when a remote MM becomes more "live".
If we just punt on this combination for now, then the documented rules
for args->shadow_stack_size would be something like:
clone3 will use the parent's shadow stack when CLONE_VM is not present.
If CLONE_VFORK is set then it will use the parent's shadow stack only
when args->shadow_stack_size is non-zero. In the cases when the parent's
shadow stack is not used, args->shadow_stack_size is used for the size
whenever non-zero.
I guess it doesn't seem too overly complicated. But I'm not thinking
any of the options seem great. I'd unhappily lean towards not
supporting shadow_stack_size!=0 && !CLONE_VM for now. But it seems like
there may be a user for the unsupported case, so this would be just
improving things a little and kicking the can down the road. I also
wonder if this is a sign to reconsider the earlier token consuming
design.
On Tue, 2023-12-05 at 16:43 +0000, Mark Brown wrote:
> Right, it's a small and fairly easily auditable list - it's more
> about the app than the double enable which was what I thought your
> concern was. It's a bit annoying definitely and not something we
> want to do in general but for something like this where we're adding
> specific coverage for API extensions for the feature it seems like a
> reasonable tradeoff.
>
> If the x86 toolchain/libc support is widely enough deployed (or you
> just don't mind any missing coverage) we could use the toolchain
> support there and only have the manual enable for arm64, it'd be
> inconsistent but not wildly so.
I'm hoping there is not too much of a gap before the glibc support
starts filtering out. Long term, elf bit enabling is probably the right
thing for the generic tests. Short term, manual enabling is ok with me
if no one else minds. Maybe we could add my "don't do" list as a
comment if we do manual enabling?
I'll have to check your new series, but I also wonder if we could cram
the manual enabling and status checking pieces into some headers and
not have to have "if x86" "if arm" logic in the test themselves.
On Tue, Dec 05, 2023 at 10:23:08PM +0000, Edgecombe, Rick P wrote:
> On Tue, 2023-12-05 at 15:51 +0000, Mark Brown wrote:
> > Hrm, right. And we then can't use do_mmap() either. I'd be somewhat
> > tempted to disallow that specific case for now rather than deal
> > with it though that's not really in the spirit of just always
> > following what the user asked for.
> Oh, yea. What a pain. It doesn't seem like we could easily even add a
> do_mmap() variant that takes an mm either.
> I did a quick logging test on a Fedora userspace. systemd (I think)
> appears to do a clone(!CLONE_VM) with a stack passed. So maybe the
> combo might actually get used with a shadow_stack_size if it used
> clone3 some day. At the same time, fixing clone to mmap() in the child
> doesn't seem straight forward at all. Checking with some of our MM
> folks, the suggestion was to look at doing the child's shadow stack
> mapping in dup_mm() to avoid tripping over complications that happen
> when a remote MM becomes more "live".
Yeah, I can't see anything that looks particularly tasteful.
> If we just punt on this combination for now, then the documented rules
> for args->shadow_stack_size would be something like:
> clone3 will use the parent's shadow stack when CLONE_VM is not present.
> If CLONE_VFORK is set then it will use the parent's shadow stack only
> when args->shadow_stack_size is non-zero. In the cases when the parent's
> shadow stack is not used, args->shadow_stack_size is used for the size
> whenever non-zero.
> I guess it doesn't seem too overly complicated. But I'm not thinking
> any of the options seem great. I'd unhappily lean towards not
Indeed, it's all really hard to get enthusiastic about.
> supporting shadow_stack_size!=0 && !CLONE_VM for now. But it seems like
> there may be a user for the unsupported case, so this would be just
> improving things a little and kicking the can down the road. I also
> wonder if this is a sign to reconsider the earlier token consuming
> design.
In the case where we have !CLONE_VM it should actually be possible to reuse
the token (since the user is in at least some sense the child process
rather than the parent) so it's less pure overhead, providing you don't
mind the children of a given parent all using the same addresses for
their initial shadow stack.
I'll have a poke at the various options and come up with something,
hopefully this month but it's getting a bit busy so might be early
next year instead.
On Tue, Dec 05, 2023 at 10:31:09PM +0000, Edgecombe, Rick P wrote:
> On Tue, 2023-12-05 at 16:43 +0000, Mark Brown wrote:
> > If the x86 toolchain/libc support is widely enough deployed (or you
> > just don't mind any missing coverage) we could use the toolchain
> > support there and only have the manual enable for arm64, it'd be
> > inconsistent but not wildly so.
> I'm hoping there is not too much of a gap before the glibc support
> starts filtering out. Long term, elf bit enabling is probably the right
> thing for the generic tests. Short term, manual enabling is ok with me
> if no one else minds. Maybe we could add my "don't do" list as a
> comment if we do manual enabling?
Probably good to write it up somewhere, yes - it'd also be useful for
anyone off doing their own non-libc things. It did cross my mind to
try to make a document for the generic bit of the ABI for shadow stacks.
> I'll have to check your new series, but I also wonder if we could cram
> the manual enabling and status checking pieces into some headers and
> not have to have "if x86" "if arm" logic in the test themselves.
I did think about that but was worried that a header might encourage
more users doing the hacky thing. OTOH it would mean the arch specific
tests could share the header though so perhaps you're right, I'll take a
look.
On Wed, 29 Nov 2023 at 07:31, Mark Brown <[email protected]> wrote:
> Since clone3() is readily extensible let's add support for specifying a
> shadow stack when creating a new thread or process in a similar manner
> to how the normal stack is specified, keeping the current implicit
> allocation behaviour if one is not specified either with clone3() or
> through the use of clone(). Unlike normal stacks only the shadow stack
> size is specified, similar issues to those that lead to the creation of
> map_shadow_stack() apply.
rr (https://rr-project.org) records program execution and then reruns
it with exactly the same behavior (down to memory contents and
register values). To replay clone() etc in an application using shadow
stacks, we'll need to be able to ensure the shadow stack is mapped at
the same address during the replay run as during the recording run. We
ptrace the replay tasks and have the ability to execute arbitrary
syscalls in them. It sounds like we might be able to make this work by
overriding clone_args::shadow_stack_size to zero in the call to
clone3(), instead having the replay task call map_shadow_stack() to
put the shadow stack in the right place, and then setting its SSP
via ptrace. Will that work?
Thanks,
Rob
--
Su ot deraeppa sah dna Rehtaf eht htiw saw hcihw, efil lanrete eht uoy
ot mialcorp ew dna, ti ot yfitset dna ti nees evah ew; deraeppa efil
eht. Efil fo Drow eht gninrecnoc mialcorp ew siht - dehcuot evah sdnah
ruo dna ta dekool evah ew hcihw, seye ruo htiw nees evah ew hcihw,
draeh evah ew hcihw, gninnigeb eht morf saw hcihw taht.
On Sat, Dec 09, 2023 at 01:59:16PM +1300, Robert O'Callahan wrote:
> overriding clone_args::shadow_stack_size to zero in the call to
> clone3(), instead having the replay task call map_shadow_stack() to
> put the shadow stack in the right place, and then setting its SSP
> via ptrace. Will that work?
That should work with the interface in the current series, yes.