Date: Thu, 23 Nov 2023 11:28:47 +0100
From: Christian Brauner
To: Mark Brown
Cc: "Rick P. Edgecombe", Deepak Gupta, Szabolcs Nagy, "H.J. Lu",
	Florian Weimer, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86@kernel.org, "H. Peter Anvin",
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
	Valentin Schneider, Shuah Khan, linux-kernel@vger.kernel.org,
	Catalin Marinas, Will Deacon, Kees Cook, jannh@google.com,
	linux-kselftest@vger.kernel.org, linux-api@vger.kernel.org
Subject: Re: [PATCH RFT v3 2/5] fork: Add shadow stack support to clone3()
Message-ID: <20231123-derivate-freikarte-6de8984caf85@brauner>
References: <20231120-clone3-shadow-stack-v3-0-a7b8ed3e2acc@kernel.org>
 <20231120-clone3-shadow-stack-v3-2-a7b8ed3e2acc@kernel.org>
In-Reply-To: <20231120-clone3-shadow-stack-v3-2-a7b8ed3e2acc@kernel.org>

On Mon, Nov 20, 2023 at 11:54:30PM +0000, Mark Brown wrote:
> Unlike with the normal stack there is no API for configuring the shadow
> stack for a new thread, instead the kernel will dynamically allocate a new
> shadow stack with the same size as the normal stack. This appears to be due
> to the shadow stack series having been in development since before the more
> extensible clone3() was added rather than anything more deliberate.
>
> Add a parameter to clone3() specifying the size of a shadow stack for
> the newly created process. If no shadow stack is specified then the
> existing implicit allocation behaviour is maintained.
>
> If the architecture does not support shadow stacks the shadow stack size
> parameter must be zero, architectures that do support the feature are
> expected to enforce the same requirement on individual systems that lack
> shadow stack support.
>
> Update the existing x86 implementation to pay attention to the newly added
> arguments, in order to maintain compatibility we use the existing behaviour
> if no shadow stack is specified. Minimal validation is done of the supplied
> parameters, detailed enforcement is left to when the thread is executed.
> Since we are now using more fields from the kernel_clone_args we pass that
> into the shadow stack code rather than individual fields.
>
> Signed-off-by: Mark Brown
> ---
>  arch/x86/include/asm/shstk.h | 11 ++++++---
>  arch/x86/kernel/process.c    |  2 +-
>  arch/x86/kernel/shstk.c      | 59 ++++++++++++++++++++++++++++++--------------
>  include/linux/sched/task.h   |  1 +
>  include/uapi/linux/sched.h   |  4 +++
>  kernel/fork.c                | 22 +++++++++++++++--
>  6 files changed, 74 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
> index 42fee8959df7..8be7b0a909c3 100644
> --- a/arch/x86/include/asm/shstk.h
> +++ b/arch/x86/include/asm/shstk.h
> @@ -6,6 +6,7 @@
>  #include
>  
>  struct task_struct;
> +struct kernel_clone_args;
>  struct ksignal;
>  
>  #ifdef CONFIG_X86_USER_SHADOW_STACK
> @@ -16,8 +17,8 @@ struct thread_shstk {
>  
>  long shstk_prctl(struct task_struct *task, int option, unsigned long arg2);
>  void reset_thread_features(void);
> -unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags,
> -				       unsigned long stack_size);
> +unsigned long shstk_alloc_thread_stack(struct task_struct *p,
> +				       const struct kernel_clone_args *args);
>  void shstk_free(struct task_struct *p);
>  int setup_signal_shadow_stack(struct ksignal *ksig);
>  int restore_signal_shadow_stack(void);
> @@ -26,8 +27,10 @@ static inline long shstk_prctl(struct task_struct *task, int option,
>  			       unsigned long arg2) { return -EINVAL; }
>  static inline void reset_thread_features(void) {}
>  static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p,
> -						     unsigned long clone_flags,
> -						     unsigned long stack_size) { return 0; }
> +						     const struct kernel_clone_args *args)
> +{
> +	return 0;
> +}
>  static inline void shstk_free(struct task_struct *p) {}
>  static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; }
>  static inline int restore_signal_shadow_stack(void) { return 0; }
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index b6f4e8399fca..a9ca80ea5056 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -207,7 +207,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
>  	 * is disabled, new_ssp will remain 0, and fpu_clone() will know not to
>  	 * update it.
>  	 */
> -	new_ssp = shstk_alloc_thread_stack(p, clone_flags, args->stack_size);
> +	new_ssp = shstk_alloc_thread_stack(p, args);
>  	if (IS_ERR_VALUE(new_ssp))
>  		return PTR_ERR((void *)new_ssp);
>  
> diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
> index 59e15dd8d0f8..a14f47d70dfb 100644
> --- a/arch/x86/kernel/shstk.c
> +++ b/arch/x86/kernel/shstk.c
> @@ -191,38 +191,61 @@ void reset_thread_features(void)
>  	current->thread.features_locked = 0;
>  }
>  
> -unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
> -				       unsigned long stack_size)
> +unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
> +				       const struct kernel_clone_args *args)
>  {
>  	struct thread_shstk *shstk = &tsk->thread.shstk;
> +	unsigned long clone_flags = args->flags;
>  	unsigned long addr, size;
>  
>  	/*
>  	 * If shadow stack is not enabled on the new thread, skip any
> -	 * switch to a new shadow stack.
> +	 * implicit switch to a new shadow stack and reject attempts to
> +	 * explciitly specify one.
>  	 */
> -	if (!features_enabled(ARCH_SHSTK_SHSTK))
> -		return 0;
> +	if (!features_enabled(ARCH_SHSTK_SHSTK)) {
> +		if (args->shadow_stack_size)
> +			return (unsigned long)ERR_PTR(-EINVAL);
>  
> -	/*
> -	 * For CLONE_VFORK the child will share the parents shadow stack.
> -	 * Make sure to clear the internal tracking of the thread shadow
> -	 * stack so the freeing logic run for child knows to leave it alone.
> -	 */
> -	if (clone_flags & CLONE_VFORK) {
> -		shstk->base = 0;
> -		shstk->size = 0;
>  		return 0;
>  	}
>  
>  	/*
> -	 * For !CLONE_VM the child will use a copy of the parents shadow
> -	 * stack.
> +	 * If the user specified a shadow stack then do some basic
> +	 * validation and use it, otherwise fall back to a default
> +	 * shadow stack size if the clone_flags don't indicate an
> +	 * allocation is unneeded.
>  	 */
> -	if (!(clone_flags & CLONE_VM))
> -		return 0;
> +	if (args->shadow_stack_size) {
> +		size = args->shadow_stack_size;
> +
> +		if (size < 8)
> +			return (unsigned long)ERR_PTR(-EINVAL);

It would probably be useful to add a #define SHADOW_STACK_SIZE_MIN 8
instead of a raw number here.

Any reasonable maximum that should be assumed here? IOW, what happens if
userspace starts specifying a 4G shadow_stack_size with each clone3()
call for lolz?

And I think we should move the shadow_stack_size validation into
clone3_shadow_stack_valid() instead of having each architecture do its
own thing in its own handler. IOW, share as much common code as
possible. Another reason to wait for that arm support to land...
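Roughly what I have in mind, building on the clone3_shadow_stack_valid()
you add further down (just a sketch; SHADOW_STACK_SIZE_MIN and the
missing maximum are placeholders for whatever we agree on, not a tested
implementation):

#define SHADOW_STACK_SIZE_MIN 8

static inline bool clone3_shadow_stack_valid(struct kernel_clone_args *kargs)
{
	if (!kargs->shadow_stack_size)
		return true;

	/* Common minimum (and eventually maximum?) check for all architectures. */
	if (kargs->shadow_stack_size < SHADOW_STACK_SIZE_MIN)
		return false;

	/* The architecture must check support on the specific machine. */
	return IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK);
}
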
> +	} else {
> +		/*
> +		 * For CLONE_VFORK the child will share the parents
> +		 * shadow stack. Make sure to clear the internal
> +		 * tracking of the thread shadow stack so the freeing
> +		 * logic run for child knows to leave it alone.
> +		 */
> +		if (clone_flags & CLONE_VFORK) {
> +			shstk->base = 0;
> +			shstk->size = 0;
> +			return 0;
> +		}

Why is the CLONE_VFORK handling only necessary if shadow_stack_size is
unset? In general, a comment or explanation on the interaction between
CLONE_VFORK and shadow_stack_size would be helpful.

> +
> +		/*
> +		 * For !CLONE_VM the child will use a copy of the
> +		 * parents shadow stack.
> +		 */
> +		if (!(clone_flags & CLONE_VM))
> +			return 0;
> +
> +		size = args->stack_size;
> +
> +	}
>  
> -	size = adjust_shstk_size(stack_size);
> +	size = adjust_shstk_size(size);
>  	addr = alloc_shstk(0, size, 0, false);
>  	if (IS_ERR_VALUE(addr))
>  		return addr;
> diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
> index a23af225c898..e86a09cfccd8 100644
> --- a/include/linux/sched/task.h
> +++ b/include/linux/sched/task.h
> @@ -41,6 +41,7 @@ struct kernel_clone_args {
>  	void *fn_arg;
>  	struct cgroup *cgrp;
>  	struct css_set *cset;
> +	unsigned long shadow_stack_size;
>  };
>  
>  /*
> diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
> index 3bac0a8ceab2..a998b6d0c897 100644
> --- a/include/uapi/linux/sched.h
> +++ b/include/uapi/linux/sched.h
> @@ -84,6 +84,8 @@
>   *                kernel's limit of nested PID namespaces.
>   * @cgroup:       If CLONE_INTO_CGROUP is specified set this to
>   *                a file descriptor for the cgroup.
> + * @shadow_stack_size: Specify the size of the shadow stack to allocate
> + *                     for the child process.
>   *
>   * The structure is versioned by size and thus extensible.
>   * New struct members must go at the end of the struct and
> @@ -101,12 +103,14 @@ struct clone_args {
>  	__aligned_u64 set_tid;
>  	__aligned_u64 set_tid_size;
>  	__aligned_u64 cgroup;
> +	__aligned_u64 shadow_stack_size;
>  };
>  #endif
>  
>  #define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
>  #define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
>  #define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
> +#define CLONE_ARGS_SIZE_VER3 96 /* sizeof fourth published struct */
>  
>  /*
>   * Scheduling policies
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 10917c3e1f03..b8ca8194bca5 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -3067,7 +3067,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
>  		     CLONE_ARGS_SIZE_VER1);
>  	BUILD_BUG_ON(offsetofend(struct clone_args, cgroup) !=
>  		     CLONE_ARGS_SIZE_VER2);
> -	BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER2);
> +	BUILD_BUG_ON(offsetofend(struct clone_args, shadow_stack_size) !=
> +		     CLONE_ARGS_SIZE_VER3);
> +	BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER3);
>  
>  	if (unlikely(usize > PAGE_SIZE))
>  		return -E2BIG;
> @@ -3110,6 +3112,7 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
>  		.tls		= args.tls,
>  		.set_tid_size	= args.set_tid_size,
>  		.cgroup		= args.cgroup,
> +		.shadow_stack_size = args.shadow_stack_size,

Mild personal OCD: can you keep these all aligned, please?

>  	};
>  
>  	if (args.set_tid &&
> @@ -3150,6 +3153,21 @@ static inline bool clone3_stack_valid(struct kernel_clone_args *kargs)
>  	return true;
>  }
>  
> +/**
> + * clone3_shadow_stack_valid - check and prepare shadow stack
> + * @kargs: kernel clone args
> + *
> + * Verify that shadow stacks are only enabled if supported.
> + */
> +static inline bool clone3_shadow_stack_valid(struct kernel_clone_args *kargs)
> +{
> +	if (!kargs->shadow_stack_size)
> +		return true;
> +
> +	/* The architecture must check support on the specific machine */
> +	return IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK);
> +}
> +
>  static bool clone3_args_valid(struct kernel_clone_args *kargs)
>  {
>  	/* Verify that no unknown flags are passed along. */
> @@ -3172,7 +3190,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs)
>  	    kargs->exit_signal)
>  		return false;
>  
> -	if (!clone3_stack_valid(kargs))
> +	if (!clone3_stack_valid(kargs) || !clone3_shadow_stack_valid(kargs))
>  		return false;
>  
>  	return true;
>
> --
> 2.30.2
>
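For completeness, below is roughly the userspace calling convention I
understand this to add (an untested sketch, not taken from this series:
the struct is a local mirror of the VER3 layout above since released
uapi headers don't carry the new field yet, and the 512 KiB size is an
arbitrary example). A minimal fork-style call is enough to show where
the field goes; a real threading user would additionally pass CLONE_VM
and a stack:

#define _GNU_SOURCE
#include <linux/types.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

/* Local mirror of the extended clone_args proposed here (CLONE_ARGS_SIZE_VER3). */
struct clone_args_ver3 {
	__aligned_u64 flags;
	__aligned_u64 pidfd;
	__aligned_u64 child_tid;
	__aligned_u64 parent_tid;
	__aligned_u64 exit_signal;
	__aligned_u64 stack;
	__aligned_u64 stack_size;
	__aligned_u64 tls;
	__aligned_u64 set_tid;
	__aligned_u64 set_tid_size;
	__aligned_u64 cgroup;
	__aligned_u64 shadow_stack_size;	/* new in this series */
};

int main(void)
{
	struct clone_args_ver3 args;
	pid_t pid;

	memset(&args, 0, sizeof(args));
	args.exit_signal = SIGCHLD;
	args.shadow_stack_size = 512 * 1024;	/* explicitly sized shadow stack */

	pid = syscall(__NR_clone3, &args, sizeof(args));
	if (pid < 0) {
		/* e.g. EINVAL when the kernel or machine has no shadow stack support */
		perror("clone3");
		return EXIT_FAILURE;
	}
	if (pid == 0)
		_exit(EXIT_SUCCESS);	/* child */

	waitpid(pid, NULL, 0);		/* parent */
	return EXIT_SUCCESS;
}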