Date: Thu, 23 Nov 2023 11:10:24 +0100
From: Christian Brauner
To: Mark Brown
Cc: Szabolcs Nagy, "Rick P. Edgecombe", Deepak Gupta, "H.J. Lu",
 Florian Weimer, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, x86@kernel.org, "H. Peter Anvin", Peter Zijlstra,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
 Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
 Valentin Schneider, Shuah Khan, linux-kernel@vger.kernel.org,
 Catalin Marinas, Will Deacon, Kees Cook, jannh@google.com,
 linux-kselftest@vger.kernel.org, linux-api@vger.kernel.org,
 David Hildenbrand, nd@arm.com
Subject: Re: [PATCH RFT v3 0/5] fork: Support shadow stacks in clone3()
Message-ID: <20231123-geflattert-mausklick-63d8ebcacffb@brauner>
References: <20231120-clone3-shadow-stack-v3-0-a7b8ed3e2acc@kernel.org>
 <20231121-urlaub-motivieren-c9d7ee1a6058@brauner>

On Tue, Nov 21, 2023 at 04:09:40PM +0000, Mark Brown wrote:
> On Tue, Nov 21, 2023 at 12:21:37PM +0000, Szabolcs Nagy wrote:
> > The 11/21/2023 11:17, Christian Brauner wrote:
> > > I have a few questions that are probably me just not knowing much
> > > about shadow stacks so
> > > hopefully I'm not asking you to write a thesis by accident:
>
> One thing it feels like it's worth saying up front here is that shadow
> stacks are userspace memory with special permissions and instructions
> for access - they are mapped into the userspace address range and
> userspace can directly interact with them in restricted ways. For
> example there's some thought to using shadow stacks in unwinders since
> all the return addresses are stored in a single convenient block of
> memory which is much harder to corrupt. Overflowing a shadow stack
> results in userspace getting a memory access fault just as with other
> memory access issues.

Thanks for that summary.

> > > (2) With what other interfaces is implicit allocation and
> > > deallocation not consistent? I don't understand this argument. The
> > > kernel creates a shadow stack as a security measure to store return
> > > addresses. It seems to me exactly that the kernel should implicitly
> > > allocate and deallocate the shadow stack and not have userspace
> > > muck around with its size?
> >
> > the kernel is not supposed to impose stack size policy or a
> > particular programming model that limits the stack management
> > options nor prevent the handling of stack overflows.
>
> The inconsistency here is with the management of the standard stack -
> with the standard stack userspace passes an already allocated address
> range to the kernel. A constant tension during review of the shadow
> stack interfaces has been that shadow stack memory is userspace
> memory, but the security constraints mean that we've come down on the
> side of having a custom allocation syscall for it instead of using
> flags on mmap() and friends like people often expect, and now having
> it allocated as part of clone3(). The aim is to highlight that this
> difference is deliberately chosen for specific reasons rather than
> just carelessness.

So you have two interfaces for allocating a shadow stack. The first
one is to explicitly allocate a shadow stack via map_shadow_stack().
The second one is an implicit allocation during clone3(), and you want
to allow explicitly influencing that.

> > > (3) Why is it safe for userspace to request the shadow stack size?
> > > What if they request a tiny shadow stack size? Should this
> > > interface require any privilege?
> >
> > user can allocate huge or tiny stacks already.
> >
> > and i think userspace can take control over shadow stack management:
> > it can disable signals, start a clone child with stack_size == 1
> > page, map_shadow_stack and switch to it, enable signals. however
> > this is complicated, leaks 1 page of kernel allocated shadow stack
> > (+reserved guard page, i guess userspace could unmap, not sure if
> > that works currently) and requires additional syscalls.
>
> The other thing here is that if userspace gets this wrong it'll result
> in the userspace process hitting the top of the stack and getting
> fatal signals in a similar manner to what happens if it gets the size
> of the standard stack wrong (the kernel allocation does mean that
> there should always be guard pages and it's harder to overrun the
> stack and corrupt adjacent memory). There doesn't seem to be any
> meaningful risk here over what userspace can already do to itself
> anyway as part of thread allocation.

clone3() _aimed_ to clean up the stack handling a bit but we had
concerns that deviating too much from legacy clone() would mean
userspace couldn't fully replace it. So we would have liked to clean
up stack handling a lot more but there's limits to that. We do however
perform basic sanity checks now.

> > > (4) Why isn't the @stack_size argument I added for clone3()
> > > enough? If it is specified can't the size of the shadow stack be
> > > derived from it?
> > shadow stack only contains return addresses so it is proportional
> > to the number of stack frames, not the stack size and it must
> > account for sigaltstack too, not just the thread stack.
> >
> > if you make minimal assumptions about stack usage and ignore the
> > sigaltstack issue then the worst case shadow stack requirement
> > is indeed proportional to the stack_size, but this upper bound
> > can be pessimistic and userspace knows the tradeoffs better.
>
> It's also worth pointing out here that the existing shadow stack
> support for x86 and in review code for arm64 make exactly these
> assumptions and guesses at a shadow stack size based on the
> stack_size for the thread.

Ok.

> There's just been a general lack of enthusiasm for the fact that due
> to the need to store variables on the normal stack the resulting
> shadow stack is very likely to be substantially overallocated but we
> can't safely reduce the size without information from userspace.

Ok.

> > > And my current main objection is that shadow stacks were just
> > > released to userspace. There can't be a massive amount of users
> > > yet - outside of maybe early adopters.
> >
> > no upstream libc has code to enable shadow stacks at this point
> > so there are exactly 0 users in the open. (this feature requires
> > runtime support)
> >
> > the change is expected to allow wider deployability. (e.g. not
> > just in glibc)
>
> Right, and the lack of any userspace control of the shadow stack size
> has been a review concern with the arm64 GCS series which I'm trying
> to address here. The main concern is that userspaces that start a lot
> of threads are going to start using a lot more address space than
> they need to when shadow stacks are enabled. Given the fairly long
> deployment pipeline from extending a syscall to end users who might
> be using the feature in conjunction with imposing resource limits it
> does seem like a reasonable problem to anticipate.

Ok, I can see that argument.
> > > The fact that there are other architectures that bring in a
> > > similar feature makes me even more hesitant. If they have all
> > > agreed _and_ implemented shadow stacks and have unified semantics
> > > then we can consider exposing control knobs to userspace that
> > > aren't implicitly architecture specific currently.
>
> To be clear the reason I'm working on this is that I've implemented
> the arm64 support, I don't even have any access to x86 systems that
> have the feature (hence the RFT in the subject line) - Rick Edgecombe
> did the x86 work.

Yes, I'm aware.

> The arm64 code is still in review, the userspace interface is very
> similar to that for x86 and there doesn't seem to be any controversy
> there which makes me expect that a change is unlikely. Unlike x86 we
> only have a spec and virtual implementations at present, there's no
> imminent hardware, so we're taking our time with making sure that
> everything is working well. Deepak Gupta (in CC) has been reviewing
> the series from the point of view of RISC-V. I think we're all fairly
> well aligned on the requirements here.

I'm still not enthusiastic that we only have one implementation for
this in the kernel. What's the harm in waiting until the arm patches
are merged? This shouldn't result in chicken and egg: if the
implementations are sufficiently similar then we can do an appropriate
clone3() extension.