From: Deepak Gupta <debug@rivosinc.com>
To: paul.walmsley@sifive.com, rick.p.edgecombe@intel.com, broonie@kernel.org, Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org, ajones@ventanamicro.com, conor.dooley@microchip.com, cleger@rivosinc.com, atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com, alexghiti@rivosinc.com, samuel.holland@sifive.com, palmer@sifive.com, conor@kernel.org, linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org
Cc: corbet@lwn.net, tech-j-ext@lists.risc-v.org, palmer@dabbelt.com, aou@eecs.berkeley.edu, robh+dt@kernel.org, krzysztof.kozlowski+dt@linaro.org, oleg@redhat.com, akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com, Liam.Howlett@oracle.com, vbabka@suse.cz, lstoakes@gmail.com, shuah@kernel.org, brauner@kernel.org, debug@rivosinc.com, andy.chiu@sifive.com, jerry.shih@sifive.com, hankuan.chen@sifive.com, greentime.hu@sifive.com, evan@rivosinc.com, xiao.w.wang@intel.com, charlie@rivosinc.com, apatel@ventanamicro.com, mchitale@ventanamicro.com, dbarboza@ventanamicro.com, sameo@rivosinc.com, shikemeng@huaweicloud.com, willy@infradead.org, vincent.chen@sifive.com, guoren@kernel.org, samitolvanen@google.com, songshuaishuai@tinylab.org, gerg@kernel.org, heiko@sntech.de, bhe@redhat.com, jeeheng.sia@starfivetech.com, cyy@cyyself.name, maskray@google.com, ancientmodern4@gmail.com, mathis.salmen@matsal.de, cuiyunhui@bytedance.com, bgray@linux.ibm.com, mpe@ellerman.id.au, baruch@tkos.co.il, alx@kernel.org, david@redhat.com, catalin.marinas@arm.com, revest@chromium.org,
    josh@joshtriplett.org, shr@devkernel.io, deller@gmx.de, omosnace@redhat.com, ojeda@kernel.org, jhubbard@nvidia.com
Subject: [PATCH v2 14/27] riscv/shstk: If needed allocate a new shadow stack on clone
Date: Thu, 28 Mar 2024 21:44:46 -0700
Message-Id: <20240329044459.3990638-15-debug@rivosinc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240329044459.3990638-1-debug@rivosinc.com>
References: <20240329044459.3990638-1-debug@rivosinc.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Userspace specifies CLONE_VM to share the address space and spawn a new
thread. `clone` allows userspace to specify a new stack for the new thread,
but there is no way to specify a new shadow stack base address without
changing the API. This patch therefore allocates a new shadow stack whenever
CLONE_VM is given.

In the case of CLONE_VFORK, the parent is suspended until the child
finishes, so the child can use the parent's shadow stack. In the case of
!CLONE_VM, copy-on-write kicks in because the entire address space is
copied from parent to child.

`clone3` is extensible and could provide a mechanism for passing a shadow
stack as an input parameter. That is not settled yet and is being actively
discussed on the mailing list; once it is settled, this commit will adapt
to it.

Signed-off-by: Deepak Gupta <debug@rivosinc.com>
---
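For context (not part of the commit): the three creation paths described
above map onto ordinary userspace calls as in the sketch below. It is
illustrative only; it does not observe shadow-stack state, and thread_fn()
and the build command are just stand-ins. It simply exercises the flag
combinations copy_thread() will see: pthread_create() uses CLONE_VM, fork()
does not, and vfork() adds CLONE_VFORK.

/*
 * Illustrative only: which shadow-stack path each creation primitive takes
 * (the decision is made in the kernel by copy_thread()/shstk_alloc_thread_stack()).
 * Build with: cc -pthread demo.c
 */
#include <pthread.h>
#include <sys/wait.h>
#include <unistd.h>

static void *thread_fn(void *arg)
{
	(void)arg;
	/* CLONE_VM: the kernel allocates a fresh shadow stack for this thread */
	return NULL;
}

int main(void)
{
	pthread_t t;
	pid_t pid;

	/* pthread_create() -> clone(CLONE_VM | ...): new shadow stack */
	pthread_create(&t, NULL, thread_fn, NULL);
	pthread_join(t, NULL);

	/* fork() -> !CLONE_VM: child gets a copy-on-write copy of the parent's */
	pid = fork();
	if (pid == 0)
		_exit(0);
	waitpid(pid, NULL, 0);

	/* vfork() -> CLONE_VFORK: child runs on the parent's shadow stack */
	pid = vfork();
	if (pid == 0)
		_exit(0);
	waitpid(pid, NULL, 0);

	return 0;
}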
 arch/riscv/include/asm/usercfi.h |  39 ++++++++++
 arch/riscv/kernel/process.c      |  12 +++
 arch/riscv/kernel/usercfi.c      | 121 +++++++++++++++++++++++++++++++
 3 files changed, 172 insertions(+)

diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index 4fa201b4fc4e..b47574a7a8c9 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -8,6 +8,9 @@
 #ifndef __ASSEMBLY__
 #include
 
+struct task_struct;
+struct kernel_clone_args;
+
 #ifdef CONFIG_RISCV_USER_CFI
 struct cfi_status {
 	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
@@ -17,6 +20,42 @@ struct cfi_status {
 	unsigned long shdw_stk_size; /* size of shadow stack */
 };
 
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args);
+void shstk_release(struct task_struct *tsk);
+void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size);
+void set_active_shstk(struct task_struct *task, unsigned long shstk_addr);
+bool is_shstk_enabled(struct task_struct *task);
+
+#else
+
+static inline unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+						     const struct kernel_clone_args *args)
+{
+	return 0;
+}
+
+static inline void shstk_release(struct task_struct *tsk)
+{
+
+}
+
+static inline void set_shstk_base(struct task_struct *task, unsigned long shstk_addr,
+				  unsigned long size)
+{
+
+}
+
+static inline void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
+{
+
+}
+
+static inline bool is_shstk_enabled(struct task_struct *task)
+{
+	return false;
+}
+
 #endif /* CONFIG_RISCV_USER_CFI */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index d864eef5a10d..9551017d16db 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 register unsigned long gp_in_global __asm__("gp");
 
@@ -197,6 +198,9 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 
 void exit_thread(struct task_struct *tsk)
 {
+	if (IS_ENABLED(CONFIG_RISCV_USER_CFI))
+		shstk_release(tsk);
+
 	return;
 }
 
@@ -205,6 +209,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 	unsigned long clone_flags = args->flags;
 	unsigned long usp = args->stack;
 	unsigned long tls = args->tls;
+	unsigned long ssp = 0;
 	struct pt_regs *childregs = task_pt_regs(p);
 
 	memset(&p->thread.s, 0, sizeof(p->thread.s));
@@ -220,11 +225,18 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 		p->thread.s[0] = (unsigned long)args->fn;
 		p->thread.s[1] = (unsigned long)args->fn_arg;
 	} else {
+		/* allocate new shadow stack if needed; in case of CLONE_VM we have to */
+		ssp = shstk_alloc_thread_stack(p, args);
+		if (IS_ERR_VALUE(ssp))
+			return PTR_ERR((void *)ssp);
+
 		*childregs = *(current_pt_regs());
 		/* Turn off status.VS */
 		riscv_v_vstate_off(childregs);
 		if (usp) /* User fork */
 			childregs->sp = usp;
+		if (ssp) /* if needed, set new ssp */
+			set_active_shstk(p, ssp);
 		if (clone_flags & CLONE_SETTLS)
 			childregs->tp = tls;
 		childregs->a0 = 0; /* Return value of fork() */
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index c4ed0d4e33d6..11ef7ab925c9 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -19,6 +19,41 @@
 
 #define SHSTK_ENTRY_SIZE sizeof(void *)
 
+bool is_shstk_enabled(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ubcfi_en ? true : false;
+}
+
+void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size)
+{
+	task->thread_info.user_cfi_state.shdw_stk_base = shstk_addr;
+	task->thread_info.user_cfi_state.shdw_stk_size = size;
+}
+
+unsigned long get_shstk_base(struct task_struct *task, unsigned long *size)
+{
+	if (size)
+		*size = task->thread_info.user_cfi_state.shdw_stk_size;
+	return task->thread_info.user_cfi_state.shdw_stk_base;
+}
+
+void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
+{
+	task->thread_info.user_cfi_state.user_shdw_stk = shstk_addr;
+}
+
+/*
+ * If size is 0, then to be compatible with the regular stack we want it to be
+ * as big as the regular stack. Else PAGE_ALIGN it and return.
+ */
+static unsigned long calc_shstk_size(unsigned long size)
+{
+	if (size)
+		return PAGE_ALIGN(size);
+
+	return PAGE_ALIGN(min_t(unsigned long long, rlimit(RLIMIT_STACK), SZ_4G));
+}
+
 /*
  * Writes on shadow stack can either be `sspush` or `ssamoswap`. `sspush` can happen
  * implicitly on current shadow stack pointed to by CSR_SSP. `ssamoswap` takes pointer to
@@ -147,3 +182,89 @@ SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsi
 
 	return allocate_shadow_stack(addr, aligned_size, size, set_tok);
 }
+
+/*
+ * This gets called during clone/clone3/fork and is needed to allocate a shadow stack
+ * for cases where CLONE_VM is specified: the user supplies a different stack, so a
+ * separate shadow stack is needed too. How a separate shadow stack is specified by
+ * the user is still being debated; once that's settled, remove this part of the comment.
+ * This function simply returns 0 if shadow stacks are not supported or if a separate
+ * shadow stack allocation is not needed (as in the !CLONE_VM case).
+ */
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args)
+{
+	unsigned long addr, size;
+
+	/* If shadow stack is not supported, return 0 */
+	if (!cpu_supports_shadow_stack())
+		return 0;
+
+	/*
+	 * If shadow stack is not enabled on the new thread, skip any
+	 * switch to a new shadow stack.
+	 */
+	if (!is_shstk_enabled(tsk))
+		return 0;
+
+	/*
+	 * For CLONE_VFORK the child will share the parent's shadow stack.
+	 * Set base = 0 and size = 0; this is a special marker to track this state
+	 * so the freeing logic run for the child knows to leave it alone.
+	 */
+	if (args->flags & CLONE_VFORK) {
+		set_shstk_base(tsk, 0, 0);
+		return 0;
+	}
+
+	/*
+	 * For !CLONE_VM the child will use a copy of the parent's shadow
+	 * stack.
+	 */
+	if (!(args->flags & CLONE_VM))
+		return 0;
+
+	/*
+	 * Reaching here means CLONE_VM was specified and thus a separate shadow
+	 * stack is needed for the new cloned thread. Note: the allocation below
+	 * happens in the current mm.
+	 */
+	size = calc_shstk_size(args->stack_size);
+	addr = allocate_shadow_stack(0, size, 0, false);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
+	set_shstk_base(tsk, addr, size);
+
+	return addr + size;
+}
+
+void shstk_release(struct task_struct *tsk)
+{
+	unsigned long base = 0, size = 0;
+	/* If shadow stack is not supported or not enabled, nothing to release */
+	if (!cpu_supports_shadow_stack() ||
+	    !is_shstk_enabled(tsk))
+		return;
+
+	/*
+	 * When fork() with CLONE_VM fails, the child (tsk) already has a
+	 * shadow stack allocated, and exit_thread() calls this function to
+	 * free it. In this case the parent (current) and the child share
+	 * the same mm struct. Move forward only when they're the same.
+	 */
+	if (!tsk->mm || tsk->mm != current->mm)
+		return;
+
+	/*
+	 * We know shadow stack is enabled but if base is NULL, then
+	 * this task is not managing its own shadow stack (CLONE_VFORK). So
+	 * skip freeing it.
+	 */
+	base = get_shstk_base(tsk, &size);
+	if (!base)
+		return;
+
+	vm_munmap(base, size);
+	set_shstk_base(tsk, 0, 0);
+}
-- 
2.43.2
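One note on the ordering of the checks in shstk_alloc_thread_stack() above:
vfork() passes CLONE_VM together with CLONE_VFORK, so the CLONE_VFORK case
has to be tested before the CLONE_VM case. A small standalone model of that
decision follows; it is plain C, not kernel code, and enum shstk_action and
shstk_action_for() are illustrative names only.

/* Standalone model of the clone-time decision in shstk_alloc_thread_stack(). */
#define _GNU_SOURCE
#include <sched.h>

enum shstk_action {
	SHSTK_SHARE_PARENT,	/* CLONE_VFORK: use parent's stack, base/size marked 0 */
	SHSTK_INHERIT_COW,	/* !CLONE_VM: copy-on-write copy of the parent's stack */
	SHSTK_ALLOCATE_NEW,	/* CLONE_VM: fresh shadow stack, freed via shstk_release() */
};

static enum shstk_action shstk_action_for(unsigned long clone_flags)
{
	/* vfork() sets CLONE_VM | CLONE_VFORK, so check CLONE_VFORK first */
	if (clone_flags & CLONE_VFORK)
		return SHSTK_SHARE_PARENT;

	if (!(clone_flags & CLONE_VM))
		return SHSTK_INHERIT_COW;

	return SHSTK_ALLOCATE_NEW;
}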
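On sizing: when the caller does not pass an explicit stack size,
calc_shstk_size() makes the shadow stack as large as the regular stack
(RLIMIT_STACK), clamped to 4 GiB and page aligned. The userspace sketch
below mirrors that rule for illustration; SZ_4G, page_align() and
shstk_default_size() are made-up names here, not kernel or libc API.

/* Userspace sketch of the calc_shstk_size() sizing rule; illustrative only. */
#include <sys/resource.h>
#include <unistd.h>

#define SZ_4G	(4ULL << 30)	/* same value as the kernel's SZ_4G */

static unsigned long long page_align(unsigned long long v)
{
	unsigned long long page = (unsigned long long)sysconf(_SC_PAGESIZE);

	return (v + page - 1) & ~(page - 1);
}

static unsigned long long shstk_default_size(unsigned long long stack_size)
{
	struct rlimit rl;

	/* explicit request: just page-align it */
	if (stack_size)
		return page_align(stack_size);

	/* default: as big as the regular stack, but never more than 4 GiB */
	if (getrlimit(RLIMIT_STACK, &rl))
		return page_align(SZ_4G);

	if (rl.rlim_cur > SZ_4G)
		rl.rlim_cur = SZ_4G;

	return page_align(rl.rlim_cur);
}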