Date: Mon, 13 May 2024 11:36:49 -0700
From: Charlie Jenkins
To: Deepak Gupta
Cc: paul.walmsley@sifive.com, rick.p.edgecombe@intel.com, broonie@kernel.org,
    Szabolcs.Nagy@arm.com, kito.cheng@sifive.com, keescook@chromium.org,
    ajones@ventanamicro.com, conor.dooley@microchip.com, cleger@rivosinc.com,
    atishp@atishpatra.org, alex@ghiti.fr, bjorn@rivosinc.com,
    alexghiti@rivosinc.com, samuel.holland@sifive.com, conor@kernel.org,
    linux-doc@vger.kernel.org, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kselftest@vger.kernel.org, corbet@lwn.net, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, robh+dt@kernel.org,
    krzysztof.kozlowski+dt@linaro.org, oleg@redhat.com,
    akpm@linux-foundation.org, arnd@arndb.de, ebiederm@xmission.com,
    Liam.Howlett@oracle.com, vbabka@suse.cz, lstoakes@gmail.com,
    shuah@kernel.org, brauner@kernel.org, andy.chiu@sifive.com,
    jerry.shih@sifive.com, hankuan.chen@sifive.com, greentime.hu@sifive.com,
    evan@rivosinc.com, xiao.w.wang@intel.com, apatel@ventanamicro.com,
    mchitale@ventanamicro.com, dbarboza@ventanamicro.com, sameo@rivosinc.com,
    shikemeng@huaweicloud.com, willy@infradead.org, vincent.chen@sifive.com,
    guoren@kernel.org, samitolvanen@google.com, songshuaishuai@tinylab.org,
    gerg@kernel.org, heiko@sntech.de, bhe@redhat.com,
    jeeheng.sia@starfivetech.com, cyy@cyyself.name, maskray@google.com,
    ancientmodern4@gmail.com, mathis.salmen@matsal.de,
    cuiyunhui@bytedance.com, bgray@linux.ibm.com, mpe@ellerman.id.au,
    baruch@tkos.co.il, alx@kernel.org, david@redhat.com,
    catalin.marinas@arm.com, revest@chromium.org, josh@joshtriplett.org,
    shr@devkernel.io, deller@gmx.de, omosnace@redhat.com, ojeda@kernel.org,
    jhubbard@nvidia.com
Subject: Re: [PATCH v3 10/29] riscv/mm : ensure PROT_WRITE leads to VM_READ | VM_WRITE
References: <20240403234054.2020347-1-debug@rivosinc.com>
 <20240403234054.2020347-11-debug@rivosinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Mon, May 13, 2024 at 10:47:25AM -0700, Deepak Gupta wrote:
> On Fri, May 10, 2024 at 02:02:54PM -0700, Charlie Jenkins wrote:
> > On Wed, Apr 03, 2024 at 04:34:58PM -0700, Deepak Gupta wrote:
> > > `arch_calc_vm_prot_bits` is implemented on risc-v to return VM_READ |
> > > VM_WRITE if PROT_WRITE is specified. Similarly `riscv_sys_mmap` is
> > > updated to convert all incoming PROT_WRITE to (PROT_WRITE | PROT_READ).
> > > This is to make sure that any existing apps using PROT_WRITE still work.
> > >
> > > Earlier `protection_map[VM_WRITE]` used to pick read-write PTE encodings.
> > > Now `protection_map[VM_WRITE]` will always pick PAGE_SHADOWSTACK PTE
> > > encodings for shadow stack. Above changes ensure that existing apps
> > > continue to work because underneath kernel will be picking
> > > `protection_map[VM_WRITE|VM_READ]` PTE encodings.
> > >
> > > Signed-off-by: Deepak Gupta <debug@rivosinc.com>
> > > ---
> > >  arch/riscv/include/asm/mman.h    | 24 ++++++++++++++++++++++++
> > >  arch/riscv/include/asm/pgtable.h |  1 +
> > >  arch/riscv/kernel/sys_riscv.c    | 11 +++++++++++
> > >  arch/riscv/mm/init.c             |  2 +-
> > >  mm/mmap.c                        |  1 +
> > >  5 files changed, 38 insertions(+), 1 deletion(-)
> > >  create mode 100644 arch/riscv/include/asm/mman.h
> > >
> > > diff --git a/arch/riscv/include/asm/mman.h b/arch/riscv/include/asm/mman.h
> > > new file mode 100644
> > > index 000000000000..ef9fedf32546
> > > --- /dev/null
> > > +++ b/arch/riscv/include/asm/mman.h
> > > @@ -0,0 +1,24 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +#ifndef __ASM_MMAN_H__
> > > +#define __ASM_MMAN_H__
> > > +
> > > +#include
> > > +#include
> > > +#include
> > > +
> > > +static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
> > > +	unsigned long pkey __always_unused)
> > > +{
> > > +	unsigned long ret = 0;
> > > +
> > > +	/*
> > > +	 * If PROT_WRITE was specified, force it to VM_READ | VM_WRITE.
> > > +	 * Only VM_WRITE means shadow stack.
> > > +	 */
> > > +	if (prot & PROT_WRITE)
> > > +		ret = (VM_READ | VM_WRITE);
> > > +	return ret;
> > > +}
> > > +#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
> > > +
> > > +#endif /* ! __ASM_MMAN_H__ */
> > > diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> > > index 6066822e7396..4d5983bc6766 100644
> > > --- a/arch/riscv/include/asm/pgtable.h
> > > +++ b/arch/riscv/include/asm/pgtable.h
> > > @@ -184,6 +184,7 @@ extern struct pt_alloc_ops pt_ops __initdata;
> > >  #define PAGE_READ_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
> > >  #define PAGE_WRITE_EXEC		__pgprot(_PAGE_BASE | _PAGE_READ |	\
> > >  					 _PAGE_EXEC | _PAGE_WRITE)
> > > +#define PAGE_SHADOWSTACK	__pgprot(_PAGE_BASE | _PAGE_WRITE)
> > >
> > >  #define PAGE_COPY		PAGE_READ
> > >  #define PAGE_COPY_EXEC		PAGE_READ_EXEC
> > > diff --git a/arch/riscv/kernel/sys_riscv.c b/arch/riscv/kernel/sys_riscv.c
> > > index f1c1416a9f1e..846c36b1b3d5 100644
> > > --- a/arch/riscv/kernel/sys_riscv.c
> > > +++ b/arch/riscv/kernel/sys_riscv.c
> > > @@ -8,6 +8,8 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> > > +#include
> > >
> > >  static long riscv_sys_mmap(unsigned long addr, unsigned long len,
> > >  			   unsigned long prot, unsigned long flags,
> > > @@ -17,6 +19,15 @@ static long riscv_sys_mmap(unsigned long addr, unsigned long len,
> > >  	if (unlikely(offset & (~PAGE_MASK >> page_shift_offset)))
> > >  		return -EINVAL;
> > >
> > > +	/*
> > > +	 * If only PROT_WRITE is specified then extend that to PROT_READ.
> > > +	 * protection_map[VM_WRITE] is now going to select shadow stack encodings.
> > > +	 * So specifying PROT_WRITE actually should select protection_map[VM_WRITE | VM_READ].
> > > +	 * If user wants to create shadow stack then they should use `map_shadow_stack` syscall.
> > > +	 */
> > > +	if (unlikely((prot & PROT_WRITE) && !(prot & PROT_READ)))
> >
> > The comment says that this should extend to PROT_READ if only
> > PROT_WRITE is specified. This condition instead is checking if
> > PROT_WRITE is selected but PROT_READ is not. If prot is (VM_EXEC |
> > VM_WRITE) then it would be extended to (VM_EXEC | VM_WRITE | VM_READ).
> > This will not currently cause any issues because these both map to the
> > same value in the protection_map, PAGE_COPY_EXEC; however, this seems
> > not to be the intention of this change.
> >
> > prot == PROT_WRITE better suits the condition explained in the comment.
>
> If someone specifies (PROT_EXEC | PROT_WRITE) today, it works because
> of the way permissions are set up in `protection_map`. On risc-v there is
> no way to have a page which is execute- and write-only. So the expectation
> is that if some apps were using `PROT_EXEC | PROT_WRITE` today, they were
> working because internally it was translating to read, write and execute
> at the page-permissions level. This patch makes sure that it stays the
> same from the page-permissions perspective.
>
> If someone was using PROT_EXEC, it may translate to execute-only, and this
> change doesn't impact that.
>
> The patch simply looks for the presence of `PROT_WRITE` and the absence of
> `PROT_READ` in the protection flags, and if that condition is satisfied,
> it assumes that the caller assumed the page is going to be read-allowed
> as well.

The purpose of this change is compatibility with shadow stack pages, but
this affects flags for pages that are not shadow stack pages. Adding
PROT_READ to the other cases is redundant, as protection_map already
handles that mapping. Permissions being strictly PROT_WRITE is the only
case that needs to be handled, and is the only case that is called out in
the commit message and in the comment.
- Charlie

> >
> > >
> > > +		prot |= PROT_READ;
> > > +
> > >  	return ksys_mmap_pgoff(addr, len, prot, flags, fd,
> > >  			       offset >> (PAGE_SHIFT - page_shift_offset));
> > >  }
> > > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> > > index fa34cf55037b..98e5ece4052a 100644
> > > --- a/arch/riscv/mm/init.c
> > > +++ b/arch/riscv/mm/init.c
> > > @@ -299,7 +299,7 @@ pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
> > >  static const pgprot_t protection_map[16] = {
> > >  	[VM_NONE]			= PAGE_NONE,
> > >  	[VM_READ]			= PAGE_READ,
> > > -	[VM_WRITE]			= PAGE_COPY,
> > > +	[VM_WRITE]			= PAGE_SHADOWSTACK,
> > >  	[VM_WRITE | VM_READ]		= PAGE_COPY,
> > >  	[VM_EXEC]			= PAGE_EXEC,
> > >  	[VM_EXEC | VM_READ]		= PAGE_READ_EXEC,
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index d89770eaab6b..57a974f49b00 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -47,6 +47,7 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> >
> > It doesn't seem like this is necessary for this patch.
>
> Thanks. Yeah it looks like I forgot to remove this over the churn.
> Will fix it.
>
> >
> > - Charlie
> >
> > >
> > >  #include
> > >  #include
> > > --
> > > 2.43.2
> > >