From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Joey Gouly, Ard Biesheuvel, Mark Rutland, Anshuman Khandual, David Hildenbrand, Peter Xu, Mike Rapoport, Shivansh Vij
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v1 1/2] arm64/mm: Move PTE_PROT_NONE and PMD_PRESENT_INVALID
Date: Wed, 24 Apr 2024 12:10:16 +0100
Message-Id: <20240424111017.3160195-2-ryan.roberts@arm.com>
In-Reply-To: <20240424111017.3160195-1-ryan.roberts@arm.com>
References: <20240424111017.3160195-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
X-Mailing-List: linux-kernel@vger.kernel.org
Previously PTE_PROT_NONE was occupying bit 58, one of the bits reserved for SW use when the PTE is valid. This is a waste of those precious SW bits, since PTE_PROT_NONE can only ever be set when valid is clear. Instead let's overlay it on what would be a HW bit if valid was set.

We need to be careful about which HW bit to choose, since some of them must be preserved; when pte_present() is true (as it is for a PTE_PROT_NONE pte), it is legitimate for the core to call various accessors, e.g. pte_dirty(), pte_write(), etc. There are also some accessors that are private to the arch which must continue to be honoured, e.g. pte_user(), pte_user_exec(), etc.

So we choose to overlay PTE_UXN; this effectively means that whenever a pte has PTE_PROT_NONE set, it will always report pte_user_exec() == false, which is obviously always correct.

As a result of this change, we must shuffle the layout of the arch-specific swap pte so that PTE_PROT_NONE is always zero and not overlapping with any other field. There is then no way to keep the `type` field contiguous without conflicting with PMD_PRESENT_INVALID (bit 59), which must also be 0 for a swap pte. So let's move PMD_PRESENT_INVALID to bit 60.

In the end, this frees up bit 58 for future use as a proper SW bit (e.g. soft-dirty or uffd-wp).
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/pgtable-prot.h |  4 ++--
 arch/arm64/include/asm/pgtable.h      | 16 +++++++++-------
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index dd9ee67d1d87..ef952d69fd04 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -18,14 +18,14 @@
 #define PTE_DIRTY		(_AT(pteval_t, 1) << 55)
 #define PTE_SPECIAL		(_AT(pteval_t, 1) << 56)
 #define PTE_DEVMAP		(_AT(pteval_t, 1) << 57)
-#define PTE_PROT_NONE		(_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
+#define PTE_PROT_NONE		(PTE_UXN)		 /* Reuse PTE_UXN; only when !PTE_VALID */

 /*
  * This bit indicates that the entry is present i.e. pmd_page()
  * still points to a valid huge page in memory even if the pmd
  * has been invalidated.
  */
-#define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 59) /* only when !PMD_SECT_VALID */
+#define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 60) /* only when !PMD_SECT_VALID */

 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index afdd56d26ad7..23aabff4fa6f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1248,20 +1248,22 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
  * Encode and decode a swap entry:
  *	bits 0-1:	present (must be zero)
  *	bits 2:		remember PG_anon_exclusive
- *	bits 3-7:	swap type
- *	bits 8-57:	swap offset
- *	bit 58:		PTE_PROT_NONE (must be zero)
+ *	bits 4-53:	swap offset
+ *	bit 54:		PTE_PROT_NONE (overlays PTE_UXN) (must be zero)
+ *	bits 55-59:	swap type
+ *	bit 60:		PMD_PRESENT_INVALID (must be zero)
  */
-#define __SWP_TYPE_SHIFT	3
+#define __SWP_TYPE_SHIFT	55
 #define __SWP_TYPE_BITS		5
-#define __SWP_OFFSET_BITS	50
 #define __SWP_TYPE_MASK		((1 << __SWP_TYPE_BITS) - 1)
-#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+#define __SWP_OFFSET_SHIFT	4
+#define __SWP_OFFSET_BITS	50
 #define __SWP_OFFSET_MASK	((1UL << __SWP_OFFSET_BITS) - 1)

 #define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
 #define __swp_offset(x)		(((x).val >> __SWP_OFFSET_SHIFT) & __SWP_OFFSET_MASK)
-#define __swp_entry(type,offset) ((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | ((offset) << __SWP_OFFSET_SHIFT) })
+#define __swp_entry(type, offset) ((swp_entry_t) { ((unsigned long)(type) << __SWP_TYPE_SHIFT) | \
+						   ((unsigned long)(offset) << __SWP_OFFSET_SHIFT) })

 #define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(swp)	((pte_t) { (swp).val })
--
2.25.1