Date: Wed, 8 May 2024 15:13:48 +0530
Subject: Re: [PATCH v4 1/4] arm64/mm: generalize PMD_PRESENT_INVALID for all levels
From: Anshuman Khandual
To: Ryan Roberts, Catalin Marinas, Will Deacon, Joey Gouly, Ard Biesheuvel, Mark Rutland, David Hildenbrand, Peter Xu, Mike Rapoport, Shivansh Vij
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20240503144604.151095-1-ryan.roberts@arm.com> <20240503144604.151095-2-ryan.roberts@arm.com>
In-Reply-To:
<20240503144604.151095-2-ryan.roberts@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 5/3/24 20:15, Ryan Roberts wrote:
> As preparation for the next patch, which frees up the PTE_PROT_NONE
> present pte and swap pte bit, generalize PMD_PRESENT_INVALID to
> PTE_PRESENT_INVALID. This will then be used to mark PROT_NONE ptes (and
> entries at any other level) in the next patch.
>
> While we're at it, fix up the swap pte format comment to include
> PTE_PRESENT_INVALID. This is not new, it just wasn't previously
> documented.
>
> Reviewed-by: Catalin Marinas
> Signed-off-by: Ryan Roberts

Reviewed-by: Anshuman Khandual

> ---
>  arch/arm64/include/asm/pgtable-prot.h |  8 ++++----
>  arch/arm64/include/asm/pgtable.h      | 21 ++++++++++++---------
>  2 files changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index dd9ee67d1d87..cdbf51eef7a6 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -21,11 +21,11 @@
>  #define PTE_PROT_NONE		(_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
>
>  /*
> - * This bit indicates that the entry is present i.e. pmd_page()
> - * still points to a valid huge page in memory even if the pmd
> - * has been invalidated.
> + * PTE_PRESENT_INVALID=1 & PTE_VALID=0 indicates that the pte's fields should be
> + * interpreted according to the HW layout by SW but any attempted HW access to
> + * the address will result in a fault. pte_present() returns true.
>   */
> -#define PMD_PRESENT_INVALID	(_AT(pteval_t, 1) << 59) /* only when !PMD_SECT_VALID */
> +#define PTE_PRESENT_INVALID	(_AT(pteval_t, 1) << 59) /* only when !PTE_VALID */
>
>  #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
>  #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index afdd56d26ad7..7156c940ac4f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -132,6 +132,8 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>  #define pte_dirty(pte)		(pte_sw_dirty(pte) || pte_hw_dirty(pte))
>
>  #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
> +#define pte_present_invalid(pte) \
> +	((pte_val(pte) & (PTE_VALID | PTE_PRESENT_INVALID)) == PTE_PRESENT_INVALID)
>  /*
>   * Execute-only user mappings do not have the PTE_USER bit set. All valid
>   * kernel mappings have the PTE_UXN bit set.
> @@ -261,6 +263,13 @@ static inline pte_t pte_mkpresent(pte_t pte)
>  	return set_pte_bit(pte, __pgprot(PTE_VALID));
>  }
>
> +static inline pte_t pte_mkinvalid(pte_t pte)
> +{
> +	pte = set_pte_bit(pte, __pgprot(PTE_PRESENT_INVALID));
> +	pte = clear_pte_bit(pte, __pgprot(PTE_VALID));
> +	return pte;
> +}
> +
>  static inline pmd_t pmd_mkcont(pmd_t pmd)
>  {
>  	return __pmd(pmd_val(pmd) | PMD_SECT_CONT);
> @@ -478,7 +487,7 @@ static inline int pmd_protnone(pmd_t pmd)
>  }
>  #endif
>
> -#define pmd_present_invalid(pmd) (!!(pmd_val(pmd) & PMD_PRESENT_INVALID))
> +#define pmd_present_invalid(pmd) pte_present_invalid(pmd_pte(pmd))
>
>  static inline int pmd_present(pmd_t pmd)
>  {
> @@ -508,14 +517,7 @@ static inline int pmd_trans_huge(pmd_t pmd)
>  #define pmd_mkclean(pmd)	pte_pmd(pte_mkclean(pmd_pte(pmd)))
>  #define pmd_mkdirty(pmd)	pte_pmd(pte_mkdirty(pmd_pte(pmd)))
>  #define pmd_mkyoung(pmd)	pte_pmd(pte_mkyoung(pmd_pte(pmd)))
> -
> -static inline pmd_t pmd_mkinvalid(pmd_t pmd)
> -{
> -	pmd = set_pmd_bit(pmd, __pgprot(PMD_PRESENT_INVALID));
> -	pmd = clear_pmd_bit(pmd, __pgprot(PMD_SECT_VALID));
> -
> -	return pmd;
> -}
> +#define pmd_mkinvalid(pmd)	pte_pmd(pte_mkinvalid(pmd_pte(pmd)))
>
>  #define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
>
> @@ -1251,6 +1253,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>   * bits 3-7:	swap type
>   * bits 8-57:	swap offset
>   * bit 58:	PTE_PROT_NONE (must be zero)
> + * bit 59:	PTE_PRESENT_INVALID (must be zero)
>   */
>  #define __SWP_TYPE_SHIFT	3
>  #define __SWP_TYPE_BITS		5