From:   guoren@kernel.org
To:     guoren@kernel.org, anup.patel@wdc.com, palmerdabbelt@google.com,
        arnd@arndb.de, wens@csie.org, maxime@cerno.tech, drew@beagleboard.org,
        liush@allwinnertech.com, lazyparser@gmail.com,
        wefu@redhat.com
Cc:     linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
        linux-arch@vger.kernel.org, linux-sunxi@lists.linux.dev,
        Guo Ren, Christoph Hellwig
Subject: [RFC PATCH v2 04/11] riscv: pgtable: Fixup _PAGE_CHG_MASK usage
Date:   Sun, 6 Jun 2021 09:04:02 +0000
Message-Id: <1622970249-50770-8-git-send-email-guoren@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1622970249-50770-1-git-send-email-guoren@kernel.org>
References: <1622970249-50770-1-git-send-email-guoren@kernel.org>

From: Guo Ren

We should mask all attribute bits first, and then use '>> _PAGE_PFN_SHIFT'
to get the final PFN value. Adding '& _PAGE_CHG_MASK' makes the code
semantics more accurate.

Signed-off-by: Guo Ren
Signed-off-by: Liu Shaohua
Cc: Anup Patel
Cc: Arnd Bergmann
Cc: Chen-Yu Tsai
Cc: Christoph Hellwig
Cc: Drew Fustini
Cc: Maxime Ripard
Cc: Palmer Dabbelt
Cc: Wei Fu
Cc: Wei Wu
---
 arch/riscv/include/asm/pgtable-64.h | 8 +++++---
 arch/riscv/include/asm/pgtable.h    | 6 +++---
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/include/asm/pgtable-64.h b/arch/riscv/include/asm/pgtable-64.h
index f3b0da6..cbf9acf 100644
--- a/arch/riscv/include/asm/pgtable-64.h
+++ b/arch/riscv/include/asm/pgtable-64.h
@@ -62,12 +62,14 @@ static inline void pud_clear(pud_t *pudp)
 
 static inline unsigned long pud_page_vaddr(pud_t pud)
 {
-	return (unsigned long)pfn_to_virt(pud_val(pud) >> _PAGE_PFN_SHIFT);
+	return (unsigned long)pfn_to_virt(
+		(pud_val(pud) & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT);
 }
 
 static inline struct page *pud_page(pud_t pud)
 {
-	return pfn_to_page(pud_val(pud) >> _PAGE_PFN_SHIFT);
+	return pfn_to_page(
+		(pud_val(pud) & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT);
 }
 
 static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
@@ -77,7 +79,7 @@ static inline pmd_t pfn_pmd(unsigned long pfn, pgprot_t prot)
 
 static inline unsigned long _pmd_pfn(pmd_t pmd)
 {
-	return pmd_val(pmd) >> _PAGE_PFN_SHIFT;
+	return (pmd_val(pmd) & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT;
 }
 
 #define pmd_ERROR(e) \
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 346a3c6..13a79643 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -217,12 +217,12 @@ static inline unsigned long _pgd_pfn(pgd_t pgd)
 
 static inline struct page *pmd_page(pmd_t pmd)
 {
-	return pfn_to_page(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+	return pfn_to_page((pmd_val(pmd) & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT);
 }
 
 static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 {
-	return (unsigned long)pfn_to_virt(pmd_val(pmd) >> _PAGE_PFN_SHIFT);
+	return (unsigned long)pfn_to_virt((pmd_val(pmd) & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT);
 }
 
 static inline pte_t pmd_pte(pmd_t pmd)
@@ -233,7 +233,7 @@ static inline pte_t pmd_pte(pmd_t pmd)
 /* Yields the page frame number (PFN) of a page table entry */
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return (pte_val(pte) >> _PAGE_PFN_SHIFT);
+	return ((pte_val(pte) & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT);
 }
 
 #define pte_page(x)	pfn_to_page(pte_pfn(x))
--
2.7.4
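
For readers unfamiliar with the PFN encoding, here is a minimal standalone
sketch (not part of the patch) of what the change does. The bit positions,
the _PAGE_CHG_MASK value, and the pte_pfn_old()/pte_pfn_new() helpers below
are assumptions invented for the example, not the real definitions from
arch/riscv/include/asm; the point is only that masking the attribute bits
before the '>> _PAGE_PFN_SHIFT' shift keeps any non-PFN bits, such as high
attribute bits, out of the returned PFN.

/* Illustrative sketch only; constants below are assumed, not the kernel's. */
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PFN_SHIFT	10			/* assumed: PFN field starts at bit 10 */
#define _PAGE_CHG_MASK	0x003ffffffffffc00ULL	/* assumed: keep only the PFN field  */

typedef uint64_t pte_val_t;

/* Old behaviour: shift only, so attribute bits above the PFN field
 * leak into the returned PFN. */
static uint64_t pte_pfn_old(pte_val_t pte)
{
	return pte >> _PAGE_PFN_SHIFT;
}

/* New behaviour: mask the attribute bits first, then shift. */
static uint64_t pte_pfn_new(pte_val_t pte)
{
	return (pte & _PAGE_CHG_MASK) >> _PAGE_PFN_SHIFT;
}

int main(void)
{
	/* A PTE with PFN 0x1234, some low permission bits, and a
	 * hypothetical attribute bit in the high part of the entry. */
	pte_val_t pte = ((pte_val_t)0x1234 << _PAGE_PFN_SHIFT)
			| 0xef			/* low attribute bits  */
			| (1ULL << 63);		/* high attribute bit  */

	printf("old: %#llx\n", (unsigned long long)pte_pfn_old(pte));
	printf("new: %#llx\n", (unsigned long long)pte_pfn_new(pte));
	return 0;
}

With this assumed layout, pte_pfn_old() returns 0x20000000001234 because the
high attribute bit survives the shift, while pte_pfn_new() returns the
intended 0x1234.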