From: Steven Price
To: linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann, Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar, James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner, Will Deacon, x86@kernel.org, "H. Peter Anvin", linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Mark Rutland, "Liang, Kan"
Subject: [PATCH v6 08/19] x86: mm: Add p?d_large() definitions
Date: Tue, 26 Mar 2019 16:26:13 +0000
Message-Id: <20190326162624.20736-9-steven.price@arm.com>
In-Reply-To: <20190326162624.20736-1-steven.price@arm.com>
References: <20190326162624.20736-1-steven.price@arm.com>

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For x86 we already have static inline functions, so simply add #defines
to prevent the generic versions (added in a later patch) from being
picked up. We also need to add corresponding #undefs in
dump_pagetables.c. This code will be removed when x86 is switched over
to using the generic pagewalk code in a later patch.
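[Not part of the patch: a standalone sketch of the override-detection idiom the
message describes. The `#define pmd_large pmd_large` marker lets a generic
header's `#ifndef` check see that the architecture already provides an
implementation. Types and values below are simplified stand-ins, not the real
kernel definitions.]

#include <stdio.h>

/* "Architecture" side: a real implementation, plus a same-named marker
 * #define so a generic header can detect its presence. */
typedef struct { unsigned long val; } pmd_t;
#define _PAGE_PSE 0x080UL

#define pmd_large pmd_large
static inline int pmd_large(pmd_t pmd)
{
	return (pmd.val & _PAGE_PSE) != 0;
}

/* "Generic" side (what the later patch in this series adds): only
 * supplies a fallback when the macro above was not defined. */
#ifndef pmd_large
static inline int pmd_large(pmd_t pmd)
{
	return 0; /* generic fallback: never a leaf at this level */
}
#endif

int main(void)
{
	pmd_t huge = { _PAGE_PSE }, table = { 0 };
	printf("%d %d\n", pmd_large(huge), pmd_large(table));
	return 0;
}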
Signed-off-by: Steven Price
---
 arch/x86/include/asm/pgtable.h | 5 +++++
 arch/x86/mm/dump_pagetables.c  | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2779ace16d23..0dd04cf6ebeb 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -222,6 +222,7 @@ static inline unsigned long pgd_pfn(pgd_t pgd)
 	return (pgd_val(pgd) & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
+#define p4d_large p4d_large
 static inline int p4d_large(p4d_t p4d)
 {
 	/* No 512 GiB pages yet */
@@ -230,6 +231,7 @@ static inline int p4d_large(p4d_t p4d)
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 
+#define pmd_large pmd_large
 static inline int pmd_large(pmd_t pte)
 {
 	return pmd_flags(pte) & _PAGE_PSE;
@@ -857,6 +859,7 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
 	return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
 }
 
+#define pud_large pud_large
 static inline int pud_large(pud_t pud)
 {
 	return (pud_val(pud) & (_PAGE_PSE | _PAGE_PRESENT)) ==
@@ -868,6 +871,7 @@ static inline int pud_bad(pud_t pud)
 	return (pud_flags(pud) & ~(_KERNPG_TABLE | _PAGE_USER)) != 0;
 }
 #else
+#define pud_large pud_large
 static inline int pud_large(pud_t pud)
 {
 	return 0;
@@ -1213,6 +1217,7 @@ static inline bool pgdp_maps_userspace(void *__ptr)
 	return (((ptr & ~PAGE_MASK) / sizeof(pgd_t)) < PGD_KERNEL_START);
 }
 
+#define pgd_large pgd_large
 static inline int pgd_large(pgd_t pgd) { return 0; }
 
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index ee8f8ab46941..ca270fb00805 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -432,6 +432,7 @@ static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
 #else
 #define walk_pmd_level(m,s,a,e,p) walk_pte_level(m,s,__pmd(pud_val(a)),e,p)
+#undef pud_large
 #define pud_large(a) pmd_large(__pmd(pud_val(a)))
 #define pud_none(a)  pmd_none(__pmd(pud_val(a)))
 #endif
@@ -467,6 +468,7 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
 #else
 #define walk_pud_level(m,s,a,e,p) walk_pmd_level(m,s,__pud(p4d_val(a)),e,p)
+#undef p4d_large
 #define p4d_large(a) pud_large(__pud(p4d_val(a)))
 #define p4d_none(a)  pud_none(__pud(p4d_val(a)))
 #endif
@@ -501,6 +503,7 @@ static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
 	}
 }
 
+#undef pgd_large
 #define pgd_large(a) (pgtable_l5_enabled() ? pgd_large(a) : p4d_large(__p4d(pgd_val(a))))
 #define pgd_none(a)  (pgtable_l5_enabled() ? pgd_none(a) : p4d_none(__p4d(pgd_val(a))))
-- 
2.20.1
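[Not part of the patch: a standalone sketch of the subtlety in the final hunk.
`pgd_large` there is a function-like macro that calls the function of the same
name; C's self-referential macro rule means the name is not re-expanded inside
its own expansion, so there is no infinite recursion. The functions and the
`l5_enabled` flag below are simplified stand-ins for pgtable_l5_enabled() and
the real kernel helpers.]

#include <stdio.h>

static int l5_enabled = 0;

static int pgd_large(unsigned long v) { return (int)(v >> 63); }
static int p4d_large(unsigned long v) { return (v & 0x80UL) != 0; }

/* Same-named function-like macro: on the true branch, pgd_large(a)
 * refers to the real function, because a macro is not expanded
 * recursively within its own replacement text. */
#define pgd_large(a) (l5_enabled ? pgd_large(a) : p4d_large(a))

int main(void)
{
	/* 5-level paging "off": the entry is reinterpreted one level down */
	printf("%d\n", pgd_large(0x80UL));
	/* 5-level paging "on": the real pgd_large() function runs */
	l5_enabled = 1;
	printf("%d\n", pgd_large(0x80UL));
	return 0;
}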