Subject: Re: [PATCH v8 00/20] Convert x86 & arm64 to use generic page walk
To: linux-mm@kvack.org
Cc: Mark Rutland, x86@kernel.org, Arnd Bergmann, Ard Biesheuvel,
 Peter Zijlstra, Catalin Marinas, Dave Hansen, Will Deacon,
 linux-kernel@vger.kernel.org, Jérôme Glisse, Ingo Molnar,
 Borislav Petkov, Andy Lutomirski, "H. Peter Anvin", James Morse,
 Thomas Gleixner, Andrew Morton, linux-arm-kernel@lists.infradead.org,
 "Liang, Kan"
References: <20190403141627.11664-1-steven.price@arm.com>
From: Steven Price
Message-ID: <4e804c87-1788-8903-ccc9-55953aa6da36@arm.com>
Date: Wed, 10 Apr 2019 15:56:38 +0100
In-Reply-To: <20190403141627.11664-1-steven.price@arm.com>

Hi all,

Gentle ping: who can take this? Is there anything blocking this series?

Thanks,

Steve

On 03/04/2019 15:16, Steven Price wrote:
> Most architectures currently have a debugfs file for dumping the
> kernel page tables, but each architecture has to implement custom
> functions for walking them because the generic walk_page_range()
> function is unable to walk the page tables used by the kernel.
>
> This series extends the capabilities of walk_page_range() so that it
> can deal with the page tables of the kernel (which have no VMAs and
> can contain larger huge pages than exist for user space). x86 and
> arm64 are then converted to make use of walk_page_range(), removing
> the custom page table walkers.
>
> To enable a generic page table walker to walk the unusual mappings of
> the kernel, we need to implement a set of functions which let us know
> when the walker has reached the leaf entry. Since arm, powerpc, s390,
> sparc and x86 all have p?d_large macros, let's standardise on that
> and implement those that are missing.
>
> Potentially future changes could unify the implementations of the
> debugfs walkers further, moving the common functionality into common
> code.
> This would require a common way of handling the effective
> permissions (currently implemented only for x86) along with a
> per-arch way of formatting the page table information for debugfs.
> One immediate benefit would be getting the KASAN speed-up
> optimisation in arm64 (and other arches), which is currently only
> implemented for x86.
>
> Also available as a git tree:
> git://linux-arm.org/linux-sp.git walk_page_range/v8
>
> Changes since v7:
> https://lore.kernel.org/lkml/20190328152104.23106-1-steven.price@arm.com/T/
> * Updated commit message in patch 2 to clarify that we rely on the
>   page tables being walked to be the same page size/depth as the
>   kernel's (since this confused me earlier today).
>
> Changes since v6:
> https://lore.kernel.org/lkml/20190326162624.20736-1-steven.price@arm.com/T/
> * Split the changes for powerpc. pmd_large() is now added in patch 4,
>   and pmd_is_leaf() removed in patch 5.
>
> Changes since v5:
> https://lore.kernel.org/lkml/20190321141953.31960-1-steven.price@arm.com/T/
> * Updated comment for struct mm_walk based on Mike Rapoport's
>   suggestion
>
> Changes since v4:
> https://lore.kernel.org/lkml/20190306155031.4291-1-steven.price@arm.com/T/
> * Correctly force result to a boolean in p?d_large for powerpc.
> * Added Acked-bys
> * Rebased onto v5.1-rc1
>
> Changes since v3:
> https://lore.kernel.org/lkml/20190227170608.27963-1-steven.price@arm.com/T/
> * Restored the generic macros; only implement p?d_large() for
>   architectures that have support for large pages. This also means
>   adding dummy #defines for architectures that define p?d_large as
>   static inline, to avoid picking up the generic macro.
> * Drop the 'depth' argument from pte_hole
> * Because we no longer have the depth for holes, we also drop support
>   in x86 for showing missing pages in debugfs. See discussion below:
>   https://lore.kernel.org/lkml/26df02dd-c54e-ea91-bdd1-0a4aad3a30ac@arm.com/
> * mips: only define p?d_large when _PAGE_HUGE is defined.
>
> Changes since v2:
> https://lore.kernel.org/lkml/20190221113502.54153-1-steven.price@arm.com/T/
> * Rather than attempting to provide generic macros, actually
>   implement p?d_large() for each architecture.
>
> Changes since v1:
> https://lore.kernel.org/lkml/20190215170235.23360-1-steven.price@arm.com/T/
> * Added p4d_large() macro
> * Comments to explain p?d_large() macro semantics
> * Expanded comment for pte_hole() callback to explain mapping between
>   depth and P?D
> * Handle folded page tables at all levels, so depth from pte_hole()
>   ignores folding at any level (see real_depth() function in
>   mm/pagewalk.c)
>
> Steven Price (20):
>   arc: mm: Add p?d_large() definitions
>   arm64: mm: Add p?d_large() definitions
>   mips: mm: Add p?d_large() definitions
>   powerpc: mm: Add p?d_large() definitions
>   KVM: PPC: Book3S HV: Remove pmd_is_leaf()
>   riscv: mm: Add p?d_large() definitions
>   s390: mm: Add p?d_large() definitions
>   sparc: mm: Add p?d_large() definitions
>   x86: mm: Add p?d_large() definitions
>   mm: Add generic p?d_large() macros
>   mm: pagewalk: Add p4d_entry() and pgd_entry()
>   mm: pagewalk: Allow walking without vma
>   mm: pagewalk: Add test_p?d callbacks
>   arm64: mm: Convert mm/dump.c to use walk_page_range()
>   x86: mm: Don't display pages which aren't present in debugfs
>   x86: mm: Point to struct seq_file from struct pg_state
>   x86: mm+efi: Convert ptdump_walk_pgd_level() to take a mm_struct
>   x86: mm: Convert ptdump_walk_pgd_level_debugfs() to take an mm_struct
>   x86: mm: Convert ptdump_walk_pgd_level_core() to take an mm_struct
>   x86: mm: Convert dump_pagetables to use walk_page_range
>
>  arch/arc/include/asm/pgtable.h               |   1 +
>  arch/arm64/include/asm/pgtable.h             |   2 +
>  arch/arm64/mm/dump.c                         | 117 +++----
>  arch/mips/include/asm/pgtable-64.h           |   8 +
>  arch/powerpc/include/asm/book3s/64/pgtable.h |  30 +-
>  arch/powerpc/kvm/book3s_64_mmu_radix.c       |  12 +-
>  arch/riscv/include/asm/pgtable-64.h          |   7 +
>  arch/riscv/include/asm/pgtable.h             |   7 +
>  arch/s390/include/asm/pgtable.h              |   2 +
>  arch/sparc/include/asm/pgtable_64.h          |   2 +
>  arch/x86/include/asm/pgtable.h               |  10 +-
>  arch/x86/mm/debug_pagetables.c               |   8 +-
>  arch/x86/mm/dump_pagetables.c                | 347 ++++++++++---------
>  arch/x86/platform/efi/efi_32.c               |   2 +-
>  arch/x86/platform/efi/efi_64.c               |   4 +-
>  include/asm-generic/pgtable.h                |  19 +
>  include/linux/mm.h                           |  26 +-
>  mm/pagewalk.c                                |  76 +++-
>  18 files changed, 407 insertions(+), 273 deletions(-)
>