From: Steven Price
To: Andrew Morton, linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
    James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner,
    Will Deacon, x86@kernel.org, "H. Peter Anvin",
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Mark Rutland, "Liang, Kan"
Subject: [PATCH v15 12/23] mm: pagewalk: Allow walking without vma
Date: Fri, 1 Nov 2019 14:09:31 +0000
Message-Id: <20191101140942.51554-13-steven.price@arm.com>
In-Reply-To: <20191101140942.51554-1-steven.price@arm.com>
References: <20191101140942.51554-1-steven.price@arm.com>

Since 48684a65b4e3: "mm: pagewalk: fix misbehavior of walk_page_range
for vma(VM_PFNMAP)", walk_page_range() will report any kernel area as a
hole, because it lacks a vma. This means each arch has re-implemented
page table walking when needed, for example in the per-arch ptdump
walker.

Remove the requirement to have a vma in the generic code and add a new
function walk_page_range_novma() which ignores the VMAs and simply walks
the page tables.
Signed-off-by: Steven Price
---
 include/linux/pagewalk.h |  5 +++++
 mm/pagewalk.c            | 44 ++++++++++++++++++++++++++++++++--------
 2 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 12004b097eae..ed2bb399fac2 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -53,6 +53,7 @@ struct mm_walk_ops {
  * @ops:	operation to call during the walk
  * @mm:		mm_struct representing the target process of page table walk
  * @vma:	vma currently walked (NULL if walking outside vmas)
+ * @no_vma:	walk ignoring vmas (vma will always be NULL)
  * @private:	private data for callbacks' usage
  *
  * (see the comment on walk_page_range() for more details)
@@ -61,12 +62,16 @@ struct mm_walk {
 	const struct mm_walk_ops *ops;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
+	bool no_vma;
 	void *private;
 };
 
 int walk_page_range(struct mm_struct *mm, unsigned long start,
 		unsigned long end, const struct mm_walk_ops *ops,
 		void *private);
+int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+			  unsigned long end, const struct mm_walk_ops *ops,
+			  void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private);
 
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index fc4d98a3a5a0..626e7fdb0508 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -38,7 +38,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pmd_addr_end(addr, end);
-		if (pmd_none(*pmd) || !walk->vma) {
+		if (pmd_none(*pmd) || (!walk->vma && !walk->no_vma)) {
 			if (ops->pte_hole)
 				err = ops->pte_hole(addr, next, walk);
 			if (err)
@@ -61,9 +61,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 		if (!ops->pte_entry)
 			continue;
 
-		split_huge_pmd(walk->vma, pmd, addr);
-		if (pmd_trans_unstable(pmd))
-			goto again;
+		if (walk->vma) {
+			split_huge_pmd(walk->vma, pmd, addr);
+			if (pmd_trans_unstable(pmd))
+				goto again;
+		} else if (pmd_leaf(*pmd)) {
+			continue;
+		}
+
 		err = walk_pte_range(pmd, addr, next, walk);
 		if (err)
 			break;
@@ -84,7 +89,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 	do {
 again:
 		next = pud_addr_end(addr, end);
-		if (pud_none(*pud) || !walk->vma) {
+		if (pud_none(*pud) || (!walk->vma && !walk->no_vma)) {
 			if (ops->pte_hole)
 				err = ops->pte_hole(addr, next, walk);
 			if (err)
@@ -98,9 +103,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 			break;
 		}
 
-		split_huge_pud(walk->vma, pud, addr);
-		if (pud_none(*pud))
-			goto again;
+		if (walk->vma) {
+			split_huge_pud(walk->vma, pud, addr);
+			if (pud_none(*pud))
+				goto again;
+		} else if (pud_leaf(*pud)) {
+			continue;
+		}
 
 		if (ops->pmd_entry || ops->pte_entry)
 			err = walk_pmd_range(pud, addr, next, walk);
@@ -358,6 +367,25 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 	return err;
 }
 
+int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
+			  unsigned long end, const struct mm_walk_ops *ops,
+			  void *private)
+{
+	struct mm_walk walk = {
+		.ops = ops,
+		.mm = mm,
+		.private = private,
+		.no_vma = true
+	};
+
+	if (start >= end || !walk.mm)
+		return -EINVAL;
+
+	lockdep_assert_held(&walk.mm->mmap_sem);
+
+	return __walk_page_range(start, end, &walk);
+}
+
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private)
 {
-- 
2.20.1