From: Steven Price <steven.price@arm.com>
To: linux-mm@kvack.org
Cc: Steven Price, Andy Lutomirski, Ard Biesheuvel, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Dave Hansen, Ingo Molnar,
    James Morse, Jérôme Glisse, Peter Zijlstra, Thomas Gleixner,
    Will Deacon, x86@kernel.org, "H. Peter Anvin",
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Mark Rutland, "Liang, Kan"
Subject: [PATCH v4 10/19] mm: pagewalk: Add p4d_entry() and pgd_entry()
Date: Wed, 6 Mar 2019 15:50:22 +0000
Message-Id: <20190306155031.4291-11-steven.price@arm.com>
In-Reply-To: <20190306155031.4291-1-steven.price@arm.com>
References: <20190306155031.4291-1-steven.price@arm.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-kernel@vger.kernel.org

pgd_entry() and pud_entry() were removed by commit 0b1fbfe50006c410
("mm/pagewalk: remove pgd_entry() and pud_entry()") because there were
no users. We're about to add users so reintroduce them, along with
p4d_entry() as we now have 5 levels of tables.

Note that commit a00cc7d9dd93d66a ("mm, x86: add support for PUD-sized
transparent hugepages") already re-added pud_entry() but with different
semantics to the other callbacks. Since there have never been upstream
users of this, revert the semantics back to match the other callbacks.
This means pud_entry() is called for all entries, not just transparent
huge pages.

Signed-off-by: Steven Price <steven.price@arm.com>
---
 include/linux/mm.h |  9 ++++++---
 mm/pagewalk.c      | 27 ++++++++++++++++-----------
 2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..1a4b1615d012 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1412,10 +1412,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 
 /**
  * mm_walk - callbacks for walk_page_range
+ * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
+ * @p4d_entry: if set, called for each non-empty P4D (1st-level) entry
  * @pud_entry: if set, called for each non-empty PUD (2nd-level) entry
- *	       this handler should only handle pud_trans_huge() puds.
- *	       the pmd_entry or pte_entry callbacks will be used for
- *	       regular PUDs.
  * @pmd_entry: if set, called for each non-empty PMD (3rd-level) entry
  *	       this handler is required to be able to handle
  *	       pmd_trans_huge() pmds. They may simply choose to
@@ -1435,6 +1434,10 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
  * (see the comment on walk_page_range() for more details)
  */
 struct mm_walk {
+	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
+			 unsigned long next, struct mm_walk *walk);
+	int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
+			 unsigned long next, struct mm_walk *walk);
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
 			 unsigned long next, struct mm_walk *walk);
 	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c3084ff2569d..98373a9f88b8 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -90,15 +90,9 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
 		}
 
 		if (walk->pud_entry) {
-			spinlock_t *ptl = pud_trans_huge_lock(pud, walk->vma);
-
-			if (ptl) {
-				err = walk->pud_entry(pud, addr, next, walk);
-				spin_unlock(ptl);
-				if (err)
-					break;
-				continue;
-			}
+			err = walk->pud_entry(pud, addr, next, walk);
+			if (err)
+				break;
 		}
 
 		split_huge_pud(walk->vma, pud, addr);
@@ -131,7 +125,12 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pmd_entry || walk->pte_entry)
+		if (walk->p4d_entry) {
+			err = walk->p4d_entry(p4d, addr, next, walk);
+			if (err)
+				break;
+		}
+		if (walk->pud_entry || walk->pmd_entry || walk->pte_entry)
 			err = walk_pud_range(p4d, addr, next, walk);
 		if (err)
 			break;
@@ -157,7 +156,13 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
 				break;
 			continue;
 		}
-		if (walk->pmd_entry || walk->pte_entry)
+		if (walk->pgd_entry) {
+			err = walk->pgd_entry(pgd, addr, next, walk);
+			if (err)
+				break;
+		}
+		if (walk->p4d_entry || walk->pud_entry || walk->pmd_entry ||
+		    walk->pte_entry)
 			err = walk_p4d_range(pgd, addr, next, walk);
 		if (err)
 			break;
-- 
2.20.1