Subject: Re: [PATCH v6 04/19] powerpc: mm: Add p?d_large() definitions
To: Christophe Leroy, linux-mm@kvack.org
Cc: Mark Rutland, x86@kernel.org, James Morse, Arnd Bergmann, Ard Biesheuvel, Peter Zijlstra, Catalin Marinas, Dave Hansen, Will Deacon, linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org, Jérôme Glisse, Ingo Molnar, Paul Mackerras, Andy Lutomirski, "H. Peter Anvin", Borislav Petkov, Thomas Gleixner, linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org, "Liang, Kan"
References: <20190326162624.20736-1-steven.price@arm.com> <20190326162624.20736-5-steven.price@arm.com> <8a2efe07-b99f-3caa-fab9-47e49043bf66@c-s.fr>
From: Steven Price
Message-ID: <2b7d32ce-f258-1b34-1dbf-3a05ea9a0f6b@arm.com>
Date: Thu, 28 Mar 2019 11:00:42 +0000
In-Reply-To: <8a2efe07-b99f-3caa-fab9-47e49043bf66@c-s.fr>
X-Mailing-List: linux-kernel@vger.kernel.org

On 26/03/2019 16:58, Christophe Leroy wrote:
>
>
> On 26/03/2019 at 17:26, Steven Price wrote:
>> walk_page_range() is going to be allowed to walk page tables other than
>> those of user space. For this it needs to know when it has reached a
>> 'leaf' entry in the page tables. This information is provided by the
>> p?d_large() functions/macros.
>>
>> For powerpc, pmd_large() was already implemented, so hoist it out of the
>> CONFIG_TRANSPARENT_HUGEPAGE condition and implement the other levels.
>>
>> Also, since we now have pmd_large() always implemented, we can drop the
>> pmd_is_leaf() function.
>
> Wouldn't it be better to drop pmd_is_leaf() in a second patch?

Fair point, I'll split this patch.
Thanks for the review,

Steve

> Christophe
>
>>
>> CC: Benjamin Herrenschmidt
>> CC: Paul Mackerras
>> CC: Michael Ellerman
>> CC: linuxppc-dev@lists.ozlabs.org
>> CC: kvm-ppc@vger.kernel.org
>> Signed-off-by: Steven Price
>> ---
>>   arch/powerpc/include/asm/book3s/64/pgtable.h | 30 ++++++++++++++------
>>   arch/powerpc/kvm/book3s_64_mmu_radix.c       | 12 ++------
>>   2 files changed, 24 insertions(+), 18 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index 581f91be9dd4..f6d1ac8b832e 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -897,6 +897,12 @@ static inline int pud_present(pud_t pud)
>>       return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT));
>>   }
>>
>> +#define pud_large    pud_large
>> +static inline int pud_large(pud_t pud)
>> +{
>> +    return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
>> +}
>> +
>>   extern struct page *pud_page(pud_t pud);
>>   extern struct page *pmd_page(pmd_t pmd);
>>   static inline pte_t pud_pte(pud_t pud)
>> @@ -940,6 +946,12 @@ static inline int pgd_present(pgd_t pgd)
>>       return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT));
>>   }
>>
>> +#define pgd_large    pgd_large
>> +static inline int pgd_large(pgd_t pgd)
>> +{
>> +    return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
>> +}
>> +
>>   static inline pte_t pgd_pte(pgd_t pgd)
>>   {
>>       return __pte_raw(pgd_raw(pgd));
>> @@ -1093,6 +1105,15 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
>>       return pte_access_permitted(pmd_pte(pmd), write);
>>   }
>>
>> +#define pmd_large    pmd_large
>> +/*
>> + * returns true for pmd migration entries, THP, devmap, hugetlb
>> + */
>> +static inline int pmd_large(pmd_t pmd)
>> +{
>> +    return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
>> +}
>> +
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
>>   extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
>> @@ -1119,15 +1140,6 @@ pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
>>       return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
>>   }
>>
>> -/*
>> - * returns true for pmd migration entries, THP, devmap, hugetlb
>> - * But compile time dependent on THP config
>> - */
>> -static inline int pmd_large(pmd_t pmd)
>> -{
>> -    return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
>> -}
>> -
>>   static inline pmd_t pmd_mknotpresent(pmd_t pmd)
>>   {
>>       return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> index f55ef071883f..1b57b4e3f819 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> @@ -363,12 +363,6 @@ static void kvmppc_pte_free(pte_t *ptep)
>>       kmem_cache_free(kvm_pte_cache, ptep);
>>   }
>>
>> -/* Like pmd_huge() and pmd_large(), but works regardless of config options */
>> -static inline int pmd_is_leaf(pmd_t pmd)
>> -{
>> -    return !!(pmd_val(pmd) & _PAGE_PTE);
>> -}
>> -
>>   static pmd_t *kvmppc_pmd_alloc(void)
>>   {
>>       return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
>> @@ -460,7 +454,7 @@ static void kvmppc_unmap_free_pmd(struct kvm *kvm, pmd_t *pmd, bool full,
>>       for (im = 0; im < PTRS_PER_PMD; ++im, ++p) {
>>           if (!pmd_present(*p))
>>               continue;
>> -        if (pmd_is_leaf(*p)) {
>> +        if (pmd_large(*p)) {
>>               if (full) {
>>                   pmd_clear(p);
>>               } else {
>> @@ -593,7 +587,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
>>       else if (level <= 1)
>>           new_pmd = kvmppc_pmd_alloc();
>>
>> -    if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_is_leaf(*pmd)))
>> +    if (level == 0 && !(pmd && pmd_present(*pmd) && !pmd_large(*pmd)))
>>           new_ptep = kvmppc_pte_alloc();
>>
>>       /* Check if we might have been invalidated; let the guest retry if so */
>> @@ -662,7 +656,7 @@ int kvmppc_create_pte(struct kvm *kvm, pgd_t *pgtable, pte_t pte,
>>           new_pmd = NULL;
>>       }
>>       pmd = pmd_offset(pud, gpa);
>> -    if (pmd_is_leaf(*pmd)) {
>> +    if (pmd_large(*pmd)) {
>>           unsigned long lgpa = gpa & PMD_MASK;
>>
>>           /* Check if we raced and someone else has set the same thing */
>>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel