Subject: Re: [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
To: Christophe Leroy, linux-mm@kvack.org
Cc: christophe.leroy@c-s.fr, ziy@nvidia.com, gerald.schaefer@de.ibm.com,
 Andrew Morton, Mike Rapoport, Vineet Gupta, Catalin Marinas,
 Will Deacon, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
 Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, "H. Peter Anvin", "Kirill A. Shutemov",
Shutemov" , Paul Walmsley , Palmer Dabbelt , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-riscv@lists.infradead.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org References: <1592192277-8421-1-git-send-email-anshuman.khandual@arm.com> <1592192277-8421-3-git-send-email-anshuman.khandual@arm.com> <6da177e6-9219-9ccf-a402-f4293c7564f7@csgroup.eu> From: Anshuman Khandual Message-ID: Date: Mon, 29 Jun 2020 13:39:28 +0530 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1 MIME-Version: 1.0 In-Reply-To: <6da177e6-9219-9ccf-a402-f4293c7564f7@csgroup.eu> Content-Type: text/plain; charset=utf-8 Content-Language: en-US Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 06/27/2020 12:48 PM, Christophe Leroy wrote: > Le 15/06/2020 à 05:37, Anshuman Khandual a écrit : >> This adds new tests validating for these following arch advanced page table >> helpers. These tests create and test specific mapping types at various page >> table levels. >> >> 1. pxxp_set_wrprotect() >> 2. pxxp_get_and_clear() >> 3. pxxp_set_access_flags() >> 4. pxxp_get_and_clear_full() >> 5. pxxp_test_and_clear_young() >> 6. pxx_leaf() >> 7. pxx_set_huge() >> 8. pxx_(clear|mk)_savedwrite() >> 9. huge_pxxp_xxx() >> >> Cc: Andrew Morton >> Cc: Mike Rapoport >> Cc: Vineet Gupta >> Cc: Catalin Marinas >> Cc: Will Deacon >> Cc: Benjamin Herrenschmidt >> Cc: Paul Mackerras >> Cc: Michael Ellerman >> Cc: Heiko Carstens >> Cc: Vasily Gorbik >> Cc: Christian Borntraeger >> Cc: Thomas Gleixner >> Cc: Ingo Molnar >> Cc: Borislav Petkov >> Cc: "H. Peter Anvin" >> Cc: Kirill A. 
>> Cc: Paul Walmsley
>> Cc: Palmer Dabbelt
>> Cc: linux-snps-arc@lists.infradead.org
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: linux-s390@vger.kernel.org
>> Cc: linux-riscv@lists.infradead.org
>> Cc: x86@kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: linux-arch@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Suggested-by: Catalin Marinas
>> Signed-off-by: Anshuman Khandual
>> ---
>>   mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 306 insertions(+)
>>
>> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
>> index ffa163d4c63c..e3f9f8317a98 100644
>> --- a/mm/debug_vm_pgtable.c
>> +++ b/mm/debug_vm_pgtable.c
>> @@ -21,6 +21,7 @@
>>   #include
>>   #include
>>   #include
>> +#include
>>   #include
>>   #include
>>   #include
>> @@ -28,6 +29,7 @@
>>   #include
>>   #include
>>   #include
>> +#include
>>
>>   #define VMFLAGS    (VM_READ|VM_WRITE|VM_EXEC)
>>
>> @@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte))));
>>   }
>>
>> +static void __init pte_advanced_tests(struct mm_struct *mm,
>> +            struct vm_area_struct *vma, pte_t *ptep,
>> +            unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_set_wrprotect(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>
> same
>
>> +    WARN_ON(pte_write(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>
> same
>
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    pte = pte_wrprotect(pte);
>> +    pte = pte_mkclean(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    pte = pte_mkwrite(pte);
>> +    pte = pte_mkdirty(pte);
>> +    ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
>> +    pte = READ_ONCE(*ptep);
>
> same
>
>> +    WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear_full(mm, vaddr, ptep, 1);
>> +    pte = READ_ONCE(*ptep);
>
> same
>
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pte_mkyoung(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_test_and_clear_young(vma, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>
> same
>
>> +    WARN_ON(pte_young(pte));
>> +}
>> +
>> +static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
>> +    WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
>> +}
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
>>   }
>>
>> +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PMD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_set_wrprotect(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_write(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    pmd = pmd_wrprotect(pmd);
>> +    pmd = pmd_mkclean(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmd = pmd_mkwrite(pmd);
>> +    pmd = pmd_mkdirty(pmd);
>> +    pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
>> +
>> +    pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pmd_mkyoung(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_test_and_clear_young(vma, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_young(pmd));
>> +}
>> +
>> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    /*
>> +     * PMD based THP is a leaf entry.
>> +     */
>> +    pmd = pmd_mkhuge(pmd);
>> +    WARN_ON(!pmd_leaf(pmd));
>> +}
>> +
>> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pmd_set_huge() verifies that the given
>> +     * PMD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pmdp, __pmd(0));
>> +    WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pmd_clear_huge(pmdp));
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +}
>> +
>> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
>> +    WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
>> +}
>> +
>>   #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>        */
>>       WARN_ON(!pud_bad(pud_mkhuge(pud)));
>>   }
>> +
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PUD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
>> +
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_set_wrprotect(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_write(pud));
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +#endif /* __PAGETABLE_PMD_FOLDED */
>> +    pud = pfn_pud(pfn, prot);
>> +    pud = pud_wrprotect(pud);
>> +    pud = pud_mkclean(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pud = pud_mkwrite(pud);
>> +    pud = pud_mkdirty(pud);
>> +    pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
>> +
>> +    pud = pud_mkyoung(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_test_and_clear_young(vma, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_young(pud));
>> +}
>> +
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    /*
>> +     * PUD based THP is a leaf entry.
>> +     */
>> +    pud = pud_mkhuge(pud);
>> +    WARN_ON(!pud_leaf(pud));
>> +}
>> +
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pud_set_huge() verifies that the given
>> +     * PUD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pudp, __pud(0));
>> +    WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pud_clear_huge(pudp));
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +}
>>   #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +}
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>>   #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +}
>> +static void __init pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>> +{
>> +}
>> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot) { }
>>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>
>>   static void __init p4d_basic_tests(unsigned long pfn, pgprot_t prot)
>> @@ -495,8 +731,56 @@ static void __init hugetlb_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(!pte_huge(pte_mkhuge(pte)));
>>   #endif /* CONFIG_ARCH_WANT_GENERAL_HUGETLB */
>>   }
>> +
>> +static void __init hugetlb_advanced_tests(struct mm_struct *mm,
>> +                      struct vm_area_struct *vma,
>> +                      pte_t *ptep, unsigned long pfn,
>> +                      unsigned long vaddr, pgprot_t prot)
>> +{
>> +    struct page *page = pfn_to_page(pfn);
>> +    pte_t pte = READ_ONCE(*ptep);
>
> Replace with ptep_get() to avoid build failure on powerpc 8xx.

Sure, will replace all open PTE pointer accesses with ptep_get().
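The conversion in each flagged spot would look roughly like this (a
sketch of the intent, not the final diff; as I understand it, the
generic ptep_get() is just a READ_ONCE(*ptep) wrapper, and powerpc 8xx
overrides it because a pte_t there can span multiple hardware PTE
entries, which is where the direct dereference breaks the build):

	pte = pfn_pte(pfn, prot);
	set_pte_at(mm, vaddr, ptep, pte);
	ptep_set_wrprotect(mm, vaddr, ptep);
	/* Fetch the entry via the helper instead of dereferencing ptep */
	pte = ptep_get(ptep);
	WARN_ON(pte_write(pte));

Same treatment for the hugetlb_advanced_tests() access above and the
other PTE accesses marked "same" earlier in the thread.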