Subject: Re: [PATCH V3 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers
To: Christophe Leroy , linux-mm@kvack.org
Cc: ziy@nvidia.com, gerald.schaefer@de.ibm.com, Andrew Morton , Mike Rapoport ,
 Vineet Gupta , Catalin Marinas , Will Deacon , Benjamin Herrenschmidt ,
 Paul Mackerras , Michael Ellerman , Heiko Carstens , Vasily Gorbik ,
 Christian Borntraeger , Thomas Gleixner , Ingo Molnar , Borislav Petkov ,
 "H. Peter Anvin" ,
Shutemov" , Paul Walmsley , Palmer Dabbelt , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-riscv@lists.infradead.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org References: <1592192277-8421-1-git-send-email-anshuman.khandual@arm.com> <1592192277-8421-3-git-send-email-anshuman.khandual@arm.com> <4da41eee-5ce0-2a5e-40eb-4424655b3489@csgroup.eu> From: Anshuman Khandual Message-ID: <1a6138ca-40b0-5076-2f09-4ce6b7ee8d36@arm.com> Date: Mon, 29 Jun 2020 13:45:37 +0530 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1 MIME-Version: 1.0 In-Reply-To: <4da41eee-5ce0-2a5e-40eb-4424655b3489@csgroup.eu> Content-Type: text/plain; charset=utf-8 Content-Language: en-US Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 06/27/2020 12:56 PM, Christophe Leroy wrote: > > > Le 15/06/2020 à 05:37, Anshuman Khandual a écrit : >> This adds new tests validating for these following arch advanced page table >> helpers. These tests create and test specific mapping types at various page >> table levels. >> >> 1. pxxp_set_wrprotect() >> 2. pxxp_get_and_clear() >> 3. pxxp_set_access_flags() >> 4. pxxp_get_and_clear_full() >> 5. pxxp_test_and_clear_young() >> 6. pxx_leaf() >> 7. pxx_set_huge() >> 8. pxx_(clear|mk)_savedwrite() >> 9. huge_pxxp_xxx() >> >> Cc: Andrew Morton >> Cc: Mike Rapoport >> Cc: Vineet Gupta >> Cc: Catalin Marinas >> Cc: Will Deacon >> Cc: Benjamin Herrenschmidt >> Cc: Paul Mackerras >> Cc: Michael Ellerman >> Cc: Heiko Carstens >> Cc: Vasily Gorbik >> Cc: Christian Borntraeger >> Cc: Thomas Gleixner >> Cc: Ingo Molnar >> Cc: Borislav Petkov >> Cc: "H. Peter Anvin" >> Cc: Kirill A. Shutemov >> Cc: Paul Walmsley >> Cc: Palmer Dabbelt >> Cc: linux-snps-arc@lists.infradead.org >> Cc: linux-arm-kernel@lists.infradead.org >> Cc: linuxppc-dev@lists.ozlabs.org >> Cc: linux-s390@vger.kernel.org >> Cc: linux-riscv@lists.infradead.org >> Cc: x86@kernel.org >> Cc: linux-mm@kvack.org >> Cc: linux-arch@vger.kernel.org >> Cc: linux-kernel@vger.kernel.org >> Suggested-by: Catalin Marinas >> Signed-off-by: Anshuman Khandual >> --- >>   mm/debug_vm_pgtable.c | 306 ++++++++++++++++++++++++++++++++++++++++++ >>   1 file changed, 306 insertions(+) >> >> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c >> index ffa163d4c63c..e3f9f8317a98 100644 >> --- a/mm/debug_vm_pgtable.c >> +++ b/mm/debug_vm_pgtable.c >> @@ -21,6 +21,7 @@ >>   #include >>   #include >>   #include >> +#include >>   #include >>   #include >>   #include >> @@ -28,6 +29,7 @@ >>   #include >>   #include >>   #include >> +#include >>     #define VMFLAGS    (VM_READ|VM_WRITE|VM_EXEC) >>   @@ -55,6 +57,54 @@ static void __init pte_basic_tests(unsigned long pfn, pgprot_t prot) >>       WARN_ON(pte_write(pte_wrprotect(pte_mkwrite(pte)))); >>   } >>   +static void __init pte_advanced_tests(struct mm_struct *mm, >> +            struct vm_area_struct *vma, pte_t *ptep, >> +            unsigned long pfn, unsigned long vaddr, pgprot_t prot) > > Align args properly. 
>
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_set_wrprotect(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(pte_write(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear(mm, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    pte = pte_wrprotect(pte);
>> +    pte = pte_mkclean(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    pte = pte_mkwrite(pte);
>> +    pte = pte_mkdirty(pte);
>> +    ptep_set_access_flags(vma, vaddr, ptep, pte, 1);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(!(pte_write(pte) && pte_dirty(pte)));
>> +
>> +    pte = pfn_pte(pfn, prot);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_get_and_clear_full(mm, vaddr, ptep, 1);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(!pte_none(pte));
>> +
>> +    pte = pte_mkyoung(pte);
>> +    set_pte_at(mm, vaddr, ptep, pte);
>> +    ptep_test_and_clear_young(vma, vaddr, ptep);
>> +    pte = READ_ONCE(*ptep);
>> +    WARN_ON(pte_young(pte));
>> +}
>> +
>> +static void __init pte_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pte_t pte = pfn_pte(pfn, prot);
>> +
>> +    WARN_ON(!pte_savedwrite(pte_mk_savedwrite(pte_clear_savedwrite(pte))));
>> +    WARN_ON(pte_savedwrite(pte_clear_savedwrite(pte_mk_savedwrite(pte))));
>> +}
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -77,6 +127,89 @@ static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot)
>>       WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
>>   }
>>
>> +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>
> Align args properly
>
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PMD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE;
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_set_wrprotect(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_write(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear(mm, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pfn_pmd(pfn, prot);
>> +    pmd = pmd_wrprotect(pmd);
>> +    pmd = pmd_mkclean(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmd = pmd_mkwrite(pmd);
>> +    pmd = pmd_mkdirty(pmd);
>> +    pmdp_set_access_flags(vma, vaddr, pmdp, pmd, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!(pmd_write(pmd) && pmd_dirty(pmd)));
>> +
>> +    pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_huge_get_and_clear_full(vma, vaddr, pmdp, 1);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +
>> +    pmd = pmd_mkyoung(pmd);
>> +    set_pmd_at(mm, vaddr, pmdp, pmd);
>> +    pmdp_test_and_clear_young(vma, vaddr, pmdp);
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(pmd_young(pmd));
>> +}
>> +
>> +static void __init pmd_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    /*
>> +     * PMD based THP is a leaf entry.
>> +     */
>> +    pmd = pmd_mkhuge(pmd);
>> +    WARN_ON(!pmd_leaf(pmd));
>> +}
>> +
>> +static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pmd_set_huge() verifies that the given
>> +     * PMD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pmdp, __pmd(0));
>> +    WARN_ON(!pmd_set_huge(pmdp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pmd_clear_huge(pmdp));
>> +    pmd = READ_ONCE(*pmdp);
>> +    WARN_ON(!pmd_none(pmd));
>> +}
>> +
>> +static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pmd_t pmd = pfn_pmd(pfn, prot);
>> +
>> +    WARN_ON(!pmd_savedwrite(pmd_mk_savedwrite(pmd_clear_savedwrite(pmd))));
>> +    WARN_ON(pmd_savedwrite(pmd_clear_savedwrite(pmd_mk_savedwrite(pmd))));
>> +}
>> +
>>   #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>   {
>> @@ -100,12 +233,115 @@ static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot)
>>        */
>>       WARN_ON(!pud_bad(pud_mkhuge(pud)));
>>   }
>> +
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>
> Align args properly
>
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    if (!has_transparent_hugepage())
>> +        return;
>> +
>> +    /* Align the address wrt HPAGE_PUD_SIZE */
>> +    vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE;
>> +
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_set_wrprotect(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_write(pud));
>> +
>> +#ifndef __PAGETABLE_PMD_FOLDED
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear(mm, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +
>> +    pud = pfn_pud(pfn, prot);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_huge_get_and_clear_full(mm, vaddr, pudp, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +#endif /* __PAGETABLE_PMD_FOLDED */
>> +    pud = pfn_pud(pfn, prot);
>> +    pud = pud_wrprotect(pud);
>> +    pud = pud_mkclean(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pud = pud_mkwrite(pud);
>> +    pud = pud_mkdirty(pud);
>> +    pudp_set_access_flags(vma, vaddr, pudp, pud, 1);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!(pud_write(pud) && pud_dirty(pud)));
>> +
>> +    pud = pud_mkyoung(pud);
>> +    set_pud_at(mm, vaddr, pudp, pud);
>> +    pudp_test_and_clear_young(vma, vaddr, pudp);
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(pud_young(pud));
>> +}
>> +
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud = pfn_pud(pfn, prot);
>> +
>> +    /*
>> +     * PUD based THP is a leaf entry.
>> +     */
>> +    pud = pud_mkhuge(pud);
>> +    WARN_ON(!pud_leaf(pud));
>> +}
>> +
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +    pud_t pud;
>> +
>> +    if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
>> +        return;
>> +    /*
>> +     * X86 defined pud_set_huge() verifies that the given
>> +     * PUD is not a populated non-leaf entry.
>> +     */
>> +    WRITE_ONCE(*pudp, __pud(0));
>> +    WARN_ON(!pud_set_huge(pudp, __pfn_to_phys(pfn), prot));
>> +    WARN_ON(!pud_clear_huge(pudp));
>> +    pud = READ_ONCE(*pudp);
>> +    WARN_ON(!pud_none(pud));
>> +}
>>   #else  /* !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>
> Align args properly
>
>> +{
>> +}
>> +static void __init pud_leaf_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pud_huge_tests(pud_t *pudp, unsigned long pfn, pgprot_t prot)
>> +{
>> +}
>>   #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>   #else  /* !CONFIG_TRANSPARENT_HUGEPAGE */
>>   static void __init pmd_basic_tests(unsigned long pfn, pgprot_t prot) { }
>>   static void __init pud_basic_tests(unsigned long pfn, pgprot_t prot) { }
>> +static void __init pmd_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pmd_t *pmdp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>
> Align args properly
>
>> +{
>> +}
>> +static void __init pud_advanced_tests(struct mm_struct *mm,
>> +        struct vm_area_struct *vma, pud_t *pudp,
>> +        unsigned long pfn, unsigned long vaddr, pgprot_t prot)
>
> Align args properly
>

Sure, will fix the argument alignment in all the places mentioned above.
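
Roughly along these lines for the respin (just a sketch of the intended layout, not the final code: the continuation lines are aligned with the opening parenthesis as the kernel coding style suggests, and the exact wrapping of the parameters shown here is only illustrative):

/* Illustrative only: continuation arguments aligned with the opening '(' */
static void __init pte_advanced_tests(struct mm_struct *mm,
                                      struct vm_area_struct *vma, pte_t *ptep,
                                      unsigned long pfn, unsigned long vaddr,
                                      pgprot_t prot);

static void __init pmd_advanced_tests(struct mm_struct *mm,
                                      struct vm_area_struct *vma, pmd_t *pmdp,
                                      unsigned long pfn, unsigned long vaddr,
                                      pgprot_t prot);

The same pattern would apply to pud_advanced_tests() and the stub versions under the !CONFIG_TRANSPARENT_HUGEPAGE and !CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD blocks.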