From: Christophe Leroy
Subject: [PATCH v7 03/16] powerpc/mm: Move pte_fragment_alloc() to a common location
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Date: Wed, 28 Nov 2018 11:46:27 +0000 (UTC)
Message-Id: <94c1e9b95e73598c2de5a97a43ddce6a2cedf515.1543405086.git.christophe.leroy@c-s.fr>

In preparation for the next patch, which generalises the use of
pte_fragment_alloc() to all subarches, this patch moves the related
functions to a place that is common to all subarches.

The 8xx will need this to support 16k pages, as in that mode page
tables still have a size of 4k.

Since a pte_fragment with a single fragment is no different from what
is done in the general case, all subarches can easily be migrated to
pte fragments.

Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
---
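A note on the arithmetic the moved code relies on (not part of the
patch itself): below is a minimal userspace sketch of the fragment
bookkeeping, assuming the 8xx configuration described above, i.e.
16k pages carved into 4k page tables, hence PTE_FRAG_NR = 4. The
MY_PAGE_* constants and the sample address are hypothetical; only the
mask-and-shift logic mirrors pte_frag_destroy() and
get_pte_from_cache().

#include <stdio.h>
#include <stdint.h>

/* Assumed 8xx-like geometry: 16k pages, 4k PTE fragments (hypothetical) */
#define MY_PAGE_SIZE		16384UL
#define MY_PAGE_MASK		(~(MY_PAGE_SIZE - 1))
#define PTE_FRAG_SIZE_SHIFT	12
#define PTE_FRAG_SIZE		(1UL << PTE_FRAG_SIZE_SHIFT)
#define PTE_FRAG_NR		(MY_PAGE_SIZE / PTE_FRAG_SIZE)

int main(void)
{
	uintptr_t page = 0x40000;	/* hypothetical 16k-aligned page */
	uintptr_t next;
	unsigned long i;

	/* Hand out fragments the way get_pte_from_cache() does: advance
	 * a cursor in PTE_FRAG_SIZE steps within the page. */
	for (i = 0; i < PTE_FRAG_NR; i++) {
		uintptr_t frag = page + i * PTE_FRAG_SIZE;
		/* pte_frag_destroy() recovers the fragment index from
		 * the offset of the pointer within its page. */
		unsigned long idx = (frag & ~MY_PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;

		printf("frag %lu at %#lx -> index %lu\n",
		       i, (unsigned long)frag, idx);
	}

	/* Once the cursor wraps to the next page boundary its low bits
	 * are zero again; get_pte_from_cache() uses exactly this test
	 * to mark the cached PTE page as exhausted. */
	next = page + PTE_FRAG_NR * PTE_FRAG_SIZE;
	printf("cursor %#lx exhausted: %d\n",
	       (unsigned long)next, (next & ~MY_PAGE_MASK) == 0);
	return 0;
}

With PTE_FRAG_NR == 1 (one fragment filling the whole page) the cursor
wraps immediately, which is why the single-fragment configuration is
no different from the general case.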
 arch/powerpc/include/asm/book3s/64/pgalloc.h |   1 +
 arch/powerpc/mm/Makefile                     |   4 +-
 arch/powerpc/mm/mmu_context_book3s64.c       |  15 ----
 arch/powerpc/mm/pgtable-book3s64.c           |  85 --------------------
 arch/powerpc/mm/pgtable-frag.c               | 116 +++++++++++++++++++++++
 5 files changed, 120 insertions(+), 101 deletions(-)
 create mode 100644 arch/powerpc/mm/pgtable-frag.c

diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index bfed4cf3b2f3..6c2808c0f052 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -39,6 +39,7 @@ extern struct vmemmap_backing *vmemmap_list;
 extern struct kmem_cache *pgtable_cache[];
 #define PGT_CACHE(shift) pgtable_cache[shift]
 
+void pte_frag_destroy(void *pte_frag);
 extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
 extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pte_fragment_free(unsigned long *, int);
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index ca96e7be4d0e..3cbb1acf0745 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -15,7 +15,9 @@ obj-$(CONFIG_PPC_MMU_NOHASH)	+= mmu_context_nohash.o tlb_nohash.o \
 obj-$(CONFIG_PPC_BOOK3E)	+= tlb_low_$(BITS)e.o
 hash64-$(CONFIG_PPC_NATIVE)	:= hash_native_64.o
 obj-$(CONFIG_PPC_BOOK3E_64)	+= pgtable-book3e.o
-obj-$(CONFIG_PPC_BOOK3S_64)	+= pgtable-hash64.o hash_utils_64.o slb.o $(hash64-y) mmu_context_book3s64.o pgtable-book3s64.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= pgtable-hash64.o hash_utils_64.o slb.o \
+				   $(hash64-y) mmu_context_book3s64.o \
+				   pgtable-book3s64.o pgtable-frag.o
 obj-$(CONFIG_PPC_RADIX_MMU)	+= pgtable-radix.o tlb-radix.o
 obj-$(CONFIG_PPC_STD_MMU_32)	+= ppc_mmu_32.o hash_low_32.o mmu_context_hash32.o
 obj-$(CONFIG_PPC_STD_MMU)	+= tlb_hash$(BITS).o
diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index 510f103d7813..f720c5cc0b5e 100644
--- a/arch/powerpc/mm/mmu_context_book3s64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -164,21 +164,6 @@ static void destroy_contexts(mm_context_t *ctx)
 	}
 }
 
-static void pte_frag_destroy(void *pte_frag)
-{
-	int count;
-	struct page *page;
-
-	page = virt_to_page(pte_frag);
-	/* drop all the pending references */
-	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
-	/* We allow PTE_FRAG_NR fragments from a PTE page */
-	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
-		pgtable_page_dtor(page);
-		__free_page(page);
-	}
-}
-
 static void pmd_frag_destroy(void *pmd_frag)
 {
 	int count;
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 9f93c9f985c5..0c0fd173208a 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -322,91 +322,6 @@ void pmd_fragment_free(unsigned long *pmd)
 	}
 }
 
-static pte_t *get_pte_from_cache(struct mm_struct *mm)
-{
-	void *pte_frag, *ret;
-
-	spin_lock(&mm->page_table_lock);
-	ret = mm->context.pte_frag;
-	if (ret) {
-		pte_frag = ret + PTE_FRAG_SIZE;
-		/*
-		 * If we have taken up all the fragments mark PTE page NULL
-		 */
-		if (((unsigned long)pte_frag & ~PAGE_MASK) == 0)
-			pte_frag = NULL;
-		mm->context.pte_frag = pte_frag;
-	}
-	spin_unlock(&mm->page_table_lock);
-	return (pte_t *)ret;
-}
-
-static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
-{
-	void *ret = NULL;
-	struct page *page;
-
-	if (!kernel) {
-		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
-		if (!page)
-			return NULL;
-		if (!pgtable_page_ctor(page)) {
-			__free_page(page);
-			return NULL;
-		}
-	} else {
-		page = alloc_page(PGALLOC_GFP);
-		if (!page)
-			return NULL;
-	}
-
-	atomic_set(&page->pt_frag_refcount, 1);
-
-	ret = page_address(page);
-	/*
-	 * if we support only one fragment just return the
-	 * allocated page.
-	 */
-	if (PTE_FRAG_NR == 1)
-		return ret;
-	spin_lock(&mm->page_table_lock);
-	/*
-	 * If we find pgtable_page set, we return
-	 * the allocated page with single fragement
-	 * count.
-	 */
-	if (likely(!mm->context.pte_frag)) {
-		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
-		mm->context.pte_frag = ret + PTE_FRAG_SIZE;
-	}
-	spin_unlock(&mm->page_table_lock);
-
-	return (pte_t *)ret;
-}
-
-pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
-{
-	pte_t *pte;
-
-	pte = get_pte_from_cache(mm);
-	if (pte)
-		return pte;
-
-	return __alloc_for_ptecache(mm, kernel);
-}
-
-void pte_fragment_free(unsigned long *table, int kernel)
-{
-	struct page *page = virt_to_page(table);
-
-	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
-	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
-		if (!kernel)
-			pgtable_page_dtor(page);
-		__free_page(page);
-	}
-}
-
 static inline void pgtable_free(void *table, int index)
 {
 	switch (index) {
diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c
new file mode 100644
index 000000000000..d61e7c2a9a79
--- /dev/null
+++ b/arch/powerpc/mm/pgtable-frag.c
@@ -0,0 +1,116 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Handling Page Tables through page fragments
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/hugetlb.h>
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+#include <asm/tlb.h>
+
+void pte_frag_destroy(void *pte_frag)
+{
+	int count;
+	struct page *page;
+
+	page = virt_to_page(pte_frag);
+	/* drop all the pending references */
+	count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT;
+	/* We allow PTE_FRAG_NR fragments from a PTE page */
+	if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) {
+		pgtable_page_dtor(page);
+		__free_page(page);
+	}
+}
+
+static pte_t *get_pte_from_cache(struct mm_struct *mm)
+{
+	void *pte_frag, *ret;
+
+	spin_lock(&mm->page_table_lock);
+	ret = mm->context.pte_frag;
+	if (ret) {
+		pte_frag = ret + PTE_FRAG_SIZE;
+		/*
+		 * If we have taken up all the fragments mark PTE page NULL
+		 */
+		if (((unsigned long)pte_frag & ~PAGE_MASK) == 0)
+			pte_frag = NULL;
+		mm->context.pte_frag = pte_frag;
+	}
+	spin_unlock(&mm->page_table_lock);
+	return (pte_t *)ret;
+}
+
+static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel)
+{
+	void *ret = NULL;
+	struct page *page;
+
+	if (!kernel) {
+		page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT);
+		if (!page)
+			return NULL;
+		if (!pgtable_page_ctor(page)) {
+			__free_page(page);
+			return NULL;
+		}
+	} else {
+		page = alloc_page(PGALLOC_GFP);
+		if (!page)
+			return NULL;
+	}
+
+	atomic_set(&page->pt_frag_refcount, 1);
+
+	ret = page_address(page);
+	/*
+	 * if we support only one fragment just return the
+	 * allocated page.
+	 */
+	if (PTE_FRAG_NR == 1)
+		return ret;
+	spin_lock(&mm->page_table_lock);
+	/*
+	 * If we find pgtable_page set, we return
+	 * the allocated page with single fragment
+	 * count.
+	 */
+	if (likely(!mm->context.pte_frag)) {
+		atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR);
+		mm->context.pte_frag = ret + PTE_FRAG_SIZE;
+	}
+	spin_unlock(&mm->page_table_lock);
+
+	return (pte_t *)ret;
+}
+
+pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
+{
+	pte_t *pte;
+
+	pte = get_pte_from_cache(mm);
+	if (pte)
+		return pte;
+
+	return __alloc_for_ptecache(mm, kernel);
+}
+
+void pte_fragment_free(unsigned long *table, int kernel)
+{
+	struct page *page = virt_to_page(table);
+
+	BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0);
+	if (atomic_dec_and_test(&page->pt_frag_refcount)) {
+		if (!kernel)
+			pgtable_page_dtor(page);
+		__free_page(page);
+	}
+}
-- 
2.13.3