From: Christophe Leroy
Subject: [PATCH v2 11/15] powerpc/mm: refactor definition of pgtable_cache[]
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar@linux.ibm.com
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Date: Fri, 26 Apr 2019 15:58:09 +0000 (UTC)

pgtable_cache[] is the same for the 4 subarches, let's make it common.

Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/book3s/32/pgalloc.h | 21 ---------------------
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 22 ----------------------
 arch/powerpc/include/asm/nohash/32/pgalloc.h | 21 ---------------------
 arch/powerpc/include/asm/nohash/64/pgalloc.h | 22 ----------------------
 arch/powerpc/include/asm/pgalloc.h           | 21 +++++++++++++++++++++
 5 files changed, 21 insertions(+), 86 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h
index 46422309d6e0..1b9b5c228230 100644
--- a/arch/powerpc/include/asm/book3s/32/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -5,26 +5,6 @@
 #include
 #include
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
@@ -69,7 +49,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 	}
 }
 
-#define check_pgt_cache() do { } while (0)
 #define get_hugepd_cache_index(x) (x)
 
 #ifdef CONFIG_SMP
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index cfd48d8cc055..df2dce6afe14 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -19,26 +19,6 @@ struct vmemmap_backing {
 };
 extern struct vmemmap_backing *vmemmap_list;
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
@@ -199,8 +179,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 	pgtable_free_tlb(tlb, table, PTE_INDEX);
 }
 
-#define check_pgt_cache() do { } while (0)
-
 extern atomic_long_t direct_pages_count[MMU_PAGE_COUNT];
 static inline void update_page_count(int psize, long count)
 {
diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h
index e96ef2fde2ca..4615801aa953 100644
--- a/arch/powerpc/include/asm/nohash/32/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h
@@ -5,26 +5,6 @@
 #include
 #include
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
@@ -87,7 +67,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 	}
 }
 
-#define check_pgt_cache() do { } while (0)
 #define get_hugepd_cache_index(x) (x)
 
 #ifdef CONFIG_SMP
diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h b/arch/powerpc/include/asm/nohash/64/pgalloc.h
index 98de4f3b0306..ffc86d42816d 100644
--- a/arch/powerpc/include/asm/nohash/64/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h
@@ -18,26 +18,6 @@ struct vmemmap_backing {
 };
 extern struct vmemmap_backing *vmemmap_list;
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
@@ -140,6 +120,4 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 #define __pud_free_tlb(tlb, pud, addr) \
 	pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE)
 
-#define check_pgt_cache() do { } while (0)
-
 #endif /* _ASM_POWERPC_PGALLOC_64_H */
diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index c2c6fd438840..5761bee0f004 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -45,6 +45,27 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 	pte_fragment_free((unsigned long *)ptepage, 0);
 }
 
+/*
+ * Functions that deal with pagetables that could be at any level of
+ * the table need to be passed an "index_size" so they know how to
+ * handle allocation. For PTE pages, the allocation size will be
+ * (2^index_size * sizeof(pointer)) and allocations are drawn from
+ * the kmem_cache in PGT_CACHE(index_size).
+ *
+ * The maximum index size needs to be big enough to allow any
+ * pagetable sizes we need, but small enough to fit in the low bits of
+ * any page table pointer. In other words all pagetables, even tiny
+ * ones, must be aligned to allow at least enough low 0 bits to
+ * contain this value. This value is also used as a mask, so it must
+ * be one less than a power of two.
+ */
+#define MAX_PGTABLE_INDEX_SIZE 0xf
+
+extern struct kmem_cache *pgtable_cache[];
+#define PGT_CACHE(shift) pgtable_cache[shift]
+
+static inline void check_pgt_cache(void) { }
+
 #ifdef CONFIG_PPC_BOOK3S
 #include
 #else
-- 
2.13.3
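
Editor's note (not part of the patch): the comment moved into asm/pgalloc.h says MAX_PGTABLE_INDEX_SIZE is "also used as a mask, so it must be one less than a power of two". A minimal user-space sketch of that property is below; the helpers pack_table(), unpack_table() and the fake_table buffer are made-up names for illustration only, not kernel code. Because every page table is aligned well past MAX_PGTABLE_INDEX_SIZE, its pointer's low bits are guaranteed zero, so the index_size can be stashed there and recovered with the same constant used as a mask.

```c
/* Illustrative only: why MAX_PGTABLE_INDEX_SIZE works as a mask. */
#include <stdio.h>
#include <stdint.h>

#define MAX_PGTABLE_INDEX_SIZE 0xf

/* Stash the index_size in the low (zero) bits of an aligned table pointer. */
static void *pack_table(void *table, unsigned int index_size)
{
	return (void *)((uintptr_t)table | index_size);
}

/* Recover both the table pointer and the index_size using the mask. */
static void unpack_table(void *packed, void **table, unsigned int *index_size)
{
	*index_size = (uintptr_t)packed & MAX_PGTABLE_INDEX_SIZE;
	*table = (void *)((uintptr_t)packed & ~(uintptr_t)MAX_PGTABLE_INDEX_SIZE);
}

int main(void)
{
	/* Pretend this is a 2^9-entry page table; alignment gives low 0 bits. */
	static char fake_table[512 * sizeof(void *)] __attribute__((aligned(4096)));
	unsigned int shift;
	void *table;

	void *packed = pack_table(fake_table, 9);
	unpack_table(packed, &table, &shift);
	printf("table=%p shift=%u\n", table, shift);
	return 0;
}
```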