From: Christophe Leroy <christophe.leroy@c-s.fr>
Subject: [PATCH v1 11/15] powerpc/mm: refactor definition of pgtable_cache[]
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar@linux.ibm.com
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Date: Wed, 3 Apr 2019 20:06:23 +0000 (UTC)
Message-Id: <61fab146987f6197419848837b0e842431deba9f.1554321743.git.christophe.leroy@c-s.fr>

pgtable_cache[] is the same for the 4 subarches, let's make it common.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/32/pgalloc.h | 21 ---------------------
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 22 ----------------------
 arch/powerpc/include/asm/nohash/32/pgalloc.h | 21 ---------------------
 arch/powerpc/include/asm/nohash/64/pgalloc.h | 22 ----------------------
 arch/powerpc/include/asm/pgalloc.h           | 21 +++++++++++++++++++++
 5 files changed, 21 insertions(+), 86 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h
index 46422309d6e0..1b9b5c228230 100644
--- a/arch/powerpc/include/asm/book3s/32/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -5,26 +5,6 @@
 #include
 #include
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
@@ -69,7 +49,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 	}
 }
 
-#define check_pgt_cache() do { } while (0)
 #define get_hugepd_cache_index(x) (x)
 
 #ifdef CONFIG_SMP
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index cfd48d8cc055..df2dce6afe14 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -19,26 +19,6 @@ struct vmemmap_backing {
 };
 extern struct vmemmap_backing *vmemmap_list;
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
@@ -199,8 +179,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 	pgtable_free_tlb(tlb, table, PTE_INDEX);
 }
 
-#define check_pgt_cache() do { } while (0)
-
 extern atomic_long_t direct_pages_count[MMU_PAGE_COUNT];
 static inline void update_page_count(int psize, long count)
 {
diff --git a/arch/powerpc/include/asm/nohash/32/pgalloc.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h
index e96ef2fde2ca..4615801aa953 100644
--- a/arch/powerpc/include/asm/nohash/32/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h
@@ -5,26 +5,6 @@
 #include
 #include
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
@@ -87,7 +67,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 	}
 }
 
-#define check_pgt_cache() do { } while (0)
 #define get_hugepd_cache_index(x) (x)
 
 #ifdef CONFIG_SMP
diff --git a/arch/powerpc/include/asm/nohash/64/pgalloc.h b/arch/powerpc/include/asm/nohash/64/pgalloc.h
index 98de4f3b0306..ffc86d42816d 100644
--- a/arch/powerpc/include/asm/nohash/64/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h
@@ -18,26 +18,6 @@ struct vmemmap_backing {
 };
 extern struct vmemmap_backing *vmemmap_list;
 
-/*
- * Functions that deal with pagetables that could be at any level of
- * the table need to be passed an "index_size" so they know how to
- * handle allocation. For PTE pages (which are linked to a struct
- * page for now, and drawn from the main get_free_pages() pool), the
- * allocation size will be (2^index_size * sizeof(pointer)) and
- * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
- *
- * The maximum index size needs to be big enough to allow any
- * pagetable sizes we need, but small enough to fit in the low bits of
- * any page table pointer. In other words all pagetables, even tiny
- * ones, must be aligned to allow at least enough low 0 bits to
- * contain this value. This value is also used as a mask, so it must
- * be one less than a power of two.
- */
-#define MAX_PGTABLE_INDEX_SIZE 0xf
-
-extern struct kmem_cache *pgtable_cache[];
-#define PGT_CACHE(shift) pgtable_cache[shift]
-
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
 	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
@@ -140,6 +120,4 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 #define __pud_free_tlb(tlb, pud, addr) \
 	pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE)
 
-#define check_pgt_cache() do { } while (0)
-
 #endif /* _ASM_POWERPC_PGALLOC_64_H */
diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index c2c6fd438840..5761bee0f004 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -45,6 +45,27 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 	pte_fragment_free((unsigned long *)ptepage, 0);
 }
 
+/*
+ * Functions that deal with pagetables that could be at any level of
+ * the table need to be passed an "index_size" so they know how to
+ * handle allocation. For PTE pages, the allocation size will be
+ * (2^index_size * sizeof(pointer)) and allocations are drawn from
+ * the kmem_cache in PGT_CACHE(index_size).
+ *
+ * The maximum index size needs to be big enough to allow any
+ * pagetable sizes we need, but small enough to fit in the low bits of
+ * any page table pointer. In other words all pagetables, even tiny
+ * ones, must be aligned to allow at least enough low 0 bits to
+ * contain this value. This value is also used as a mask, so it must
+ * be one less than a power of two.
+ */
+#define MAX_PGTABLE_INDEX_SIZE 0xf
+
+extern struct kmem_cache *pgtable_cache[];
+#define PGT_CACHE(shift) pgtable_cache[shift]
+
+static inline void check_pgt_cache(void) { }
+
 #ifdef CONFIG_PPC_BOOK3S
 #include
 #else
-- 
2.13.3
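
The comment now centralized in asm/pgalloc.h describes how each page table's "index_size" selects a kmem_cache of objects of 2^index_size * sizeof(pointer) bytes, and how that index can be carried in the low bits of the table pointer because MAX_PGTABLE_INDEX_SIZE doubles as an alignment mask. The standalone C sketch below only illustrates that pointer-packing idea; it is not kernel code, the pack_table()/unpack_table()/unpack_index() helpers are invented for illustration, and aligned_alloc() stands in for the kmem_cache alignment guarantee.

/*
 * Standalone userspace illustration (not kernel code) of the scheme described
 * in the comment moved to asm/pgalloc.h: because every table is aligned to at
 * least MAX_PGTABLE_INDEX_SIZE + 1 bytes, its cache index fits in the low bits
 * of the pointer, and the same constant serves as the mask to recover both.
 * pack_table(), unpack_table() and unpack_index() are made-up helper names.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_PGTABLE_INDEX_SIZE 0xf  /* one less than a power of two, used as a mask */

/* Fold the index into the (zero) low bits of an aligned table pointer. */
static uintptr_t pack_table(void *table, unsigned int index_size)
{
	return (uintptr_t)table | (index_size & MAX_PGTABLE_INDEX_SIZE);
}

/* Mask the index back out to recover the original pointer. */
static void *unpack_table(uintptr_t packed)
{
	return (void *)(packed & ~(uintptr_t)MAX_PGTABLE_INDEX_SIZE);
}

static unsigned int unpack_index(uintptr_t packed)
{
	return packed & MAX_PGTABLE_INDEX_SIZE;
}

int main(void)
{
	unsigned int index_size = 9;                 /* e.g. a 512-entry table */
	size_t size = sizeof(void *) << index_size;  /* 2^index_size * sizeof(pointer) */

	/* aligned_alloc() plays the role of the kmem_cache alignment guarantee. */
	void *table = aligned_alloc(size, size);
	if (!table)
		return 1;

	uintptr_t packed = pack_table(table, index_size);
	printf("table %p, index %u -> recovered %p, %u\n",
	       table, index_size, unpack_table(packed), unpack_index(packed));

	free(unpack_table(packed));
	return 0;
}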