From: Christophe Leroy
Subject: [PATCH v1 21/27] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar@linux.ibm.com
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Date: Wed, 20 Mar 2019 10:06:57 +0000 (UTC)

slice_mask_for_size() only uses mm->context, so hand it a pointer to the
context directly. This will help move the function into the subarch mmu.h
in the next patch, as it avoids having to include the definition of
struct mm_struct.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/mm/slice.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index f98b9e812c62..231fd88d97e2 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -127,33 +127,33 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
-static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
+static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
 #ifdef CONFIG_PPC_64K_PAGES
 	if (psize == MMU_PAGE_64K)
-		return &mm->context.mask_64k;
+		return &ctx->mask_64k;
 #endif
 	if (psize == MMU_PAGE_4K)
-		return &mm->context.mask_4k;
+		return &ctx->mask_4k;
 #ifdef CONFIG_HUGETLB_PAGE
 	if (psize == MMU_PAGE_16M)
-		return &mm->context.mask_16m;
+		return &ctx->mask_16m;
 	if (psize == MMU_PAGE_16G)
-		return &mm->context.mask_16g;
+		return &ctx->mask_16g;
 #endif
 	WARN_ON(true);
 	return NULL;
 }
 #elif defined(CONFIG_PPC_8xx)
-static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
+static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
 	if (psize == mmu_virtual_psize)
-		return &mm->context.mask_base_psize;
+		return &ctx->mask_base_psize;
 #ifdef CONFIG_HUGETLB_PAGE
 	if (psize == MMU_PAGE_512K)
-		return &mm->context.mask_512k;
+		return &ctx->mask_512k;
 	if (psize == MMU_PAGE_8M)
-		return &mm->context.mask_8m;
+		return &ctx->mask_8m;
 #endif
 	WARN_ON(true);
 	return NULL;
@@ -221,7 +221,7 @@ static void slice_convert(struct mm_struct *mm,
 	unsigned long i, flags;
 	int old_psize;
 
-	psize_mask = slice_mask_for_size(mm, psize);
+	psize_mask = slice_mask_for_size(&mm->context, psize);
 
 	/* We need to use a spinlock here to protect against
 	 * concurrent 64k -> 4k demotion ...
@@ -238,7 +238,7 @@ static void slice_convert(struct mm_struct *mm,
 
 		/* Update the slice_mask */
 		old_psize = (lpsizes[index] >> (mask_index * 4)) & 0xf;
-		old_mask = slice_mask_for_size(mm, old_psize);
+		old_mask = slice_mask_for_size(&mm->context, old_psize);
 		old_mask->low_slices &= ~(1u << i);
 		psize_mask->low_slices |= 1u << i;
 
@@ -257,7 +257,7 @@ static void slice_convert(struct mm_struct *mm,
 
 		/* Update the slice_mask */
 		old_psize = (hpsizes[index] >> (mask_index * 4)) & 0xf;
-		old_mask = slice_mask_for_size(mm, old_psize);
+		old_mask = slice_mask_for_size(&mm->context, old_psize);
 		__clear_bit(i, old_mask->high_slices);
 		__set_bit(i, psize_mask->high_slices);
 
@@ -504,7 +504,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	/* First make up a "good" mask of slices that have the right size
 	 * already
 	 */
-	maskp = slice_mask_for_size(mm, psize);
+	maskp = slice_mask_for_size(&mm->context, psize);
 
 	/*
 	 * Here "good" means slices that are already the right page size,
@@ -531,7 +531,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	 * a pointer to good mask for the next code to use.
 	 */
 	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
-		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
+		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
 		if (fixed)
 			slice_or_mask(&good_mask, maskp, compat_maskp);
 		else
@@ -709,7 +709,7 @@ void slice_init_new_context_exec(struct mm_struct *mm)
 	/*
 	 * Slice mask cache starts zeroed, fill the default size cache.
 	 */
-	mask = slice_mask_for_size(mm, psize);
+	mask = slice_mask_for_size(&mm->context, psize);
 	mask->low_slices = ~0UL;
 	if (SLICE_NUM_HIGH)
 		bitmap_fill(mask->high_slices, SLICE_NUM_HIGH);
@@ -766,14 +766,14 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
 
 	VM_BUG_ON(radix_enabled());
 
-	maskp = slice_mask_for_size(mm, psize);
+	maskp = slice_mask_for_size(&mm->context, psize);
 #ifdef CONFIG_PPC_64K_PAGES
 	/* We need to account for 4k slices too */
 	if (psize == MMU_PAGE_64K) {
 		const struct slice_mask *compat_maskp;
 		struct slice_mask available;
 
-		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
+		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
 		slice_or_mask(&available, maskp, compat_maskp);
 		return !slice_check_range_fits(mm, &available, addr, len);
 	}
-- 
2.13.3
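
The idea behind the change is generic: when a helper only touches one embedded
member of a larger structure, passing a pointer to that member lets the helper
later move into a header that never needs the full definition of the outer
structure. The stand-alone C sketch below illustrates that pattern; it is not
kernel code, and every name in it (toy_mm, toy_context_t, toy_mask_for_size)
is invented for illustration.

#include <stdio.h>

/* Toy equivalent of struct slice_mask. */
struct toy_mask {
	unsigned long low_slices;
};

/* Toy equivalent of mm_context_t: the only part the helper needs. */
typedef struct {
	struct toy_mask mask_4k;
	struct toy_mask mask_64k;
} toy_context_t;

/* Toy equivalent of struct mm_struct: a large structure embedding the context. */
struct toy_mm {
	/* ... many unrelated fields in the real mm_struct ... */
	toy_context_t context;
};

/*
 * Old shape: takes the whole mm although it only dereferences mm->context,
 * so any header declaring it must know the layout of struct toy_mm.
 */
static struct toy_mask *toy_mask_for_size_old(struct toy_mm *mm, int psize)
{
	return psize == 64 ? &mm->context.mask_64k : &mm->context.mask_4k;
}

/*
 * New shape: takes only the context, so it can move into a header that
 * never includes the definition of struct toy_mm.
 */
static struct toy_mask *toy_mask_for_size(toy_context_t *ctx, int psize)
{
	return psize == 64 ? &ctx->mask_64k : &ctx->mask_4k;
}

int main(void)
{
	struct toy_mm mm = { .context = { .mask_64k = { .low_slices = 0xf0 } } };

	/* Callers change mechanically: pass &mm->context instead of mm. */
	struct toy_mask *before = toy_mask_for_size_old(&mm, 64);
	struct toy_mask *after = toy_mask_for_size(&mm.context, 64);

	printf("same mask returned: %s\n", before == after ? "yes" : "no");
	return 0;
}

Callers adapt exactly as the hunks above do: they pass &mm->context where they
previously passed mm, and the function's return value and behaviour are
unchanged.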