From: "Aneesh Kumar K.V"
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct
In-Reply-To: <633e11daecb440a8233890589a4025ed5003f222.1556202029.git.christophe.leroy@c-s.fr>
References: <633e11daecb440a8233890589a4025ed5003f222.1556202029.git.christophe.leroy@c-s.fr>
Date: Fri, 26 Apr 2019 12:04:40 +0530
Message-Id: <8736m5b7gf.fsf@linux.ibm.com>
Christophe Leroy writes:

> slice_mask_for_size() only uses mm->context, so hand over a pointer to
> the context directly. This will help move the function into the subarch
> mmu.h in the next patch by avoiding having to include the definition of
> struct mm_struct.
>
> Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Christophe Leroy
> ---
>  arch/powerpc/mm/slice.c | 34 +++++++++++++++++-----------------
>  1 file changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 35b278082391..8eb7e8b09c75 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -151,32 +151,32 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
>  }
>
>  #ifdef CONFIG_PPC_BOOK3S_64
> -static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
> +static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
>  {
>  #ifdef CONFIG_PPC_64K_PAGES
>  	if (psize == MMU_PAGE_64K)
> -		return mm_ctx_slice_mask_64k(&mm->context);
> +		return mm_ctx_slice_mask_64k(ctx);
>  #endif
>  	if (psize == MMU_PAGE_4K)
> -		return mm_ctx_slice_mask_4k(&mm->context);
> +		return mm_ctx_slice_mask_4k(ctx);
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (psize == MMU_PAGE_16M)
> -		return mm_ctx_slice_mask_16m(&mm->context);
> +		return mm_ctx_slice_mask_16m(ctx);
>  	if (psize == MMU_PAGE_16G)
> -		return mm_ctx_slice_mask_16g(&mm->context);
> +		return mm_ctx_slice_mask_16g(ctx);
>  #endif
>  	BUG();
>  }
>  #elif defined(CONFIG_PPC_8xx)
> -static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
> +static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
>  {
>  	if (psize == mmu_virtual_psize)
> -		return &mm->context.mask_base_psize;
> +		return &ctx->mask_base_psize;
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (psize == MMU_PAGE_512K)
> -		return &mm->context.mask_512k;
> +		return &ctx->mask_512k;
>  	if (psize == MMU_PAGE_8M)
> -		return &mm->context.mask_8m;
> +		return &ctx->mask_8m;
>  #endif
>  	BUG();
>  }
> @@ -246,7 +246,7 @@ static void slice_convert(struct mm_struct *mm,
>  	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
>  	slice_print_mask(" mask", mask);
>
> -	psize_mask = slice_mask_for_size(mm, psize);
> +	psize_mask = slice_mask_for_size(&mm->context, psize);
>
>  	/* We need to use a spinlock here to protect against
>  	 * concurrent 64k -> 4k demotion ...
> @@ -263,7 +263,7 @@ static void slice_convert(struct mm_struct *mm,
>
>  		/* Update the slice_mask */
>  		old_psize = (lpsizes[index] >> (mask_index * 4)) & 0xf;
> -		old_mask = slice_mask_for_size(mm, old_psize);
> +		old_mask = slice_mask_for_size(&mm->context, old_psize);
>  		old_mask->low_slices &= ~(1u << i);
>  		psize_mask->low_slices |= 1u << i;
>
> @@ -282,7 +282,7 @@ static void slice_convert(struct mm_struct *mm,
>
>  		/* Update the slice_mask */
>  		old_psize = (hpsizes[index] >> (mask_index * 4)) & 0xf;
> -		old_mask = slice_mask_for_size(mm, old_psize);
> +		old_mask = slice_mask_for_size(&mm->context, old_psize);
>  		__clear_bit(i, old_mask->high_slices);
>  		__set_bit(i, psize_mask->high_slices);
>
> @@ -538,7 +538,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	/* First make up a "good" mask of slices that have the right size
>  	 * already
>  	 */
> -	maskp = slice_mask_for_size(mm, psize);
> +	maskp = slice_mask_for_size(&mm->context, psize);
>
>  	/*
>  	 * Here "good" means slices that are already the right page size,
> @@ -565,7 +565,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	 * a pointer to good mask for the next code to use.
>  	 */
>  	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
> -		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
> +		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
>  		if (fixed)
>  			slice_or_mask(&good_mask, maskp, compat_maskp);
>  		else
> @@ -760,7 +760,7 @@ void slice_init_new_context_exec(struct mm_struct *mm)
>  	/*
>  	 * Slice mask cache starts zeroed, fill the default size cache.
>  	 */
> -	mask = slice_mask_for_size(mm, psize);
> +	mask = slice_mask_for_size(&mm->context, psize);
>  	mask->low_slices = ~0UL;
>  	if (SLICE_NUM_HIGH)
>  		bitmap_fill(mask->high_slices, SLICE_NUM_HIGH);
> @@ -819,14 +819,14 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>
>  	VM_BUG_ON(radix_enabled());
>
> -	maskp = slice_mask_for_size(mm, psize);
> +	maskp = slice_mask_for_size(&mm->context, psize);
>  #ifdef CONFIG_PPC_64K_PAGES
>  	/* We need to account for 4k slices too */
>  	if (psize == MMU_PAGE_64K) {
>  		const struct slice_mask *compat_maskp;
>  		struct slice_mask available;
>
> -		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
> +		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
>  		slice_or_mask(&available, maskp, compat_maskp);
>  		return !slice_check_range_fits(mm, &available, addr, len);
>  	}
> --
> 2.13.3