From: "Aneesh Kumar K.V"
To: Christophe Leroy, Nicholas Piggin
Cc: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: Re: [RFC REBASED 4/5] powerpc/mm/slice: Use const pointers to cached slice masks where possible
In-Reply-To: <6a8c9183257dfcfeb1d8ed1ecc778ec3da19dcd4.1518382747.git.christophe.leroy@c-s.fr>
References: <02a62db83282b5ef3e0e8281fdc46fa91beffc86.1518382747.git.christophe.leroy@c-s.fr>
 <6a8c9183257dfcfeb1d8ed1ecc778ec3da19dcd4.1518382747.git.christophe.leroy@c-s.fr>
Date: Tue, 27 Feb 2018 12:59:53 +0530
Message-Id: <87h8q27uvi.fsf@linux.vnet.ibm.com>
Christophe Leroy writes:

> The slice_mask cache was a basic conversion which copied the slice
> mask into caller's structures, because that's how the original code
> worked. In most cases the pointer can be used directly instead, saving
> a copy and an on-stack structure.
>
> This also converts the slice_mask bit operation helpers to be the usual
> 3-operand kind, which is clearer to work with. And we remove some
> unnecessary intermediate bitmaps, reducing stack and copy overhead
> further.

Can we move the removal of the unnecessary intermediate bitmaps into a
separate patch?

>
> Signed-off-by: Nicholas Piggin
> Signed-off-by: Christophe Leroy
> ---
>  arch/powerpc/include/asm/book3s/64/slice.h |  7 +++
>  arch/powerpc/include/asm/nohash/32/slice.h |  6 +++
>  arch/powerpc/mm/slice.c                    | 77 ++++++++++++++++++------------
>  3 files changed, 59 insertions(+), 31 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h
> index f9a2c8bd7a77..be1ce8e91ad1 100644
> --- a/arch/powerpc/include/asm/book3s/64/slice.h
> +++ b/arch/powerpc/include/asm/book3s/64/slice.h
> @@ -63,6 +63,13 @@ static inline void slice_bitmap_set(unsigned long *map, unsigned int start,
>  {
>  	bitmap_set(map, start, nbits);
>  }
> +
> +static inline void slice_bitmap_copy(unsigned long *dst,
> +				     const unsigned long *src,
> +				     unsigned int nbits)
> +{
> +	bitmap_copy(dst, src, nbits);
> +}
>  #endif /* __ASSEMBLY__ */
>
>  #else /* CONFIG_PPC_MM_SLICES */
> diff --git a/arch/powerpc/include/asm/nohash/32/slice.h b/arch/powerpc/include/asm/nohash/32/slice.h
> index bcb4924f7d22..38f041e01a0a 100644
> --- a/arch/powerpc/include/asm/nohash/32/slice.h
> +++ b/arch/powerpc/include/asm/nohash/32/slice.h
> @@ -58,6 +58,12 @@ static inline void slice_bitmap_set(unsigned long *map, unsigned int start,
>  				    unsigned int nbits)
>  {
>  }
> +
> +static inline void slice_bitmap_copy(unsigned long *dst,
> +				     const unsigned long *src,
> +				     unsigned int nbits)
> +{
> +}
>  #endif /* __ASSEMBLY__ */
>
>  #endif /* CONFIG_PPC_MM_SLICES */
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 311168ca3939..b8b691369c29 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -468,21 +468,30 @@ static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
>  	return slice_find_area_bottomup(mm, len, mask, psize, high_limit);
>  }
>
> -static inline void slice_or_mask(struct slice_mask *dst,
> +static inline void slice_copy_mask(struct slice_mask *dst,
>  				 const struct slice_mask *src)
>  {
> -	dst->low_slices |= src->low_slices;
> -	slice_bitmap_or(dst->high_slices, dst->high_slices, src->high_slices,
> +	dst->low_slices = src->low_slices;
> +	slice_bitmap_copy(dst->high_slices, src->high_slices, SLICE_NUM_HIGH);
> +}
> +
> +static inline void slice_or_mask(struct slice_mask *dst,
> +				 const struct slice_mask *src1,
> +				 const struct slice_mask *src2)
> +{
> +	dst->low_slices = src1->low_slices | src2->low_slices;
> +	slice_bitmap_or(dst->high_slices, src1->high_slices, src2->high_slices,
>  			SLICE_NUM_HIGH);
>  }
>
>  static inline void slice_andnot_mask(struct slice_mask *dst,
> -				     const struct slice_mask *src)
> +				     const struct slice_mask *src1,
> +				     const struct slice_mask *src2)
>  {
> -	dst->low_slices &= ~src->low_slices;
> +	dst->low_slices = src1->low_slices & ~src2->low_slices;
>
> -	slice_bitmap_andnot(dst->high_slices, dst->high_slices,
> -			    src->high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_andnot(dst->high_slices, src1->high_slices,
> +			    src2->high_slices, SLICE_NUM_HIGH);
>  }
>
>  #ifdef CONFIG_PPC_64K_PAGES
> @@ -495,10 +504,10 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  				      unsigned long flags, unsigned int psize,
>  				      int topdown)
>  {
> -	struct slice_mask mask;
>  	struct slice_mask good_mask;
>  	struct slice_mask potential_mask;
> -	struct slice_mask compat_mask;
> +	const struct slice_mask *maskp;
> +	const struct slice_mask *compat_maskp = NULL;
>  	int fixed = (flags & MAP_FIXED);
>  	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
>  	unsigned long page_size = 1UL << pshift;
> @@ -537,9 +546,6 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	potential_mask.low_slices = 0;
>  	slice_bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
>
> -	compat_mask.low_slices = 0;
> -	slice_bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
> -
>  	/* Sanity checks */
>  	BUG_ON(mm->task_size == 0);
>  	BUG_ON(mm->context.slb_addr_limit == 0);
> @@ -562,7 +568,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	/* First make up a "good" mask of slices that have the right size
>  	 * already
>  	 */
> -	good_mask = *slice_mask_for_size(mm, psize);
> +	maskp = slice_mask_for_size(mm, psize);
>  	slice_print_mask(" good_mask", &good_mask);
>
>  	/*
> @@ -587,11 +593,16 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  #ifdef CONFIG_PPC_64K_PAGES
>  	/* If we support combo pages, we can allow 64k pages in 4k slices */
>  	if (psize == MMU_PAGE_64K) {
> -		compat_mask = *slice_mask_for_size(mm, MMU_PAGE_4K);
> +		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
>  		if (fixed)
> -			slice_or_mask(&good_mask, &compat_mask);
> -	}
> +			slice_or_mask(&good_mask, maskp, compat_maskp);
> +		else
> +			slice_copy_mask(&good_mask, maskp);
> +	} else
>  #endif
> +	{
> +		slice_copy_mask(&good_mask, maskp);
> +	}
>
>  	/* First check hint if it's valid or if we have MAP_FIXED */
>  	if (addr || fixed) {
> @@ -621,7 +632,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	 * empty and thus can be converted
>  	 */
>  	slice_mask_for_free(mm, &potential_mask, high_limit);
> -	slice_or_mask(&potential_mask, &good_mask);
> +	slice_or_mask(&potential_mask, &potential_mask, &good_mask);
>  	slice_print_mask(" potential", &potential_mask);
>
>  	if (addr || fixed) {
> @@ -658,7 +669,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  #ifdef CONFIG_PPC_64K_PAGES
>  	if (addr == -ENOMEM && psize == MMU_PAGE_64K) {
>  		/* retry the search with 4k-page slices included */
> -		slice_or_mask(&potential_mask, &compat_mask);
> +		slice_or_mask(&potential_mask, &potential_mask, compat_maskp);
>  		addr = slice_find_area(mm, len, &potential_mask,
>  				       psize, topdown, high_limit);
>  	}
> @@ -667,16 +678,17 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	if (addr == -ENOMEM)
>  		return -ENOMEM;
>
> -	slice_range_to_mask(addr, len, &mask);
> +	slice_range_to_mask(addr, len, &potential_mask);

Can we avoid reusing variables like that? I had to look at the change
below to convince myself that this is the intended change. The print
below is wrong then. In my patch series I did try to remove
potential_mask and compat_mask and used tmp_mask in their place. IMHO
that resulted in much simpler code. I do recompute tmp_mask for the 4K
size because of the lack of a variable, but I guess with your change
caching the mask for each page size, that should not have an impact?
>  	slice_dbg(" found potential area at 0x%lx\n", addr);
> -	slice_print_mask(" mask", &mask);
> +	slice_print_mask(" mask", maskp);
>
>  convert:
> -	slice_andnot_mask(&mask, &good_mask);
> -	slice_andnot_mask(&mask, &compat_mask);
> -	if (mask.low_slices ||
> -	    !slice_bitmap_empty(mask.high_slices, SLICE_NUM_HIGH)) {
> -		slice_convert(mm, &mask, psize);
> +	slice_andnot_mask(&potential_mask, &potential_mask, &good_mask);
> +	if (compat_maskp && !fixed)
> +		slice_andnot_mask(&potential_mask, &potential_mask, compat_maskp);
> +	if (potential_mask.low_slices ||
> +	    !slice_bitmap_empty(potential_mask.high_slices, SLICE_NUM_HIGH)) {
> +		slice_convert(mm, &potential_mask, psize);
>  		if (psize > MMU_PAGE_BASE)
>  			on_each_cpu(slice_flush_segments, mm, 1);
>  	}
> @@ -834,19 +846,22 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
>  int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>  			   unsigned long len)
>  {
> -	struct slice_mask available;
> +	const struct slice_mask *maskp;
>  	unsigned int psize = mm->context.user_psize;
>
>  	if (radix_enabled())
>  		return 0;
>
> -	available = *slice_mask_for_size(mm, psize);
> +	maskp = slice_mask_for_size(mm, psize);
>  #ifdef CONFIG_PPC_64K_PAGES
>  	/* We need to account for 4k slices too */
>  	if (psize == MMU_PAGE_64K) {
> -		struct slice_mask compat_mask;
> -		compat_mask = *slice_mask_for_size(mm, MMU_PAGE_4K);
> -		slice_or_mask(&available, &compat_mask);
> +		const struct slice_mask *compat_maskp;
> +		struct slice_mask available;
> +
> +		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
> +		slice_or_mask(&available, maskp, compat_maskp);
> +		return !slice_check_range_fits(mm, &available, addr, len);
>  	}
>  #endif
>
> @@ -856,6 +871,6 @@ int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>  	slice_print_mask(" mask", &mask);
>  	slice_print_mask(" available", &available);
>  #endif
> -	return !slice_check_range_fits(mm, &available, addr, len);
> +	return !slice_check_range_fits(mm, maskp, addr, len);
>  }
>  #endif
> --
> 2.13.3