From: "Aneesh Kumar K.V"
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Scott Wood
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/5] powerpc/mm: Enhance 'slice' for supporting PPC32
In-Reply-To: <49148d07955d3e5f963cedf9adcfcc37c3e03ef4.1516179904.git.christophe.leroy@c-s.fr>
References: <49148d07955d3e5f963cedf9adcfcc37c3e03ef4.1516179904.git.christophe.leroy@c-s.fr>
Date: Fri, 19 Jan 2018 13:54:34 +0530
Message-Id: <87vafyz265.fsf@linux.vnet.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Christophe Leroy writes:

> In preparation for the following patch, which will fix an issue on
> the 8xx by re-using the 'slices', this patch enhances the
> 'slices' implementation to support 32-bit CPUs.
>
> On PPC32, the address space is limited to 4Gbytes, hence only the low
> slices will be used. As of today, the code uses
> SLICE_LOW_TOP (0x100000000ul) and compares it with addr to determine
> if addr refers to low or high space.
> On PPC32, such a (addr < SLICE_LOW_TOP) test is always false because
> 0x100000000ul degrades to 0. Therefore, the patch modifies
> SLICE_LOW_TOP to (0xfffffffful) and modifies the tests to
> (addr <= SLICE_LOW_TOP), which will then always be true on PPC32
> as addr has type 'unsigned long', while not modifying the PPC64
> behaviour.
>
> This patch moves the "slices" function prototypes from page_64.h to page.h.
>
> The high slices use bitmaps. As the bitmap functions are not prepared to
> handle bitmaps of size 0, the bitmap_xxx() calls are wrapped into
> slice_bitmap_xxx() macros which take care of the 0 nbits case.
>
> Signed-off-by: Christophe Leroy
> ---
> v2: First patch of v1 series split in two parts; added slice_bitmap_xxx() macros.
>
> arch/powerpc/include/asm/page.h      | 14 +++++++++
> arch/powerpc/include/asm/page_32.h   | 19 ++++++++++++
> arch/powerpc/include/asm/page_64.h   | 21 ++-----------
> arch/powerpc/mm/hash_utils_64.c      |  2 +-
> arch/powerpc/mm/mmu_context_nohash.c |  7 +++++
> arch/powerpc/mm/slice.c              | 60 ++++++++++++++++++++++++------------
> 6 files changed, 83 insertions(+), 40 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 8da5d4c1cab2..d0384f9db9eb 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -342,6 +342,20 @@ typedef struct page *pgtable_t;
> #endif
> #endif
>
> +#ifdef CONFIG_PPC_MM_SLICES
> +struct mm_struct;
> +
> +unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
> +				      unsigned long flags, unsigned int psize,
> +				      int topdown);
> +
> +unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
> +
> +void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
> +void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
> +			   unsigned long len, unsigned int psize);
> +#endif
> +

Should we add a slice.h, the way we have for other headers? We could then
have arch/powerpc/include/asm/book3s/64/slice.h carry

#define slice_bitmap_zero(dst, nbits) \
	do { if (nbits) bitmap_zero(dst, nbits); } while (0)
#define slice_bitmap_set(dst, pos, nbits) \
	do { if (nbits) bitmap_set(dst, pos, nbits); } while (0)
#define slice_bitmap_copy(dst, src, nbits) \
	do { if (nbits) bitmap_copy(dst, src, nbits); } while (0)
#define slice_bitmap_and(dst, src1, src2, nbits) \
	({ (nbits) ? bitmap_and(dst, src1, src2, nbits) : 0; })
#define slice_bitmap_or(dst, src1, src2, nbits) \
	do { if (nbits) bitmap_or(dst, src1, src2, nbits); } while (0)
#define slice_bitmap_andnot(dst, src1, src2, nbits) \
	({ (nbits) ? bitmap_andnot(dst, src1, src2, nbits) : 0; })
#define slice_bitmap_equal(src1, src2, nbits) \
	({ (nbits) ? bitmap_equal(src1, src2, nbits) : 1; })
#define slice_bitmap_empty(src, nbits) \
	({ (nbits) ? bitmap_empty(src, nbits) : 1; })

but without the if (nbits) checks, and written as proper static inlines so
that we get type checking. Each variant header would also carry its related
definitions; for 32-bit:

#define SLICE_LOW_SHIFT		28
#define SLICE_HIGH_SHIFT	0
#define SLICE_LOW_TOP		(0xfffffffful)
#define SLICE_NUM_LOW		((SLICE_LOW_TOP >> SLICE_LOW_SHIFT) + 1)
#define SLICE_NUM_HIGH		0ul

Common stuff between 64-bit and 32-bit could go to
arch/powerpc/include/asm/slice.h. That also gives an indication of which
32-bit variant we are looking at here. IIUC, 8xx would go to
arch/powerpc/include/asm/nohash/32/slice.h?

> #include
> #endif /* __ASSEMBLY__ */
>
> diff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/page_32.h
> index 5c378e9b78c8..f7d1bd1183c8 100644
> --- a/arch/powerpc/include/asm/page_32.h
> +++ b/arch/powerpc/include/asm/page_32.h
> @@ -60,4 +60,23 @@ extern void copy_page(void *to, void *from);
>
> #endif /* __ASSEMBLY__ */
>
> +#ifdef CONFIG_PPC_MM_SLICES
> +
> +#define SLICE_LOW_SHIFT		28
> +#define SLICE_HIGH_SHIFT	0
> +
> +#define SLICE_LOW_TOP		(0xfffffffful)
> +#define SLICE_NUM_LOW		((SLICE_LOW_TOP >> SLICE_LOW_SHIFT) + 1)
> +#define SLICE_NUM_HIGH		0ul
> +
> +#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
> +#define GET_HIGH_SLICE_INDEX(addr)	(addr & 0)
> +
> +#ifdef CONFIG_HUGETLB_PAGE
> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
> +#endif
> +#define HAVE_ARCH_UNMAPPED_AREA
> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
> +
> +#endif
> #endif /* _ASM_POWERPC_PAGE_32_H */
> diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
> index 56234c6fcd61..a7baef5bbe5f 100644
> --- a/arch/powerpc/include/asm/page_64.h
> +++ b/arch/powerpc/include/asm/page_64.h
> @@ -91,30 +91,13 @@ extern u64 ppc64_pft_size;
> #define SLICE_LOW_SHIFT		28
> #define SLICE_HIGH_SHIFT	40
>
> -#define SLICE_LOW_TOP		(0x100000000ul)
> -#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
> +#define SLICE_LOW_TOP		(0xfffffffful)
> +#define SLICE_NUM_LOW		((SLICE_LOW_TOP >> SLICE_LOW_SHIFT) + 1)
> #define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
>
> #define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
> #define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
>
> -#ifndef __ASSEMBLY__
> -struct mm_struct;
> -
> -extern unsigned long slice_get_unmapped_area(unsigned long addr,
> -					     unsigned long len,
> -					     unsigned long flags,
> -					     unsigned int psize,
> -					     int topdown);
> -
> -extern unsigned int get_slice_psize(struct mm_struct *mm,
> -				    unsigned long addr);
> -
> -extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
> -extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
> -				  unsigned long len, unsigned int psize);
> -
> -#endif /* __ASSEMBLY__ */
> #else
> #define slice_init()
> #ifdef CONFIG_PPC_BOOK3S_64
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index 655a5a9a183d..3266b3326088 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -1101,7 +1101,7 @@ static unsigned int get_paca_psize(unsigned long addr)
> 	unsigned char *hpsizes;
> 	unsigned long index, mask_index;
>
> -	if (addr < SLICE_LOW_TOP) {
> +	if (addr <= SLICE_LOW_TOP) {
> 		lpsizes = get_paca()->mm_ctx_low_slices_psize;
> 		index = GET_LOW_SLICE_INDEX(addr);
> 		return (lpsizes >> (index * 4)) & 0xF;
> diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
> index 4554d6527682..42e02f5b6660 100644
> --- a/arch/powerpc/mm/mmu_context_nohash.c
> +++ b/arch/powerpc/mm/mmu_context_nohash.c
> @@ -331,6 +331,13 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
> {
> 	pr_hard("initing context for mm @%p\n", mm);
>
> +#ifdef CONFIG_PPC_MM_SLICES
> +	if (!mm->context.slb_addr_limit)
> +		mm->context.slb_addr_limit = DEFAULT_MAP_WINDOW;
> +	if (!mm->context.id)
> +		slice_set_user_psize(mm, mmu_virtual_psize);
> +#endif
> +
> 	mm->context.id = MMU_NO_CONTEXT;
> 	mm->context.active = 0;
> 	return 0;
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 23ec2c5e3b78..3f35a93afe13 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -67,16 +67,33 @@ static void slice_print_mask(const char *label, struct slice_mask mask) {}
>
> #endif
>
> +#define slice_bitmap_zero(dst, nbits) \
> +	do { if (nbits) bitmap_zero(dst, nbits); } while (0)
> +#define slice_bitmap_set(dst, pos, nbits) \
> +	do { if (nbits) bitmap_set(dst, pos, nbits); } while (0)
> +#define slice_bitmap_copy(dst, src, nbits) \
> +	do { if (nbits) bitmap_copy(dst, src, nbits); } while (0)
> +#define slice_bitmap_and(dst, src1, src2, nbits) \
> +	({ (nbits) ? bitmap_and(dst, src1, src2, nbits) : 0; })
> +#define slice_bitmap_or(dst, src1, src2, nbits) \
> +	do { if (nbits) bitmap_or(dst, src1, src2, nbits); } while (0)
> +#define slice_bitmap_andnot(dst, src1, src2, nbits) \
> +	({ (nbits) ? bitmap_andnot(dst, src1, src2, nbits) : 0; })
> +#define slice_bitmap_equal(src1, src2, nbits) \
> +	({ (nbits) ? bitmap_equal(src1, src2, nbits) : 1; })
> +#define slice_bitmap_empty(src, nbits) \
> +	({ (nbits) ? bitmap_empty(src, nbits) : 1; })
> +
> static void slice_range_to_mask(unsigned long start, unsigned long len,
> 				struct slice_mask *ret)
> {
> 	unsigned long end = start + len - 1;
>
> 	ret->low_slices = 0;
> -	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
>
> -	if (start < SLICE_LOW_TOP) {
> -		unsigned long mend = min(end, (SLICE_LOW_TOP - 1));
> +	if (start <= SLICE_LOW_TOP) {
> +		unsigned long mend = min(end, SLICE_LOW_TOP);
>
> 		ret->low_slices = (1u << (GET_LOW_SLICE_INDEX(mend) + 1))
> 			- (1u << GET_LOW_SLICE_INDEX(start));
> @@ -87,7 +104,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
> 		unsigned long align_end = ALIGN(end, (1UL << SLICE_HIGH_SHIFT));
> 		unsigned long count = GET_HIGH_SLICE_INDEX(align_end) - start_index;
>
> -		bitmap_set(ret->high_slices, start_index, count);
> +		slice_bitmap_set(ret->high_slices, start_index, count);
> 	}
> }
>
> @@ -117,7 +134,7 @@ static int slice_high_has_vma(struct mm_struct *mm, unsigned long slice)
> 	 * of the high or low area bitmaps, the first high area starts
> 	 * at 4GB, not 0 */
> 	if (start == 0)
> -		start = SLICE_LOW_TOP;
> +		start = SLICE_LOW_TOP + 1;
>
> 	return !slice_area_is_free(mm, start, end - start);
> }
> @@ -128,7 +145,7 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
> 	unsigned long i;
>
> 	ret->low_slices = 0;
> -	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
>
> 	for (i = 0; i < SLICE_NUM_LOW; i++)
> 		if (!slice_low_has_vma(mm, i))
> @@ -151,7 +168,7 @@ static void slice_mask_for_size(struct mm_struct *mm, int psize, struct slice_ma
> 	u64 lpsizes;
>
> 	ret->low_slices = 0;
> -	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
>
> 	lpsizes = mm->context.low_slices_psize;
> 	for (i = 0; i < SLICE_NUM_LOW; i++)
> @@ -180,11 +197,11 @@ static int slice_check_fit(struct mm_struct *mm,
> 	 */
> 	unsigned long slice_count = GET_HIGH_SLICE_INDEX(mm->context.slb_addr_limit);
>
> -	bitmap_and(result, mask.high_slices,
> -		   available.high_slices, slice_count);
> +	slice_bitmap_and(result, mask.high_slices,
> +			 available.high_slices, slice_count);
>
> 	return (mask.low_slices & available.low_slices) == mask.low_slices &&
> -		bitmap_equal(result, mask.high_slices, slice_count);
> +		slice_bitmap_equal(result, mask.high_slices, slice_count);
> }
>
> static void slice_flush_segments(void *parm)
> @@ -259,7 +276,7 @@ static bool slice_scan_available(unsigned long addr,
> 				 unsigned long *boundary_addr)
> {
> 	unsigned long slice;
> -	if (addr < SLICE_LOW_TOP) {
> +	if (addr <= SLICE_LOW_TOP) {
> 		slice = GET_LOW_SLICE_INDEX(addr);
> 		*boundary_addr = (slice + end) << SLICE_LOW_SHIFT;
> 		return !!(available.low_slices & (1u << slice));
> @@ -391,8 +408,9 @@ static inline void slice_or_mask(struct slice_mask *dst, struct slice_mask *src)
> 	DECLARE_BITMAP(result, SLICE_NUM_HIGH);
>
> 	dst->low_slices |= src->low_slices;
> -	bitmap_or(result, dst->high_slices, src->high_slices, SLICE_NUM_HIGH);
> -	bitmap_copy(dst->high_slices, result, SLICE_NUM_HIGH);
> +	slice_bitmap_or(result, dst->high_slices, src->high_slices,
> +			SLICE_NUM_HIGH);
> +	slice_bitmap_copy(dst->high_slices, result, SLICE_NUM_HIGH);
> }
>
> static inline void slice_andnot_mask(struct slice_mask *dst, struct slice_mask *src)
> @@ -401,8 +419,9 @@ static inline void slice_andnot_mask(struct slice_mask *dst, struct slice_mask *
>
> 	dst->low_slices &= ~src->low_slices;
>
> -	bitmap_andnot(result, dst->high_slices, src->high_slices, SLICE_NUM_HIGH);
> -	bitmap_copy(dst->high_slices, result, SLICE_NUM_HIGH);
> +	slice_bitmap_andnot(result, dst->high_slices, src->high_slices,
> +			    SLICE_NUM_HIGH);
> +	slice_bitmap_copy(dst->high_slices, result, SLICE_NUM_HIGH);
> }
>
> #ifdef CONFIG_PPC_64K_PAGES
> @@ -450,14 +469,14 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
> 	 * init different masks
> 	 */
> 	mask.low_slices = 0;
> -	bitmap_zero(mask.high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_zero(mask.high_slices, SLICE_NUM_HIGH);
>
> 	/* silence stupid warning */;
> 	potential_mask.low_slices = 0;
> -	bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
>
> 	compat_mask.low_slices = 0;
> -	bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
> +	slice_bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
>
> 	/* Sanity checks */
> 	BUG_ON(mm->task_size == 0);
> @@ -595,7 +614,8 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
> convert:
> 	slice_andnot_mask(&mask, &good_mask);
> 	slice_andnot_mask(&mask, &compat_mask);
> -	if (mask.low_slices || !bitmap_empty(mask.high_slices, SLICE_NUM_HIGH)) {
> +	if (mask.low_slices ||
> +	    !slice_bitmap_empty(mask.high_slices, SLICE_NUM_HIGH)) {
> 		slice_convert(mm, mask, psize);
> 		if (psize > MMU_PAGE_BASE)
> 			on_each_cpu(slice_flush_segments, mm, 1);
> @@ -640,7 +660,7 @@ unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
> 		return MMU_PAGE_4K;
> #endif
> 	}
> -	if (addr < SLICE_LOW_TOP) {
> +	if (addr <= SLICE_LOW_TOP) {
> 		u64 lpsizes;
> 		lpsizes = mm->context.low_slices_psize;
> 		index = GET_LOW_SLICE_INDEX(addr);
> --
> 2.13.3