From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Christophe Leroy, Nicholas Piggin, Michael Ellerman
Subject: [PATCH 4.16 210/272] powerpc/mm/slice: Enhance for supporting PPC32
Date: Mon, 28 May 2018 12:04:03 +0200
Message-Id: <20180528100258.147686815@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180528100240.256525891@linuxfoundation.org>
References: <20180528100240.256525891@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

4.16-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Christophe Leroy

commit db3a528db41caaa6dfd4c64e9f5efb1c81a80467 upstream.

In preparation for the following patch, which will fix an issue on the 8xx
by re-using the 'slices', this patch enhances the 'slices' implementation
to support 32-bit CPUs.

On PPC32, the address space is limited to 4 Gbytes, hence only the low
slices will be used.

The high slices use bitmaps. As the bitmap functions are not prepared to
handle bitmaps of size 0, this patch ensures that the bitmap functions
are called only when SLICE_NUM_HIGH is not zero.

Signed-off-by: Christophe Leroy
Reviewed-by: Nicholas Piggin
Signed-off-by: Michael Ellerman
Signed-off-by: Greg Kroah-Hartman

---
 arch/powerpc/include/asm/nohash/32/slice.h |   18 ++++++++++++++
 arch/powerpc/include/asm/slice.h           |    4 ++-
 arch/powerpc/mm/slice.c                    |   37 ++++++++++++++++++++++-------
 3 files changed, 50 insertions(+), 9 deletions(-)

--- /dev/null
+++ b/arch/powerpc/include/asm/nohash/32/slice.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_NOHASH_32_SLICE_H
+#define _ASM_POWERPC_NOHASH_32_SLICE_H
+
+#ifdef CONFIG_PPC_MM_SLICES
+
+#define SLICE_LOW_SHIFT		28
+#define SLICE_LOW_TOP		(0x100000000ull)
+#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
+#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
+
+#define SLICE_HIGH_SHIFT	0
+#define SLICE_NUM_HIGH		0ul
+#define GET_HIGH_SLICE_INDEX(addr)	(addr & 0)
+
+#endif /* CONFIG_PPC_MM_SLICES */
+
+#endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
--- a/arch/powerpc/include/asm/slice.h
+++ b/arch/powerpc/include/asm/slice.h
@@ -4,8 +4,10 @@
 
 #ifdef CONFIG_PPC_BOOK3S_64
 #include <asm/book3s/64/slice.h>
-#else
+#elif defined(CONFIG_PPC64)
 #include <asm/nohash/64/slice.h>
+#elif defined(CONFIG_PPC_MMU_NOHASH)
+#include <asm/nohash/32/slice.h>
 #endif
 
 #ifdef CONFIG_PPC_MM_SLICES
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -73,10 +73,12 @@ static void slice_range_to_mask(unsigned
 	unsigned long end = start + len - 1;
 
 	ret->low_slices = 0;
-	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+	if (SLICE_NUM_HIGH)
+		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
 
 	if (start < SLICE_LOW_TOP) {
-		unsigned long mend = min(end, (SLICE_LOW_TOP - 1));
+		unsigned long mend = min(end,
+					 (unsigned long)(SLICE_LOW_TOP - 1));
 
 		ret->low_slices = (1u << (GET_LOW_SLICE_INDEX(mend) + 1))
 			- (1u << GET_LOW_SLICE_INDEX(start));
@@ -113,11 +115,13 @@ static int slice_high_has_vma(struct mm_
 	unsigned long start = slice << SLICE_HIGH_SHIFT;
 	unsigned long end = start + (1ul << SLICE_HIGH_SHIFT);
 
+#ifdef CONFIG_PPC64
 	/* Hack, so that each addresses is controlled by exactly one
 	 * of the high or low area bitmaps, the first high area starts
 	 * at 4GB, not 0 */
 	if (start == 0)
 		start = SLICE_LOW_TOP;
+#endif
 
 	return !slice_area_is_free(mm, start, end - start);
 }
@@ -128,7 +132,8 @@ static void slice_mask_for_free(struct m
 	unsigned long i;
 
 	ret->low_slices = 0;
-	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+	if (SLICE_NUM_HIGH)
+		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
 
 	for (i = 0; i < SLICE_NUM_LOW; i++)
 		if (!slice_low_has_vma(mm, i))
@@ -151,7 +156,8 @@ static void slice_mask_for_size(struct m
 	u64 lpsizes;
 
 	ret->low_slices = 0;
-	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+	if (SLICE_NUM_HIGH)
+		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
 
 	lpsizes = mm->context.low_slices_psize;
 	for (i = 0; i < SLICE_NUM_LOW; i++)
@@ -180,6 +186,10 @@ static int slice_check_fit(struct mm_str
 	 */
 	unsigned long slice_count = GET_HIGH_SLICE_INDEX(mm->context.slb_addr_limit);
 
+	if (!SLICE_NUM_HIGH)
+		return (mask.low_slices & available.low_slices) ==
+		       mask.low_slices;
+
 	bitmap_and(result, mask.high_slices,
 		   available.high_slices, slice_count);
 
@@ -189,6 +199,7 @@ static int slice_check_fit(struct mm_str
 
 static void slice_flush_segments(void *parm)
 {
+#ifdef CONFIG_PPC64
 	struct mm_struct *mm = parm;
 	unsigned long flags;
 
@@ -200,6 +211,7 @@ static void slice_flush_segments(void *p
 	local_irq_save(flags);
 	slb_flush_and_rebolt();
 	local_irq_restore(flags);
+#endif
 }
 
 static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psize)
@@ -389,6 +401,8 @@ static unsigned long slice_find_area(str
 static inline void slice_or_mask(struct slice_mask *dst, struct slice_mask *src)
 {
 	dst->low_slices |= src->low_slices;
+	if (!SLICE_NUM_HIGH)
+		return;
 	bitmap_or(dst->high_slices, dst->high_slices, src->high_slices,
 		  SLICE_NUM_HIGH);
 }
@@ -397,6 +411,8 @@ static inline void slice_andnot_mask(str
 {
 	dst->low_slices &= ~src->low_slices;
 
+	if (!SLICE_NUM_HIGH)
+		return;
 	bitmap_andnot(dst->high_slices, dst->high_slices, src->high_slices,
 		      SLICE_NUM_HIGH);
 }
@@ -446,14 +462,17 @@ unsigned long slice_get_unmapped_area(un
 	 * init different masks
 	 */
 	mask.low_slices = 0;
-	bitmap_zero(mask.high_slices, SLICE_NUM_HIGH);
 
 	/* silence stupid warning */;
 	potential_mask.low_slices = 0;
-	bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
 
 	compat_mask.low_slices = 0;
-	bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
+
+	if (SLICE_NUM_HIGH) {
+		bitmap_zero(mask.high_slices, SLICE_NUM_HIGH);
+		bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
+		bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
+	}
 
 	/* Sanity checks */
 	BUG_ON(mm->task_size == 0);
@@ -591,7 +610,9 @@ unsigned long slice_get_unmapped_area(un
  convert:
 	slice_andnot_mask(&mask, &good_mask);
 	slice_andnot_mask(&mask, &compat_mask);
-	if (mask.low_slices || !bitmap_empty(mask.high_slices, SLICE_NUM_HIGH)) {
+	if (mask.low_slices ||
+	    (SLICE_NUM_HIGH &&
+	     !bitmap_empty(mask.high_slices, SLICE_NUM_HIGH))) {
 		slice_convert(mm, mask, psize);
 		if (psize > MMU_PAGE_BASE)
 			on_each_cpu(slice_flush_segments, mm, 1);
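
For context on the layout the patch relies on: with SLICE_LOW_SHIFT set to 28, each low
slice covers 256 MB, so the 4 GB PPC32 address space maps to exactly 16 low slices
(0x100000000 >> 28 == 16), and SLICE_NUM_HIGH becomes the compile-time constant 0ul. The
standalone sketch below is illustrative only and is not part of the patch; demo_mask,
demo_mask_zero, DEMO_NUM_HIGH and the memset() standing in for bitmap_zero() are all
made-up names. It shows why guarding the bitmap helpers behind such a constant is enough:
in the 32-bit configuration the branch is compiled out, so a zero-size bitmap is never
handed to them.

/* Illustrative only, not kernel code: the SLICE_NUM_HIGH-style guard with
 * made-up names, using memset() as a stand-in for bitmap_zero().
 */
#include <stdio.h>
#include <string.h>

#define DEMO_NUM_HIGH 0ul	/* stand-in for SLICE_NUM_HIGH on PPC32 */
#define DEMO_BITS_TO_LONGS(n) (((n) + 8 * sizeof(long) - 1) / (8 * sizeof(long)))

struct demo_mask {
	unsigned short low_slices;
	/* Keep at least one word so the array stays valid even when the
	 * high-slice count is 0. */
	unsigned long high_slices[DEMO_NUM_HIGH ? DEMO_BITS_TO_LONGS(DEMO_NUM_HIGH) : 1];
};

static void demo_mask_zero(struct demo_mask *m)
{
	m->low_slices = 0;
	if (DEMO_NUM_HIGH)	/* constant 0 here, so the call below is compiled out */
		memset(m->high_slices, 0,
		       DEMO_BITS_TO_LONGS(DEMO_NUM_HIGH) * sizeof(long));
}

int main(void)
{
	struct demo_mask m = { .low_slices = 0xff };

	demo_mask_zero(&m);
	printf("low_slices after zeroing: %#x\n", m.low_slices);
	return 0;
}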