Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752999AbdFLReC (ORCPT );
	Mon, 12 Jun 2017 13:34:02 -0400
Received: from mail.linuxfoundation.org ([140.211.169.12]:39484 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752793AbdFLP1y (ORCPT );
	Mon, 12 Jun 2017 11:27:54 -0400
From: Greg Kroah-Hartman 
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman ,
	stable@vger.kernel.org,
	Pavel Tatashin ,
	Bob Picco ,
	Steven Sistare ,
	"David S. Miller" 
Subject: [PATCH 4.11 027/150] sparc64: add per-cpu mm of secondary contexts
Date: Mon, 12 Jun 2017 17:23:54 +0200
Message-Id: <20170612152520.604397789@linuxfoundation.org>
X-Mailer: git-send-email 2.13.1
In-Reply-To: <20170612152519.404936272@linuxfoundation.org>
References: <20170612152519.404936272@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2282
Lines: 62

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Pavel Tatashin 

[ Upstream commit 7a5b4bbf49fe86ce77488a70c5dccfe2d50d7a2d ]

The new wrap is going to use information from this array to figure out
mm's that currently have valid secondary contexts setup.

Signed-off-by: Pavel Tatashin 
Reviewed-by: Bob Picco 
Reviewed-by: Steven Sistare 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/sparc/include/asm/mmu_context_64.h |    5 +++--
 arch/sparc/mm/init_64.c                 |    1 +
 2 files changed, 4 insertions(+), 2 deletions(-)

--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -19,6 +19,7 @@ extern spinlock_t ctx_alloc_lock;
 extern unsigned long tlb_context_cache;
 extern unsigned long mmu_context_bmap[];
 
+DECLARE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm);
 void get_new_mmu_context(struct mm_struct *mm);
 #ifdef CONFIG_SMP
 void smp_new_mmu_context_version(void);
@@ -76,8 +77,9 @@ void __flush_tlb_mm(unsigned long, unsig
 static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk)
 {
 	unsigned long ctx_valid, flags;
-	int cpu;
+	int cpu = smp_processor_id();
 
+	per_cpu(per_cpu_secondary_mm, cpu) = mm;
 	if (unlikely(mm == &init_mm))
 		return;
 
@@ -123,7 +125,6 @@ static inline void switch_mm(struct mm_s
 	 * for the first time, we must flush that context out of the
 	 * local TLB.
 	 */
-	cpu = smp_processor_id();
 	if (!ctx_valid || !cpumask_test_cpu(cpu, mm_cpumask(mm))) {
 		cpumask_set_cpu(cpu, mm_cpumask(mm));
 		__flush_tlb_mm(CTX_HWBITS(mm->context),
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -711,6 +711,7 @@ unsigned long tlb_context_cache = CTX_FI
 #define MAX_CTX_NR	(1UL << CTX_NR_BITS)
 #define CTX_BMAP_SLOTS	BITS_TO_LONGS(MAX_CTX_NR)
 DECLARE_BITMAP(mmu_context_bmap, MAX_CTX_NR);
+DEFINE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm) = {0};
 
 /* Caller does TLB context flushing on local CPU if necessary.
  * The caller also ensures that CTX_VALID(mm->context) is false.
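
[ Note, not part of the patch: a rough sketch of how the new
  per_cpu_secondary_mm array could be consulted by the context-wrap
  code this change prepares for.  The helper name
  mmu_context_wrap_sketch() and its exact logic are illustrative
  assumptions, not the actual follow-up patch. ]

	#include <linux/cpumask.h>
	#include <linux/percpu.h>
	#include <linux/mm_types.h>

	/* Walk every CPU's currently loaded secondary mm.  Any mm found
	 * here still holds a valid secondary context on that CPU, so it
	 * must be given a fresh context ID when the context numbers wrap.
	 */
	static void mmu_context_wrap_sketch(void)
	{
		int cpu;

		for_each_online_cpu(cpu) {
			struct mm_struct *mm = per_cpu(per_cpu_secondary_mm, cpu);

			if (!mm || mm == &init_mm)
				continue;

			/* Hypothetical wrap handling: invalidate
			 * mm->context here so the next switch_mm()
			 * allocates a new context for this mm. */
		}
	}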