From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@fb.com, peterz@infradead.org, luto@kernel.org,
    x86@kernel.org, vkuznets@redhat.com, mingo@kernel.org, efault@gmx.de,
    dave.hansen@intel.com, will.deacon@arm.com, catalin.marinas@arm.com,
    benh@kernel.crashing.org, Rik van Riel <riel@surriel.com>
Subject: [PATCH 10/10] mm,sched: conditionally skip lazy TLB mm refcounting
Date: Sat, 28 Jul 2018 17:53:57 -0400
Message-Id: <20180728215357.3249-11-riel@surriel.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180728215357.3249-1-riel@surriel.com>
References: <20180728215357.3249-1-riel@surriel.com>

Conditionally skip lazy TLB mm refcounting.

When an architecture has CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING enabled, an
mm that is used in lazy TLB mode anywhere will get shot down from
exit_mmap, and there is no need to incur the cache line bouncing overhead
of refcounting a lazy TLB mm.

Implement this by moving the refcounting of a lazy TLB mm to helper
functions, which skip the refcounting when it is not necessary.

Deal with use_mm and unuse_mm by fully splitting out the refcounting of
the lazy TLB mm a kernel thread may have when entering use_mm from the
refcounting of the mm that use_mm is about to start using.

Signed-off-by: Rik van Riel <riel@surriel.com>
---
 fs/exec.c                |  2 +-
 include/linux/sched/mm.h | 25 +++++++++++++++++++++++++
 kernel/sched/core.c      |  6 +++---
 mm/mmu_context.c         | 21 ++++++++++++++-------
 4 files changed, 43 insertions(+), 11 deletions(-)
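
The helpers added below hinge on IS_ENABLED() expanding to a compile-time
constant 0 or 1, so when CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING is set the
refcounting is eliminated as dead code, not merely branched around at run
time. For reference, a simplified sketch of the include/linux/kconfig.h
machinery behind that (simplified: the real IS_ENABLED() also accounts for
the CONFIG_FOO_MODULE tristate case):

#define __ARG_PLACEHOLDER_1                      0,
#define __take_second_arg(__ignored, val, ...)   val

/*
 * If CONFIG_FOO is #defined to 1 by kconfig (option enabled), the token
 * paste below expands through __ARG_PLACEHOLDER_1 and the "1" becomes the
 * second argument; if CONFIG_FOO is not defined at all, nothing expands
 * and the trailing "0" is picked instead.
 */
#define __is_defined(x)                  ___is_defined(x)
#define ___is_defined(val)               ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk)     __take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option)               __is_defined(option)

With the option enabled, grab_lazy_mm() and drop_lazy_mm() therefore
compile down to empty inline functions and the mm_count cache line is
never touched on the lazy TLB path.
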
diff --git a/fs/exec.c b/fs/exec.c
index bdd0eacefdf5..7a6d4811b02b 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1043,7 +1043,7 @@ static int exec_mmap(struct mm_struct *mm)
 		mmput(old_mm);
 		return 0;
 	}
-	mmdrop(active_mm);
+	drop_lazy_mm(active_mm);
 	return 0;
 }
 
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 44d356f5e47c..7308bf38012f 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -49,6 +49,31 @@ static inline void mmdrop(struct mm_struct *mm)
 		__mmdrop(mm);
 }
 
+/*
+ * In lazy TLB mode, a CPU keeps the mm of the last process mapped while
+ * running a kernel thread or idle; we must make sure the lazy TLB mm and
+ * page tables do not disappear while a lazy TLB mode CPU uses them.
+ * There are two ways to handle the race between lazy TLB CPUs and exit_mmap:
+ * 1) Have a lazy TLB CPU hold a refcount on the lazy TLB mm.
+ * 2) Have the architecture code shoot down the lazy TLB mm from exit_mmap;
+ *    in that case, refcounting can be skipped, reducing cache line bouncing.
+ */
+static inline void grab_lazy_mm(struct mm_struct *mm)
+{
+	if (IS_ENABLED(CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING))
+		return;
+
+	mmgrab(mm);
+}
+
+static inline void drop_lazy_mm(struct mm_struct *mm)
+{
+	if (IS_ENABLED(CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING))
+		return;
+
+	mmdrop(mm);
+}
+
 /**
  * mmget() - Pin the address space associated with a &struct mm_struct.
  * @mm: The address space to pin.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c45de46fdf10..11724c9e88b0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2691,7 +2691,7 @@ static struct rq *finish_task_switch(struct task_struct *prev)
 	 */
 	if (mm) {
 		membarrier_mm_sync_core_before_usermode(mm);
-		mmdrop(mm);
+		drop_lazy_mm(mm);
 	}
 	if (unlikely(prev_state == TASK_DEAD)) {
 		if (prev->sched_class->task_dead)
@@ -2805,7 +2805,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 	 */
 	if (!mm) {
 		next->active_mm = oldmm;
-		mmgrab(oldmm);
+		grab_lazy_mm(oldmm);
 		enter_lazy_tlb(oldmm, next);
 	} else
 		switch_mm_irqs_off(oldmm, mm, next);
@@ -5532,7 +5532,7 @@ void idle_task_exit(void)
 		current->active_mm = &init_mm;
 		finish_arch_post_lock_switch();
 	}
-	mmdrop(mm);
+	drop_lazy_mm(mm);
 }
 
 /*
diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 3e612ae748e9..d5c2524cdd9a 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -24,12 +24,15 @@ void use_mm(struct mm_struct *mm)
 	struct mm_struct *active_mm;
 	struct task_struct *tsk = current;
 
+	/* Kernel threads have a NULL tsk->mm when entering. */
+	WARN_ON(tsk->mm);
+
 	task_lock(tsk);
+	/* Previous ->active_mm was held in lazy TLB mode. */
 	active_mm = tsk->active_mm;
-	if (active_mm != mm) {
-		mmgrab(mm);
-		tsk->active_mm = mm;
-	}
+	/* Grab mm for reals; tsk->mm needs to stick around until unuse_mm. */
+	mmgrab(mm);
+	tsk->active_mm = mm;
 	tsk->mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
@@ -37,8 +40,9 @@ void use_mm(struct mm_struct *mm)
 	finish_arch_post_lock_switch();
 #endif
 
-	if (active_mm != mm)
-		mmdrop(active_mm);
+	/* Drop the lazy TLB mode mm. */
+	if (active_mm)
+		drop_lazy_mm(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);
 
@@ -57,8 +61,11 @@ void unuse_mm(struct mm_struct *mm)
 	task_lock(tsk);
 	sync_mm_rss(mm);
 	tsk->mm = NULL;
-	/* active_mm is still 'mm' */
+	/* active_mm is still 'mm'; grab it as a lazy TLB mm */
+	grab_lazy_mm(mm);
 	enter_lazy_tlb(mm, tsk);
+	/* drop the tsk->mm refcount */
+	mmdrop(mm);
 	task_unlock(tsk);
 }
 EXPORT_SYMBOL_GPL(unuse_mm);
-- 
2.14.4
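
One subtlety in the unuse_mm() hunk is worth spelling out: grab_lazy_mm()
runs before mmdrop(), so mm_count never transiently reaches zero while the
kernel thread's CPU still has the mm loaded as its active_mm. A userspace
analogue of that handoff, as an illustration only (the mock_* names and the
MOCK_NO_LAZY_REFCOUNTING toggle are invented for this sketch; build with
e.g. cc -std=c11):

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

/* Mimics CONFIG_ARCH_NO_ACTIVE_MM_REFCOUNTING; 0 = refcounting enabled. */
#define MOCK_NO_LAZY_REFCOUNTING 0

struct mock_mm {
	atomic_int mm_count;		/* stands in for mm_struct.mm_count */
};

static void mock_mmgrab(struct mock_mm *mm)
{
	atomic_fetch_add(&mm->mm_count, 1);
}

static void mock_mmdrop(struct mock_mm *mm)
{
	/* Last reference frees the mm, as __mmdrop() does in the kernel. */
	if (atomic_fetch_sub(&mm->mm_count, 1) == 1)
		printf("mm freed\n");
}

static void mock_grab_lazy_mm(struct mock_mm *mm)
{
	if (MOCK_NO_LAZY_REFCOUNTING)
		return;			/* exit_mmap() shoot-down covers this */
	mock_mmgrab(mm);
}

static void mock_drop_lazy_mm(struct mock_mm *mm)
{
	if (MOCK_NO_LAZY_REFCOUNTING)
		return;
	mock_mmdrop(mm);
}

int main(void)
{
	struct mock_mm mm = { .mm_count = 1 };	/* kthread's tsk->mm reference */

	/*
	 * unuse_mm() ordering: take the lazy TLB reference first, then
	 * drop the user reference. With refcounting enabled the count
	 * stays at 1 here; with the toggle set, the exit_mmap() shoot-down
	 * would provide the safety guarantee instead.
	 */
	mock_grab_lazy_mm(&mm);
	mock_mmdrop(&mm);
	assert(MOCK_NO_LAZY_REFCOUNTING || atomic_load(&mm.mm_count) == 1);

	/* The final context switch away drops the lazy reference. */
	mock_drop_lazy_mm(&mm);		/* prints "mm freed" */
	return 0;
}

Swapping the two calls in the mock lets the count hit zero at the
mmdrop(), the moral equivalent of freeing the page tables out from under
a CPU that is still using them; the kernel-side ordering avoids exactly
that.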