From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Nicholas Piggin, "David S. Miller", Michael Ellerman, Sasha Levin,
	sparclinux@vger.kernel.org
Subject: [PATCH AUTOSEL 5.4 04/80] sparc64: remove mm_cpumask clearing to fix kthread_use_mm race
Date: Mon, 26 Oct 2020 19:54:00 -0400
Message-Id: <20201026235516.1025100-4-sashal@kernel.org>
In-Reply-To: <20201026235516.1025100-1-sashal@kernel.org>
References: <20201026235516.1025100-1-sashal@kernel.org>

From: Nicholas Piggin

[ Upstream commit bafb056ce27940c9994ea905336aa8f27b4f7275 ]

The de facto (and apparently uncommented) standard for using an mm had,
thanks to this code in sparc if nothing else, been that you must have a
reference on mm_users *and that reference must have been obtained with
mmget()*, i.e., from a thread with a reference to mm_users that had used
the mm.

The introduction of mmget_not_zero() in commit d2005e3f41d4
("userfaultfd: don't pin the user memory in userfaultfd_file_create()")
allowed mm_count holders to operate on user mappings asynchronously from
the actual threads using the mm, but they were not to load those mappings
into their TLB (i.e., walking vmas and page tables is okay,
kthread_use_mm() is not).

io_uring 2b188cc1bb857 ("Add io_uring IO interface") added code which
does a kthread_use_mm() from a mmget_not_zero() refcount.

The problem with this is that code which previously assumed that
mm == current->mm and mm->mm_users == 1 imply the mm will remain
single-threaded (at least until this thread creates another mm_users
reference) is now broken.

arch/sparc/kernel/smp_64.c:

	if (atomic_read(&mm->mm_users) == 1) {
		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
		goto local_flush_and_out;
	}

vs fs/io_uring.c

	if (unlikely(!(ctx->flags & IORING_SETUP_SQPOLL) ||
		     !mmget_not_zero(ctx->sqo_mm)))
		return -EFAULT;
	kthread_use_mm(ctx->sqo_mm);

mmget_not_zero() could come in right after the mm_users == 1 test, then
kthread_use_mm() which sets its CPU in the mm_cpumask. That update could
be lost if cpumask_copy() occurs afterward.

I propose we fix this by allowing mmget_not_zero() to be a first-class
reference, and not have this obscure undocumented and unchecked
restriction.

The basic fix for sparc64 is to remove its mm_cpumask clearing code. The
optimisation could be effectively restored by sending IPIs to mm_cpumask
members and having them remove themselves from mm_cpumask. This is more
tricky so I leave it as an exercise for someone with a sparc64 SMP.
powerpc has a (currently similarly broken) example.

Signed-off-by: Nicholas Piggin
Acked-by: David S. Miller
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20200914045219.3736466-4-npiggin@gmail.com
Signed-off-by: Sasha Levin
---
 arch/sparc/kernel/smp_64.c | 65 ++++++++------------------------
 1 file changed, 14 insertions(+), 51 deletions(-)

diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index a8275fea4b70c..aa81c25b44cf3 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1039,38 +1039,9 @@ void smp_fetch_global_pmu(void)
  *	are flush_tlb_*() routines, and these run after flush_cache_*()
  *	which performs the flushw.
  *
- * The SMP TLB coherency scheme we use works as follows:
- *
- * 1) mm->cpu_vm_mask is a bit mask of which cpus an address
- *    space has (potentially) executed on, this is the heuristic
- *    we use to avoid doing cross calls.
- *
- *    Also, for flushing from kswapd and also for clones, we
- *    use cpu_vm_mask as the list of cpus to make run the TLB.
- *
- * 2) TLB context numbers are shared globally across all processors
- *    in the system, this allows us to play several games to avoid
- *    cross calls.
- *
- *    One invariant is that when a cpu switches to a process, and
- *    that processes tsk->active_mm->cpu_vm_mask does not have the
- *    current cpu's bit set, that tlb context is flushed locally.
- *
- *    If the address space is non-shared (ie. mm->count == 1) we avoid
- *    cross calls when we want to flush the currently running process's
- *    tlb state.  This is done by clearing all cpu bits except the current
- *    processor's in current->mm->cpu_vm_mask and performing the
- *    flush locally only.  This will force any subsequent cpus which run
- *    this task to flush the context from the local tlb if the process
- *    migrates to another cpu (again).
- *
- * 3) For shared address spaces (threads) and swapping we bite the
- *    bullet for most cases and perform the cross call (but only to
- *    the cpus listed in cpu_vm_mask).
- *
- *    The performance gain from "optimizing" away the cross call for threads is
- *    questionable (in theory the big win for threads is the massive sharing of
- *    address space state across processors).
+ * mm->cpu_vm_mask is a bit mask of which cpus an address
+ * space has (potentially) executed on, this is the heuristic
+ * we use to limit cross calls.
  */
 
 /* This currently is only used by the hugetlb arch pre-fault
@@ -1080,18 +1051,13 @@ void smp_fetch_global_pmu(void)
 void smp_flush_tlb_mm(struct mm_struct *mm)
 {
 	u32 ctx = CTX_HWBITS(mm->context);
-	int cpu = get_cpu();
 
-	if (atomic_read(&mm->mm_users) == 1) {
-		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
-		goto local_flush_and_out;
-	}
+	get_cpu();
 
 	smp_cross_call_masked(&xcall_flush_tlb_mm,
 			      ctx, 0, 0,
 			      mm_cpumask(mm));
 
-local_flush_and_out:
 	__flush_tlb_mm(ctx, SECONDARY_CONTEXT);
 
 	put_cpu();
@@ -1114,17 +1080,15 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 {
 	u32 ctx = CTX_HWBITS(mm->context);
 	struct tlb_pending_info info;
-	int cpu = get_cpu();
+
+	get_cpu();
 
 	info.ctx = ctx;
 	info.nr = nr;
 	info.vaddrs = vaddrs;
 
-	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
-		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
-	else
-		smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
-				       &info, 1);
+	smp_call_function_many(mm_cpumask(mm), tlb_pending_func,
+			       &info, 1);
 
 	__flush_tlb_pending(ctx, nr, vaddrs);
 
@@ -1134,14 +1098,13 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
 void smp_flush_tlb_page(struct mm_struct *mm, unsigned long vaddr)
 {
 	unsigned long context = CTX_HWBITS(mm->context);
-	int cpu = get_cpu();
 
-	if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
-		cpumask_copy(mm_cpumask(mm), cpumask_of(cpu));
-	else
-		smp_cross_call_masked(&xcall_flush_tlb_page,
-				      context, vaddr, 0,
-				      mm_cpumask(mm));
+	get_cpu();
+
+	smp_cross_call_masked(&xcall_flush_tlb_page,
+			      context, vaddr, 0,
+			      mm_cpumask(mm));
+
 	__flush_tlb_page(context, vaddr);
 
 	put_cpu();
-- 
2.25.1
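
To make the race window concrete, here is a minimal user-space model of the
lost update described in the changelog. It is only a sketch: fake_mm,
flush_thread() and kthread_use_mm_thread() are made-up stand-ins for the
kernel's mm_struct, smp_flush_tlb_mm() and the io_uring SQPOLL kthread, and
the C11 atomics model mm_users and mm_cpumask.

/*
 * Illustrative model only, not kernel code.
 *
 * Thread A plays the sparc64 flush path: it tests mm_users == 1 and, if so,
 * overwrites the cpumask with only its own bit (cpumask_copy()).  Thread B
 * plays io_uring's SQPOLL kthread: it takes an mmget_not_zero()-style
 * reference and then sets its own bit, as kthread_use_mm() would.  If A's
 * test happens before B's reference but A's copy lands after B's bit-set,
 * B's bit is lost.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct fake_mm {
	atomic_int  mm_users;	/* models mm->mm_users */
	atomic_uint cpumask;	/* models mm_cpumask(mm), one bit per CPU */
};

static struct fake_mm mm = { .mm_users = 1, .cpumask = 1u << 0 };

static void *flush_thread(void *arg)		/* "CPU 0", smp_flush_tlb_mm() */
{
	if (atomic_load(&mm.mm_users) == 1) {
		/* window: thread B can run between the test and the copy */
		atomic_store(&mm.cpumask, 1u << 0);	/* cpumask_copy() */
	}
	return NULL;
}

static void *kthread_use_mm_thread(void *arg)	/* "CPU 1", io_uring SQPOLL */
{
	atomic_fetch_add(&mm.mm_users, 1);	/* mmget_not_zero() */
	atomic_fetch_or(&mm.cpumask, 1u << 1);	/* kthread_use_mm() sets CPU 1 */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, flush_thread, NULL);
	pthread_create(&b, NULL, kthread_use_mm_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* If the copy overwrote CPU 1's bit, later flushes would skip CPU 1. */
	if (!(atomic_load(&mm.cpumask) & (1u << 1)))
		printf("lost update: CPU 1 missing from cpumask (stale TLB)\n");
	else
		printf("cpumask intact: 0x%x\n", atomic_load(&mm.cpumask));
	return 0;
}

Build with cc -pthread. Whether a given run hits the window is
timing-dependent, but when thread A's copy lands after thread B's bit-set the
CPU 1 bit disappears, which is exactly the stale-TLB hazard the patch removes
by dropping the cpumask_copy() paths.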