From: Andy Lutomirski
Date: Thu, 19 Jul 2018 10:04:09 -0700
Subject: Re: [PATCH 4/7] x86,tlb: make lazy TLB mode lazier
To: Rik van Riel, Peter Zijlstra, Vitaly Kuznetsov, Juergen Gross,
    Boris Ostrovsky, linux-arch, Will Deacon, Catalin Marinas,
    linux-s390@vger.kernel.org, Benjamin Herrenschmidt, linuxppc-dev
Cc: Andy Lutomirski, LKML, X86 ML, Mike Galbraith, kernel-team,
    Ingo Molnar, Dave Hansen
References: <20180716190337.26133-1-riel@surriel.com> <20180716190337.26133-5-riel@surriel.com>

On Thu, Jul 19, 2018 at 9:45 AM, Andy Lutomirski wrote:
> [I added PeterZ and Vitaly -- can you see any way in which this would
> break something obscure?  I don't.]
>
> On Thu, Jul 19, 2018 at 7:14 AM, Rik van Riel wrote:
>> I guess we can skip both switch_ldt and load_mm_cr4 if real_prev equals
>> next?
>
> Yes, AFAICS.
>
>> On to the lazy TLB mm_struct refcounting stuff :)
>>
>>> Which refcount? mm_users shouldn't be hot, so I assume you're talking
>>> about mm_count. My suggestion is to get rid of mm_count instead of
>>> trying to optimize it.
>>
>> Do you have any suggestions on how? :)
>>
>> The TLB shootdown sent at __exit_mm time does not get rid of the
>> kernelthread->active_mm pointer pointing at the mm that is exiting.
>
> Ah, but that's conceptually very easy to fix.  Add a #define like
> ARCH_NO_TASK_ACTIVE_MM.  Then just get rid of active_mm if that
> #define is set.  After some grepping, there are very few users.
> The only nontrivial ones are the ones in kernel/ and mm/mmu_context.c
> that are involved in the rather complicated dance of refcounting
> active_mm.  If that field goes away, it doesn't need to be refcounted.
> Instead, I think the refcounting can get replaced with something like:
>
> /*
>  * Release any arch-internal references to mm.  Only called when
>  * mm_users is zero and all tasks using mm have either been
>  * switch_mm()'d away or have had enter_lazy_tlb() called.
>  */
> extern void arch_shoot_down_dead_mm(struct mm_struct *mm);
>
> which the kernel calls in __mmput() after tearing down all the page
> tables.  The body can be something like:
>
> if (WARN_ON(cpumask_any_but(mm_cpumask(...), ...))) {
>     /* send an IPI.  Maybe just call tlb_flush_remove_tables() */
> }
>
> (You'll also have to fix up the highly questionable users in
> arch/x86/platform/efi/efi_64.c, but that's easy.)
>
> Does all that make sense?  Basically, as I understand it, the
> expensive atomic ops you're seeing are all pointless because they're
> enabling an optimization that hasn't actually worked for a long time,
> if ever.

Hmm.  Xen PV has a big hack in xen_exit_mmap(), which is called from
arch_exit_mmap(), I think.  It's a heavier-weight version of more or
less the same thing that arch_shoot_down_dead_mm() would be, except
that it happens before exit_mmap().  But maybe Xen actually has the
right idea.  In other words, rather than doing the big pagetable free
in exit_mmap() while there may still be other CPUs pointing at the
page tables, the other order might make more sense.  So maybe, if
ARCH_NO_TASK_ACTIVE_MM is set, arch_exit_mmap() should be responsible
for getting rid of all secret arch references to the mm.

Hmm.  ARCH_FREE_UNUSED_MM_IMMEDIATELY might be a better name.

I added some more arch maintainers.  The idea here is that, on x86 at
least, task->active_mm and all its refcounting is pure overhead.
When a process exits, __mmput() gets called, but the core kernel has
a longstanding "optimization" in which other tasks (kernel threads
and idle tasks) may have ->active_mm pointing at this mm.  This is
nasty, complicated, and hurts performance on large systems, since it
requires extra atomic operations whenever a CPU switches between real
user threads and idle/kernel threads.

It's also almost completely worthless on x86 at least, since __mmput()
frees pagetables, and that operation *already* forces a remote TLB
flush, so we might as well zap all the active_mm references at the
same time.

But arm64 has real HW remote flushes.  Does arm64 actually benefit
from the active_mm optimization?  What happens on arm64 when a process
exits?  How about s390?  I suspect that s390 has rather larger systems
than arm64, where the cost of the reference counting can be much
higher.

(Also, Rik, x86 on Hyper-V has remote flushes, too.  How does that
interact with your previous patch set?)