Message-ID: <3987274f49523b23971d0252141ae3f335d1f5ce.camel@kernel.crashing.org>
Subject: Re: [PATCH 4/7] x86,tlb: make lazy TLB mode lazier
From: Benjamin Herrenschmidt
To: Andy Lutomirski, Rik van Riel, Peter Zijlstra, Vitaly Kuznetsov,
    Juergen Gross, Boris Ostrovsky, linux-arch, Will Deacon,
    Catalin Marinas, linux-s390@vger.kernel.org, linuxppc-dev
Cc: LKML, X86 ML, Mike Galbraith, kernel-team, Ingo Molnar,
    Dave Hansen, Nick Piggin, "Aneesh Kumar K.V"
Date: Fri, 20 Jul 2018 14:57:40 +1000
References: <20180716190337.26133-1-riel@surriel.com>
    <20180716190337.26133-5-riel@surriel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 2018-07-19 at 10:04 -0700, Andy Lutomirski wrote:
> On Thu, Jul 19, 2018 at 9:45 AM, Andy Lutomirski wrote:
> > [I added PeterZ and Vitaly -- can you see any way in which this would
> > break something obscure?  I don't.]

Added Nick and Aneesh.

We do have HW remote flushes on powerpc.

> > On Thu, Jul 19, 2018 at 7:14 AM, Rik van Riel wrote:
> > > I guess we can skip both switch_ldt and load_mm_cr4 if real_prev equals
> > > next?
> >
> > Yes, AFAICS.
> >
> > > On to the lazy TLB mm_struct refcounting stuff :)
> > >
> > > > Which refcount?  mm_users shouldn't be hot, so I assume you're talking about
> > > > mm_count.  My suggestion is to get rid of mm_count instead of trying to
> > > > optimize it.
> > >
> > > Do you have any suggestions on how? :)
> > >
> > > The TLB shootdown sent at __exit_mm time does not get rid of the
> > > kernelthread->active_mm pointer pointing at the mm that is exiting.
> >
> > Ah, but that's conceptually very easy to fix.  Add a #define like
> > ARCH_NO_TASK_ACTIVE_MM.  Then just get rid of active_mm if that
> > #define is set.  After some grepping, there are very few users.  The
> > only nontrivial ones are the ones in kernel/ and mm/mmu_context.c that
> > are involved in the rather complicated dance of refcounting active_mm.
> > If that field goes away, it doesn't need to be refcounted.  Instead, I
> > think the refcounting can get replaced with something like:
> >
> > /*
> >  * Release any arch-internal references to mm.  Only called when mm_users
> >  * is zero and all tasks using mm have either been switch_mm()'d away or
> >  * have had enter_lazy_tlb() called.
> >  */
> > extern void arch_shoot_down_dead_mm(struct mm_struct *mm);
> >
> > which the kernel calls in __mmput() after tearing down all the page
> > tables.  The body can be something like:
> >
> > if (WARN_ON(cpumask_any_but(mm_cpumask(...), ...))) {
> >         /* send an IPI.  Maybe just call tlb_flush_remove_tables() */
> > }
> >
> > (You'll also have to fix up the highly questionable users in
> > arch/x86/platform/efi/efi_64.c, but that's easy.)
> >
> > Does all that make sense?  Basically, as I understand it, the
> > expensive atomic ops you're seeing are all pointless because they're
> > enabling an optimization that hasn't actually worked for a long time,
> > if ever.
>
> Hmm.  Xen PV has a big hack in xen_exit_mmap(), which is called from
> arch_exit_mmap(), I think.  It's a heavier-weight version of more or
> less the same thing that arch_shoot_down_dead_mm() would be, except
> that it happens before exit_mmap().  But maybe Xen actually has the
> right idea.  In other words, rather than doing the big pagetable free
> in exit_mmap() while there may still be other CPUs pointing at the
> page tables, the other order might make more sense.  So maybe, if
> ARCH_NO_TASK_ACTIVE_MM is set, arch_exit_mmap() should be responsible
> for getting rid of all secret arch references to the mm.
>
> Hmm.  ARCH_FREE_UNUSED_MM_IMMEDIATELY might be a better name.
>
> I added some more arch maintainers.  The idea here is that, on x86 at
> least, task->active_mm and all its refcounting is pure overhead.  When
> a process exits, __mmput() gets called, but the core kernel has a
> longstanding "optimization" in which other tasks (kernel threads and
> idle tasks) may have ->active_mm pointing at this mm.  This is nasty,
> complicated, and hurts performance on large systems, since it requires
> extra atomic operations whenever a CPU switches between real user
> threads and idle/kernel threads.
>
> It's also almost completely worthless on x86 at least, since __mmput()
> frees pagetables, and that operation *already* forces a remote TLB
> flush, so we might as well zap all the active_mm references at the
> same time.
>
> But arm64 has real HW remote flushes.  Does arm64 actually benefit
> from the active_mm optimization?  What happens on arm64 when a process
> exits?  How about s390?  I suspect that s390 has rather larger systems
> than arm64, where the cost of the reference counting can be much
> higher.
>
> (Also, Rik, x86 on Hyper-V has remote flushes, too.  How does that
> interact with your previous patch set?)