From: Andy Lutomirski
Date: Sat, 9 Sep 2017 21:42:12 -0700
Subject: Re: Current mainline git (24e700e291d52bd2) hangs when building e.g. perf
To: Borislav Petkov, Peter Zijlstra
Cc: Andy Lutomirski, Linus Torvalds, Markus Trippelsdorf, Ingo Molnar,
    Thomas Gleixner, LKML, Ingo Molnar, Tom Lendacky, Rik van Riel

On Sat, Sep 9, 2017 at 12:37 PM, Borislav Petkov wrote:
> On Sat, Sep 09, 2017 at 12:28:30PM -0700, Andy Lutomirski wrote:
>> I propose the following fix.  If PCID is on, then, in
>> enter_lazy_tlb(), we switch to init_mm with the no-flush flag set.
>> (And we give init_mm its own dedicated ASID to keep it simple and
>> fast -- no need to use the LRU ASID mapping to assign one
>> dynamically.)  We clear the bit in mm_cpumask.  That is, we more or
>> less just skip the whole lazy TLB optimization and rely on PCID
>> CPUs having reasonably fast CR3 writes.  No extra IPIs.  I suppose
>> I need to benchmark this.  It will certainly slow down workloads
>> that rapidly toggle between a user thread and a kernel thread
>> because it forces serialization on each mm switch, but maybe
>> that's not so bad.
>
> Sounds ok so far.
>
>> If PCID is off, then we leave the old CR3 value when we go lazy,
>> and we also leave the flag in mm_cpumask set.  When a flush is
>> requested, we send out the IPI and switch to init_mm (and flush
>> because we have no choice).  IOW, the no-PCID behavior goes back
>> to what it used to be.
>
> Ok, question: why can't we load the new CR3 value too, immediately?
> Or are we saying, we might get to return to the same CR3 we had
> before we were lazy, so we won't need to do an unnecessary CR3 write
> with the same value.  A microoptimization, if you will.

It is indeed a microoptimization, but it's a microoptimization that
we've had in the kernel for a long, long time.  It may be an
ill-advised microoptimization, though, or at least a poorly
implemented one historically.

The microoptimization mostly affects workloads that have a process on
an otherwise idle CPU that frequently sleeps for very short times.
With the optimization, we avoid two TLB flushes and two serializing
instructions every time we sleep.  Historically, we got a bunch of
useless IPIs, too, depending on the workload.
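Concretely, the PCID-on half of the proposal would look something
like the sketch below.  This is not real code: INIT_MM_ASID is a
made-up constant for init_mm's dedicated ASID, and
build_cr3_noflush() stands in for whatever helper ends up building a
CR3 value with the no-flush bit (bit 63) set.

void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
	if (static_cpu_has(X86_FEATURE_PCID)) {
		/*
		 * PCID on: genuinely switch to init_mm, without a
		 * flush, and drop out of mm_cpumask so nobody sends
		 * us shootdown IPIs for the old mm.  init_mm gets a
		 * fixed ASID, so no LRU ASID assignment is needed.
		 */
		write_cr3(build_cr3_noflush(init_mm.pgd, INIT_MM_ASID));
		this_cpu_write(cpu_tlbstate.loaded_mm, &init_mm);
		cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));
	}

	/*
	 * PCID off: leave the old CR3 loaded and leave our bit in
	 * mm_cpumask() set.  If a flush comes in, the IPI handler
	 * switches us to init_mm and flushes, since there's no
	 * choice without PCID.
	 */
}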
The problem is that the implementation, which lives in
kernel/sched/core.c for the most part, involves some extra reference
counting, and there are NUMA workloads with many cores all running
the same mm that pay a *huge* cost in refcounting, since all the CPUs
are hammering the same refcount.  And this refcount is (I think)
basically pointless on x86, and maybe on most architectures.

PeterZ and Ingo, would you be okay with adding a define so arches can
opt out of the task_struct::active_mm field entirely?  That is, with
the option set, task_struct wouldn't have an active_mm field, the
core wouldn't call mmgrab() and mmdrop(), and the arch would be
responsible for that bookkeeping instead?

x86, and presumably all arches without cross-core invalidation, would
probably prefer to just shoot down the old mm entirely in __mmput()
rather than trying to figure out when to finish freeing old mms.
After all, exit_mmap() is going to send an IPI regardless, so I see
no reason to have the scheduler core pin an old dead mm just because
some random kernel thread's active_mm field points to it.

IOW, if I'm going to reintroduce something like what the old lazy
mode did on x86, I'd rather do it right.
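In context_switch() terms, hand-waving heavily
(CONFIG_ARCH_NO_ACTIVE_MM is a made-up name, and the real surgery
would need more care than this), I'm imagining something like:

#ifdef CONFIG_ARCH_NO_ACTIVE_MM
	/*
	 * No active_mm, no mmgrab()/mmdrop(): the arch tracks the
	 * lazy mm itself, and __mmput() shoots the mm down on every
	 * CPU, so nothing here pins a dead mm.  prev->mm may be NULL
	 * if prev was a kernel thread; the arch hooks have to cope.
	 */
	if (!next->mm)
		enter_lazy_tlb(prev->mm, next);
	else
		switch_mm_irqs_off(prev->mm, next->mm, next);
#else
	struct mm_struct *oldmm = prev->active_mm;

	if (!next->mm) {
		next->active_mm = oldmm;
		mmgrab(oldmm);		/* the refcount NUMA boxes hammer */
		enter_lazy_tlb(oldmm, next);
	} else {
		switch_mm_irqs_off(oldmm, next->mm, next);
	}

	if (!prev->mm) {
		prev->active_mm = NULL;
		rq->prev_mm = oldmm;	/* mmdrop()ed in finish_task_switch() */
	}
#endif

--Andy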