Subject: Re: [PATCH 4/8] membarrier: Make the post-switch-mm barrier explicit
To: Nicholas Piggin, "Peter Zijlstra (Intel)", Rik van Riel
Cc: Andrew Morton, Dave Hansen, Linux Kernel Mailing List, linux-mm@kvack.org,
 Mathieu Desnoyers, "Paul E. McKenney", the arch/x86 maintainers
References: <1623816595.myt8wbkcar.astroid@bobo.none>
 <617cb897-58b1-8266-ecec-ef210832e927@kernel.org>
 <1623893358.bbty474jyy.astroid@bobo.none>
 <58b949fb-663e-4675-8592-25933a3e361c@www.fastmail.com>
 <1623911501.q97zemobmw.astroid@bobo.none>
From: Andy Lutomirski
Message-ID: <5efaca70-35a0-1ce5-98ff-651a5f153a0a@kernel.org>
Date: Thu, 17 Jun 2021 16:49:29 -0700
In-Reply-To: <1623911501.q97zemobmw.astroid@bobo.none>

On 6/16/21 11:51 PM, Nicholas Piggin wrote:
> Excerpts from Andy Lutomirski's message of June 17, 2021 3:32 pm:
>> On Wed, Jun 16, 2021, at 7:57 PM, Andy Lutomirski wrote:
>>>
>>>
>>> On Wed, Jun 16, 2021, at 6:37 PM, Nicholas Piggin wrote:
>>>> Excerpts from Andy Lutomirski's message of June 17, 2021 4:41 am:
>>>>> On 6/16/21 12:35 AM, Peter Zijlstra wrote:
>>>>>> On Wed, Jun 16, 2021 at 02:19:49PM +1000, Nicholas Piggin wrote:
>>>>>>> Excerpts from Andy Lutomirski's message of June 16, 2021 1:21 pm:
>>>>>>>> membarrier() needs a barrier after any CPU changes mm. There is currently
>>>>>>>> a comment explaining why this barrier probably exists in all cases. This
>>>>>>>> is very fragile -- any change to the relevant parts of the scheduler
>>>>>>>> might get rid of these barriers, and it's not really clear to me that
>>>>>>>> the barrier actually exists in all necessary cases.
>>>>>>>
>>>>>>> The comments and barriers in the mmdrop() hunks? I don't see what is
>>>>>>> fragile or maybe-buggy about this. The barrier definitely exists.
>>>>>>>
>>>>>>> And any change can change anything, that doesn't make it fragile. My
>>>>>>> lazy tlb refcounting change avoids the mmdrop in some cases, but it
>>>>>>> replaces it with smp_mb for example.
>>>>>>
>>>>>> I'm with Nick again, on this. You're adding extra barriers for no
>>>>>> discernible reason, that's not generally encouraged, seeing how extra
>>>>>> barriers is extra slow.
>>>>>>
>>>>>> Both mmdrop() itself, as well as the callsite have comments saying how
>>>>>> membarrier relies on the implied barrier, what's fragile about that?
>>>>>>
>>>>>
>>>>> My real motivation is that mmgrab() and mmdrop() don't actually need to
>>>>> be full barriers. The current implementation has them being full
>>>>> barriers, and the current implementation is quite slow. So let's try
>>>>> that commit message again:
>>>>>
>>>>> membarrier() needs a barrier after any CPU changes mm. There is currently
>>>>> a comment explaining why this barrier probably exists in all cases. The
>>>>> logic is based on ensuring that the barrier exists on every control flow
>>>>> path through the scheduler. It also relies on mmgrab() and mmdrop() being
>>>>> full barriers.
>>>>>
>>>>> mmgrab() and mmdrop() would be better if they were not full barriers. As a
>>>>> trivial optimization, mmgrab() could use a relaxed atomic and mmdrop()
>>>>> could use a release on architectures that have these operations.
>>>>
>>>> I'm not against the idea, I've looked at something similar before (not
>>>> for mmdrop but a different primitive). Also my lazy tlb shootdown series
>>>> could possibly take advantage of this, I might cherry pick it and test
>>>> performance :)
>>>>
>>>> I don't think it belongs in this series though. Should go together with
>>>> something that takes advantage of it.
>>>
>>> I’m going to see if I can get hazard pointers into shape quickly.
>>
>> Here it is. Not even boot tested!
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=sched/lazymm&id=ecc3992c36cb88087df9c537e2326efb51c95e31
>>
>> Nick, I think you can accomplish much the same thing as your patch by:
>>
>> #define for_each_possible_lazymm_cpu while (false)
>
> I'm not sure what you mean? For powerpc, other CPUs can be using the mm
> as lazy at this point. I must be missing something.

What I mean is: if you want to shoot down lazies instead of doing the
hazard pointer trick to track them, you could do:

#define for_each_possible_lazymm_cpu while (false)

which would promise to the core code that you don't have any lazies left
by the time exit_mmap() is done. You might need a new hook in exit_mmap()
depending on exactly how you implement the lazy shootdown.
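
A minimal sketch of that shoot-down option, for illustration only: none of
these names are in mainline, the (cpu, mm) argument list of
for_each_possible_lazymm_cpu in the sched/lazymm branch is an assumption,
and arch_exit_lazy_mm() / shoot_down_lazy_mm() are made-up names for the
"new hook in exit_mmap()" mentioned above. It also assumes the architecture
keeps every lazy user of an mm in mm_cpumask(), as powerpc does.

/* Hypothetical arch header: the no-op definition promises the core code
 * that no CPU still holds the mm lazily once exit_mmap() has finished,
 * so there is nothing to iterate over. */
#define for_each_possible_lazymm_cpu(cpu, mm)	while (false)

/* IPI handler: if this CPU is lazily using @mm (a kernel thread with no
 * ->mm of its own), switch it over to init_mm so @mm can be freed. */
static void shoot_down_lazy_mm(void *arg)
{
	struct mm_struct *mm = arg;

	if (current->active_mm == mm) {
		WARN_ON_ONCE(current->mm);
		current->active_mm = &init_mm;
		switch_mm(mm, &init_mm, current);
		/* Dropping (or never taking) the lazy reference on @mm is
		 * left to whatever refcounting scheme accompanies this. */
	}
}

/* Hypothetical hook, called near the end of exit_mmap(): IPI every CPU
 * that might still have @mm as its lazy active_mm, and wait for them. */
static void arch_exit_lazy_mm(struct mm_struct *mm)
{
	on_each_cpu_mask(mm_cpumask(mm), shoot_down_lazy_mm, (void *)mm, true);
}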
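
And for the relaxed-mmgrab()/release-mmdrop() idea quoted above, a rough
sketch of what the weaker ordering could look like, mirroring the
refcount_t pattern of release-on-decrement plus an acquire before
teardown. The _weak names are invented for illustration; with this,
membarrier could no longer rely on an implied full barrier in mmdrop(),
which is exactly why the explicit post-switch-mm barrier would be needed.

/* Sketch only, in the spirit of include/linux/sched/mm.h. */
static inline void mmgrab_weak(struct mm_struct *mm)
{
	/* A non-value-returning atomic RMW has no ordering in the kernel
	 * memory model; taking a reference does not need any. */
	atomic_inc(&mm->mm_count);
}

static inline void mmdrop_weak(struct mm_struct *mm)
{
	/* Release orders this CPU's prior accesses to the mm before the
	 * count can be seen to reach zero elsewhere... */
	if (unlikely(atomic_dec_return_release(&mm->mm_count) == 0)) {
		/* ...and this pairs with that release before teardown,
		 * as refcount_dec_and_test() does. */
		smp_acquire__after_ctrl_dep();
		__mmdrop(mm);
	}
}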