Date: Thu, 24 Sep 2020 14:58:10 -0400
From: Steven Rostedt
To: Thomas Gleixner
Cc: peterz@infradead.org, Linus Torvalds, LKML, linux-arch, Paul McKenney,
 the arch/x86 maintainers, Sebastian Andrzej Siewior, Juri Lelli,
 Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman,
 Daniel Bristot de Oliveira, Will Deacon, Andrew Morton, Linux-MM,
 Russell King, Linux ARM, Chris Zankel, Max Filippov,
 linux-xtensa@linux-xtensa.org, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
 David Airlie, Daniel Vetter, intel-gfx, dri-devel, Ard Biesheuvel,
 Herbert Xu, Vineet Gupta, "open list:SYNOPSYS ARC ARCHITECTURE",
 Arnd Bergmann, Guo Ren, linux-csky@vger.kernel.org, Michal Simek,
 Thomas Bogendoerfer, linux-mips@vger.kernel.org, Nick Hu, Greentime Hu,
 Vincent Chen, Michael Ellerman,
 Benjamin Herrenschmidt, Paul Mackerras, linuxppc-dev, "David S. Miller",
 linux-sparc
Subject: Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends
Message-ID: <20200924145810.2f0b806f@rorschach.local.home>
In-Reply-To: <875z8383gh.fsf@nanos.tec.linutronix.de>
References: <20200919091751.011116649@linutronix.de>
 <87mu1lc5mp.fsf@nanos.tec.linutronix.de>
 <87k0wode9a.fsf@nanos.tec.linutronix.de>
 <87eemwcpnq.fsf@nanos.tec.linutronix.de>
 <87a6xjd1dw.fsf@nanos.tec.linutronix.de>
 <87sgbbaq0y.fsf@nanos.tec.linutronix.de>
 <20200923084032.GU1362448@hirez.programming.kicks-ass.net>
 <20200923115251.7cc63a7e@oasis.local.home>
 <874kno9pr9.fsf@nanos.tec.linutronix.de>
 <20200923171234.0001402d@oasis.local.home>
 <871riracgf.fsf@nanos.tec.linutronix.de>
 <20200924083241.314f2102@gandalf.local.home>
 <875z8383gh.fsf@nanos.tec.linutronix.de>

On Thu, 24 Sep 2020 19:55:10 +0200
Thomas Gleixner wrote:

> On Thu, Sep 24 2020 at 08:32, Steven Rostedt wrote:
> > On Thu, 24 Sep 2020 08:57:52 +0200
> > Thomas Gleixner wrote:
> >
> >> > Now as for migration disabled nesting, at least now we would have
> >> > groupings of this, and perhaps the theorists can handle that. I mean,
> >> > how is this much different than having a bunch of tasks blocked on a
> >> > mutex where the owner is pinned on a CPU?
> >> >
> >> > migrate_disable() is a BKL of pinning affinity.
> >>
> >> No. That's just wrong. preempt disable is a concurrency control,
> >
> > I think you totally misunderstood what I was saying. The above wasn't about
> > comparing preempt_disable to migrate_disable. It was comparing
> > migrate_disable to a chain of tasks blocked on mutexes where the top owner
> > has preempt_disable set. You still have a bunch of tasks that can't move to
> > other CPUs.
>
> What? The top owner does not prevent any task from moving. The tasks
> cannot move because they are blocked on the mutex, which means they are
> not runnable, and non-runnable tasks are not migrated at all.

And neither are migrate-disabled tasks that have been preempted by a
high priority task.

> I really don't understand what you are trying to say.

Don't worry about it. I was just making a high level comparison of how
migrate-disabled tasks blocked on a higher priority task are similar to
tasks blocked on a mutex held by a pinned task that is preempted by a
high priority task. But we can forget this analogy as it's not
appropriate for the current conversation.

> >> > If we only have local_lock() available (even on !RT), then it makes
> >> > the blocking in groups. At least this way you could grep for all the
> >> > different local_locks in the system and plug that into the algorithm
> >> > for WCS, just like one would with a bunch of mutexes.
> >>
> >> You cannot do that on RT at all where migrate disable is substituting
> >> preempt disable in spin and rw locks. The result would be the same as
> >> with a !RT kernel, just with horribly bad performance.
> >
> > Note, the spin and rwlocks already have a lock associated with them. Why
> > would it be any different on RT? I wasn't suggesting adding another lock
> > inside a spinlock. Why would I recommend THAT? I wasn't recommending
> > blindly replacing migrate_disable() with local_lock(). I just meant expose
> > local_lock() but not migrate_disable().
>
> We already exposed local_lock() to non-RT and it's for places which do
> preempt_disable() or local_irq_disable() without having a lock
> associated. But both primitives are scopeless and therefore behave like
> CPU-local BKLs. What local_lock() provides in these cases is:
>
>   - Making the protection scope clear by associating a named local
>     lock which is covered by lockdep.
>
>   - It still maps to preempt_disable() or local_irq_disable() in !RT
>     kernels.
>
>   - The scope and the named lock allow RT kernels to substitute them
>     with real (recursion aware) locking primitives which keep preemption
>     and interrupts enabled, but provide the fine-grained protection for
>     the scoped critical section.

I'm very much aware of the above.

> So how would you substitute migrate_disable() with a local_lock()? You
> can't. Again, migrate_disable() is NOT a concurrency control and
> therefore it cannot be substituted by any concurrency control primitive.

When I was first writing my email, I was writing about a way to replace
migrate_disable() with a construct similar to local locks without actually
mentioning local locks, but then rewrote it to state local locks, trying
to simplify what I was writing. I shouldn't have done that, because it
portrayed that I wanted to use local_lock() unmodified. I was actually
thinking of a new construct that was similar to, but not exactly the same
as, a local lock. But this will just make things more complex, so we can
forget about it.

I'll wait to see what Peter produces.

-- Steve
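
For reference, the local_lock() pattern described above looks roughly
like the following. This is only a minimal sketch against the upstream
<linux/local_lock.h> API; the my_scratch structure, its counter field and
update_counter() are made-up names used purely for illustration and are
not taken from the series under discussion.

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  /* Hypothetical per-CPU data, named only for this example. */
  struct my_scratch {
          local_lock_t    lock;
          unsigned int    counter;
  };

  static DEFINE_PER_CPU(struct my_scratch, my_scratch) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static void update_counter(void)
  {
          /*
           * On !RT this maps to preempt_disable(); on RT the named,
           * lockdep-covered per-CPU lock is substituted, so preemption
           * stays enabled while the scoped critical section is still
           * serialized on each CPU.
           */
          local_lock(&my_scratch.lock);
          this_cpu_inc(my_scratch.counter);
          local_unlock(&my_scratch.lock);
  }

The named lock is what gives lockdep (and an RT substitution) something
to attach to, which a bare preempt_disable()/preempt_enable() pair cannot
provide.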