Date: Mon, 18 Feb 2019 16:55:30 -0500
From: Rich Felker
To: Mathieu Desnoyers
Cc: linux-kernel, "Paul E. McKenney", Peter Zijlstra, Ingo Molnar,
    Alexander Viro, Thomas Gleixner, Benjamin Herrenschmidt,
    Paul Mackerras, Michael Ellerman
Subject: Re: Regression in SYS_membarrier expedited
Message-ID: <20190218215530.GJ23599@brightrain.aerifal.cx>
References: <20190217184800.GA16118@brightrain.aerifal.cx>
 <53623603.9626.1550439285362.JavaMail.zimbra@efficios.com>
 <20190217215235.GH23599@brightrain.aerifal.cx>
 <20190217220805.GI23599@brightrain.aerifal.cx>
 <424503257.251.1550503352008.JavaMail.zimbra@efficios.com>
In-Reply-To: <424503257.251.1550503352008.JavaMail.zimbra@efficios.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Feb 18, 2019 at 10:22:32AM -0500, Mathieu Desnoyers wrote:
> ----- On Feb 17, 2019, at 5:08 PM, Rich Felker dalias@libc.org wrote:
>
> > On Sun, Feb 17, 2019 at 04:52:35PM -0500, Rich Felker wrote:
> >> On Sun, Feb 17, 2019 at 04:34:45PM -0500, Mathieu Desnoyers wrote:
> >> > ----- On Feb 17, 2019, at 1:48 PM, Rich Felker dalias@libc.org wrote:
> >> >
> >> > > commit a961e40917fb14614d368d8bc9782ca4d6a8cd11 made it so that the
> >> > > MEMBARRIER_CMD_PRIVATE_EXPEDITED command cannot be used without first
> >> > > registering intent to use it. However, registration is an expensive
> >> > > operation since commit 3ccfebedd8cf54e291c809c838d8ad5cc00f5688, which
> >> > > added synchronize_sched() to it; this means it's no longer possible to
> >> > > lazily register intent at first use, and it's unreasonably expensive
> >> > > to preemptively register intent for possibly extremely-short-lived
> >> > > processes that will never use it. (My usage case is in libc (musl),
> >> > > where I can't know if the process will be short- or long-lived;
> >> > > unnecessary and potentially expensive syscalls can't be made
> >> > > preemptively, only lazily at first use.)
> >> > >
> >> > > Can we restore the functionality of MEMBARRIER_CMD_PRIVATE_EXPEDITED
> >> > > to work even without registration? The motivation of requiring
> >> > > registration seems to be:
> >> > >
> >> > > "Registering at this time removes the need to interrupt each and
> >> > > every thread in that process at the first expedited
> >> > > sys_membarrier() system call."
> >> > >
> >> > > but interrupting every thread in the process is exactly what I expect,
> >> > > and is not a problem. What does seem like a big problem is waiting for
> >> > > synchronize_sched() to synchronize with an unboundedly large number of
> >> > > cores (vs only a few threads in the process), especially in the
> >> > > presence of full_nohz, where it seems like latency would be at least a
> >> > > few ms and possibly unbounded.
> >> > >
> >> > > Short of a working SYS_membarrier that doesn't require expensive
> >> > > pre-registration, I'm stuck just implementing it in userspace with
> >> > > signals...
> >> >
> >> > Hi Rich,
> >> >
> >> > Let me try to understand the scenario first.
> >> >
> >> > musl libc support for using membarrier private expedited
> >> > would require to first register membarrier private expedited for
> >> > the process at musl library init (typically after exec). At that stage, the
> >> > process is still single-threaded, right ? So there is no reason
> >> > to issue a synchronize_sched() (or now synchronize_rcu() in newer
> >> > kernels):
> >> >
> >> > membarrier_register_private_expedited()
> >> >
> >> >   if (!(atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1)) {
> >> >           /*
> >> >            * Ensure all future scheduler executions will observe the
> >> >            * new thread flag state for this process.
> >> >            */
> >> >           synchronize_rcu();
> >> >   }
> >> >
> >> > So considering that pre-registration carefully done before the process
> >> > becomes multi-threaded just costs a system call (and not a
> >> > synchronize_sched()), does it make the pre-registration approach more
> >> > acceptable ?
> >>
> >> It does get rid of the extreme cost, but I don't think it would be
> >> well-received by users who don't like random unnecessary syscalls at
> >> init time (each adding a few us of startup time cost). If it's so
> >> cheap, why isn't it just the default at kernel-side process creation?
> >> Why is there any requirement of registration to begin with? Reading
> >> the code, it looks like all it does is set a flag, and all this flag
> >> is used for is erroring-out if it's not set.
> >
> > On further thought, pre-registration could be done at first
> > pthread_create rather than process entry, which would probably be
> > acceptable. But the question remains why it's needed at all, and
> > neither of these approaches is available to code that doesn't have the
> > privilege of being part of libc. For example, library code that might
> > be loaded via dlopen can't safely use SYS_membarrier without
> > introducing unbounded latency before the first use.
>
> For membarrier private expedited, the need for pre-registration is currently
> there because of powerpc not wanting to slow down switch_mm() for processes
> not needing that command.
>
> That's the only reason I see for it. If we would have accepted to add
> a smp_mb() to the powerpc switch_mm() scheduler path, we could have done
> so without registration for the private expedited membarrier command.
I don't understand why the barrier is needed at all; the ipi ping should
suffice to execute a barrier instruction on all cores on which a thread
of the process is running, and if any other core subsequently picks up a
thread of the process to run, it must necessarily perform a barrier just
to synchronize with whatever core the thread was previously running on
(not to mention synchronizing the handoff itself). Is this just to
optimize out ipi pinging cores that threads of the process are not
currently running on, but were last running on and could become running
on again without migration?

> commit a961e40917fb hints at the sync_core private expedited membarrier
> commands (which was being actively designed at that time) which may
> require pre-registration. However, things did not turn out that way: we
> ended up adding the required core serializing barriers unconditionally
> into each architecture.
>
> Considering that sync_core private expedited membarrier ended up not needing
> pre-registration, I think this pre-registration optimization may have been
> somewhat counter-productive, since I doubt the overhead of smp_mb() in a
> switch_mm() for powerpc is that high, but I don't have the hardware handy
> to provide numbers. So we end up slowing down everyone by requiring a
> registration system call after exec. :-(

I'm surprised it's even possible to do switch_mm correctly with no
barrier...

> One possible way out of this would be to make MEMBARRIER_CMD_PRIVATE_EXPEDITED
> and MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE work fine without
> pre-registration in future kernels. Therefore, the application could try
> using them without registration. If it works, all is fine, else it would
> treat the error how it sees fit, either through explicit registration and
> trying again, or returning the error to the caller.

This would be nice.
I think the register-at-first-pthread_create works fine for my case now,
but as noted it doesn't work when you can't control libc internals.

> The only change I see we would require to make this work is to turn
> arch/powerpc/include/asm/membarrier.h membarrier_arch_switch_mm() into
> an unconditional smp_mb().

I finally found this just now; before I was mistakenly grepping for
MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY rather than
MEMBARRIER_STATE_PRIVATE_EXPEDITED, and couldn't find anywhere it was
actually used.

> Thoughts ?

I'm all for making it work without pre-registration. I don't care
whether this is via heavier barriers in ppc or via pinging more cores,
or some other mechanism, but maybe others have opinions on this.

One alternative might be auto-registration, ipi pinging all cores the
process has threads on (even not currently running) for the first
membarrier, so that they see the membarrier_state updated without
having to synchronize with the rcu/scheduler, then doing the same as
now after the first call. I'm not sure if this creates opportunities
for abuse; probably no worse than the process could do just by
actually running lots of threads.

Rich