Date: Wed, 2 Aug 2017 10:45:13 +1000
From: Nicholas Piggin
To: "Paul E. McKenney"
Cc: Peter Zijlstra, Mathieu Desnoyers, Michael Ellerman, linux-kernel, Boqun Feng, Andrew Hunter, maged michael, gromer, Avi Kivity, Benjamin Herrenschmidt, Palmer Dabbelt, Dave Watson
Subject: Re: [RFC PATCH v2] membarrier: expedited private command
Message-ID: <20170802104513.4337e528@roar.ozlabs.ibm.com>
In-Reply-To: <20170801233203.GO3730@linux.vnet.ibm.com>
Organization: IBM

On Tue, 1 Aug 2017 16:32:03 -0700
"Paul E. McKenney" wrote:

> On Tue, Aug 01, 2017 at 04:16:54PM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 01, 2017 at 06:23:09AM -0700, Paul E. McKenney wrote:
> > > On Tue, Aug 01, 2017 at 12:22:03PM +0200, Peter Zijlstra wrote:
> > >
> > > [ . . . ]
> > >
> > > > As to scheduler IPIs, those are limited to the CPUs the user is limited
> > > > to and are rate limited by the wakeup-latency of the tasks. After all,
> > > > all the time a task is runnable but not running, wakeups are no-ops.
> > >
> > > Can't that wakeup-latency limitation be overcome by a normal user simply
> > > by having lots of tasks to wake up, which then go back to sleep almost
> > > immediately? Coupled with a very low-priority CPU-bound task on each CPU?
> >
> > Let me put it like this; there is no way to cause more interference
> > using IPIs than there is simply by running while(1) loops ;-)
>
> Very good, that does give us some guidance, give or take context switches
> happening during the IPI latency window. ;-)

I think we do have to be a bit careful here. Peter is right when you are
thinking of arbitrary tasks all running on a single CPU, but across multiple
CPUs the IPI sender will not necessarily be accounted the cost it incurs on
the target CPU. So we do need to be careful about allowing a large number of
unprivileged IPIs to arbitrary CPUs.

Fortunately, in this case the IPIs are restricted to CPUs where our process
is currently running, which is close to the ideal case: we are only
disturbing our own job.

Thanks,
Nick