Date: Thu, 2 Jul 2015 09:47:19 +0200
From: Ingo Molnar
To: "Paul E. McKenney"
Cc: Peter Zijlstra, josh@joshtriplett.org, linux-kernel@vger.kernel.org,
	laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, tglx@linutronix.de, rostedt@goodmis.org,
	dhowells@redhat.com, edumazet@google.com, dvhart@linux.intel.com,
	fweisbec@gmail.com, oleg@redhat.com, bobby.prani@gmail.com
Subject: Re: [PATCH RFC tip/core/rcu 0/5] Expedited grace periods encouraging normal ones

* Paul E. McKenney wrote:

> On Wed, Jul 01, 2015 at 07:02:42PM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 01, 2015 at 09:17:05AM -0700, Paul E. McKenney wrote:
> > > On Wed, Jul 01, 2015 at 04:17:10PM +0200, Peter Zijlstra wrote:
> > > >
> > > > 74b51ee152b6 ("ACPI / osl: speedup grace period in acpi_os_map_cleanup")
> > > >
> > > > Really???
> > >
> > > I am not concerned about this one. After all, one of the first things
> > > that people do for OS-jitter-sensitive workloads is to get rid of binary
> > > blobs. And any runtime use of ACPI as well. And let's face it, if your
> > > latency-sensitive workload is using either binary blobs or ACPI, you
> > > have already completely lost. Therefore, an additional expedited grace
> > > period cannot possibly cause you to lose any more.
> >
> > This isn't solely about rt etc.; this call is a generic facility used by
> > however many consumers. A normal workstation/server could run into it at
> > relatively high frequency depending on its workload.
> >
> > Even on non-latency-sensitive workloads I think hammering all active CPUs
> > is bad behaviour. Remember that a typical server class machine easily has
> > more than 32 CPUs these days.
>
> Well, that certainly is one reason for the funnel locking, sequence
> counters, etc., keeping the overhead bounded despite large numbers of CPUs.
> So I don't believe that a non-RT/non-HPC workload is going to notice.

So I think Peter's concern is that we should not be offering/promoting APIs
that are easy to add, hard to remove/convert - especially if we _know_ they
eventually have to be converted. That model does not scale: it piles up
increasing amounts of crud.

Also, there will be a threshold over which it will be increasingly harder to
make hard-rt promises, because so much seemingly mundane functionality will
be using these APIs.
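For context, the facility being argued about boils down to the pattern below.
This is only a minimal sketch, not the actual ACPI code: 'struct example_map'
and example_map_cleanup() are hypothetical names, while synchronize_rcu() and
synchronize_rcu_expedited() are the real kernel APIs under discussion.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical object that readers look up under rcu_read_lock(). */
struct example_map {
	struct list_head	list;
	void			*data;
};

static void example_map_cleanup(struct example_map *map)
{
	/* Unpublish the object so new readers can no longer find it. */
	list_del_rcu(&map->list);

	/*
	 * Wait for all pre-existing readers to finish.  The normal call
	 * waits passively; the expedited variant (the kind of conversion
	 * commit 74b51ee152b6 above is an instance of) completes much
	 * sooner, but does so by prodding the other active CPUs - the
	 * per-call cost being discussed above.
	 */
	synchronize_rcu();	/* or: synchronize_rcu_expedited(); */

	kfree(map);
}

The funnel locking and sequence counters Paul mentions let concurrent
expedited requests share a single grace period, which bounds the total
overhead, but each expedited grace period still disturbs the other CPUs.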
The big plus of -rt is that it's hard RT out of the box - if people are able
to control their environment carefully, they can use RTAI or the like.

I.e. it directly cuts into the usability of Linux in certain segments. Death
by a thousand cuts and such.

And it's not like it's that hard to stem the flow of algorithmic sloppiness
at the source, right?

Thanks,

	Ingo