Date: Tue, 5 Sep 2017 08:58:37 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross
Cc: Davidlohr Bueso, Oscar Salvador, Ingo Molnar, Paolo Bonzini,
    "H . Peter Anvin", Thomas Gleixner, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, linux-doc@vger.kernel.org, Waiman Long
Subject: Re: [PATCH resend] x86,kvm: Add a kernel parameter to disable PV spinlock

On Tue, Sep 05, 2017 at 08:28:10AM +0200, Juergen Gross wrote:
> On 05/09/17 00:21, Davidlohr Bueso wrote:
> > On Mon, 04 Sep 2017, Peter Zijlstra wrote:
> >
> >> For testing it's trivial to hack your kernel, and I don't feel this is
> >> something an admin can make reasonable decisions about.
> >>
> >> So why? In general, fewer knobs is better.
> >
> > +1.
> >
> > Also, note how b8fa70b51aa ("xen, pvticketlocks: Add xen_nopvspin parameter
> > to disable xen pv ticketlocks") has no justification as to why it's wanted
> > in the first place. The only thing I could find was from 15a3eac0784
> > ("xen/spinlock: Document the xen_nopvspin parameter"):
> >
> > "Useful for diagnosing issues and comparing benchmarks in over-commit
> > CPU scenarios."
>
> Hmm, I think I should clarify the Xen knob, as I was the one requesting
> it:
>
> In my previous employment we had a configuration where dom0 ran
> exclusively on a dedicated set of physical cpus. We experienced
> scalability problems when doing I/O performance tests: with a decent
> number of dom0 cpus we achieved a throughput of 700 MB/s with only 20%
> cpu load in dom0. A higher dom0 cpu count let the throughput drop to
> about 150 MB/s, with cpu load up at 100%. The reason was the additional
> load from hypervisor interactions on a high-frequency lock.
>
> So in special configurations, at least for Xen, the knob is useful in a
> production environment.

So the problem with qspinlock is that it will revert to a classic
test-and-set spinlock if you don't do paravirt but are running under a
HV. And test-and-set is unfair and has all kinds of ugly starvation
cases, esp. on slightly bigger hardware.

So if we'd want to cater to the 1:1 virt case, we'll need to come up
with something else.

_IF_ it is an issue, of course.
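
[Editor's note: for readers unfamiliar with the fallback Peter describes, below
is a minimal stand-alone sketch of a test-and-set spinlock using C11 atomics.
It is illustrative only, not the kernel's qspinlock code, but it shows why such
a lock is unfair: every waiter spins on the same word, there is no queue, and
whichever CPU happens to grab the cacheline next wins, so remote CPUs can
starve under contention.]

/*
 * Sketch of a test-and-set spinlock, roughly the kind of lock qspinlock
 * degrades to when paravirt spinlocks are disabled under a hypervisor.
 * Hypothetical user-space example with C11 atomics, not kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct tas_lock {
	atomic_bool locked;
};

static void tas_lock(struct tas_lock *l)
{
	/*
	 * All waiters hammer the same cacheline and retry the exchange.
	 * With no queue, whichever CPU wins the next cacheline transfer
	 * takes the lock; under contention this favours CPUs close to
	 * the previous owner and can starve the rest.
	 */
	while (atomic_exchange_explicit(&l->locked, true,
					memory_order_acquire))
		while (atomic_load_explicit(&l->locked,
					    memory_order_relaxed))
			; /* spin until it looks free, then retry */
}

static void tas_unlock(struct tas_lock *l)
{
	atomic_store_explicit(&l->locked, false, memory_order_release);
}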