Subject: Re: [PATCH resend] x86,kvm: Add a kernel parameter to disable PV spinlock
From: Oscar Salvador
To: Juergen Gross, Davidlohr Bueso, Peter Zijlstra
Cc: Ingo Molnar, Paolo Bonzini, "H. Peter Anvin", Thomas Gleixner, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org, Waiman Long
Date: Tue, 5 Sep 2017 08:57:16 +0200
In-Reply-To: <0869e8a5-4abd-8f7f-0135-aab3e72e2d01@suse.com>
References: <20170904142836.15446-1-osalvador@suse.de> <20170904144011.gp7hpis6usjehbuf@hirez.programming.kicks-ass.net> <20170904222157.GD17982@linux-80c1.suse> <0869e8a5-4abd-8f7f-0135-aab3e72e2d01@suse.com>

On 09/05/2017 08:28 AM, Juergen Gross wrote:
> On 05/09/17 00:21, Davidlohr Bueso wrote:
>> On Mon, 04 Sep 2017, Peter Zijlstra wrote:
>>
>>> For testing it's trivial to hack your kernel, and I don't feel this
>>> is something an admin can make reasonable decisions about.
>>>
>>> So why? In general, fewer knobs is better.
>>
>> +1.
>>
>> Also, note how b8fa70b51aa (xen, pvticketlocks: Add xen_nopvspin
>> parameter to disable xen pv ticketlocks) has no justification as to
>> why it's wanted in the first place. The only thing I could find was
>> from 15a3eac0784 (xen/spinlock: Document the xen_nopvspin parameter):
>>
>> "Useful for diagnosing issues and comparing benchmarks in over-commit
>> CPU scenarios."
>
> Hmm, I think I should clarify the Xen knob, as I was the one
> requesting it:
>
> In my previous employment we had a configuration where dom0 ran
> exclusively on a dedicated set of physical cpus. We experienced
> scalability problems when doing I/O performance tests: with a decent
> number of dom0 cpus we achieved a throughput of 700 MB/s with only
> 20% cpu load in dom0. A higher dom0 cpu count made the throughput
> drop to about 150 MB/s while cpu load went up to 100%. The reason was
> the additional load from hypervisor interactions on a high-frequency
> lock.
>
> So in special configurations, at least for Xen, the knob is useful in
> production environments.

It may be that the original patch was added just to keep consistency
between Xen and KVM, and also only for testing purposes. But we found a
case where a customer of ours was running some workloads with a 1<->1
mapping between physical cores and virtual cores, and we realized that
with pv spinlocks disabled there is a 4-5% performance gain. A perf
analysis showed that the application was very lock intensive, with a
lot of time spent in __raw_callee_save___pv_queued_spin_unlock.
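
For reference, the Xen knob being discussed is plumbed in roughly like
this (a minimal sketch from memory of the b8fa70b51aa wiring in
arch/x86/xen/spinlock.c, not the verbatim source; names and placement
are approximate):

/*
 * Rough sketch of the xen_nopvspin command-line wiring; treat the
 * details as illustrative rather than the exact upstream code.
 */
#include <linux/init.h>
#include <linux/printk.h>

static bool xen_pvspin = true;	/* PV spinlocks enabled by default */

/* "xen_nopvspin" on the kernel command line flips the default off. */
static __init int xen_parse_nopvspin(char *arg)
{
	xen_pvspin = false;
	return 0;
}
early_param("xen_nopvspin", xen_parse_nopvspin);

void __init xen_init_spinlocks(void)
{
	if (!xen_pvspin) {
		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
		return;
	}
	/* ...otherwise install the paravirt spinlock ops... */
}

The unlock-slowpath cost mentioned above shows up directly in perf;
something along these lines is how one would look at it (the workload
name is a placeholder, the perf flags are standard):

perf record -g -- ./customer-workload
perf report --stdio | grep __raw_callee_save___pv_queued_spin_unlock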