Date: Sun, 22 Jul 2018 21:50:35 -0700
From: Davidlohr Bueso
To: Wanpeng Li
Cc: Waiman Long, Paolo Bonzini, Radim Krcmar, Boris Ostrovsky,
    Juergen Gross, Thomas Gleixner, Ingo Molnar,
Peter Anvin" , the arch/x86 maintainers , xen-devel , LKML , Konrad Rzeszutek Wilk Subject: Re: [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU Message-ID: <20180723045035.h6jfsfdmgx55ljot@linux-r8p5> References: <1532036397-19449-1-git-send-email-longman@redhat.com> <20180719215456.5ho3udhfoqlkh75a@linux-r8p5> <00e98205-606a-a121-36c2-dedaeae1d0bb@redhat.com> <20180723044257.m7pjrnp7jjqggqij@linux-r8p5> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Disposition: inline In-Reply-To: <20180723044257.m7pjrnp7jjqggqij@linux-r8p5> User-Agent: NeoMutt/20170912 (1.9.0) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sun, 22 Jul 2018, Davidlohr Bueso wrote: >On Mon, 23 Jul 2018, Wanpeng Li wrote: > >>On Fri, 20 Jul 2018 at 06:03, Waiman Long wrote: >>> >>>On 07/19/2018 05:54 PM, Davidlohr Bueso wrote: >>>> On Thu, 19 Jul 2018, Waiman Long wrote: >>>> >>>>> On a VM with only 1 vCPU, the locking fast paths will always be >>>>> successful. In this case, there is no need to use the the PV qspinlock >>>>> code which has higher overhead on the unlock side than the native >>>>> qspinlock code. >>>>> >>>>> The xen_pvspin veriable is also turned off in this 1 vCPU case to > >s/veriable > variable > >>>>> eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu() >>>>> which is run after xen_init_spinlocks(). >>>> >>>> Wouldn't kvm also want this? >>>> >>>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c >>>> index a37bda38d205..95aceb692010 100644 >>>> --- a/arch/x86/kernel/kvm.c >>>> +++ b/arch/x86/kernel/kvm.c >>>> @@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void) >>>> static void __init kvm_smp_prepare_cpus(unsigned int max_cpus) >>>> { >>>> native_smp_prepare_cpus(max_cpus); >>>> - if (kvm_para_has_hint(KVM_HINTS_REALTIME)) >>>> + if (num_possible_cpus() == 1 || >>>> + kvm_para_has_hint(KVM_HINTS_REALTIME)) >>>> static_branch_disable(&virt_spin_lock_key); >>>> } >>> >>>That doesn't really matter as the slowpath will never get executed in >>>the 1 vCPU case. > >How does this differ then from xen, then? I mean, same principle applies. > >> >>So this is not needed in kvm tree? >>https://git.kernel.org/pub/scm/virt/kvm/kvm.git/commit/?h=queue&id=3a792199004ec335346cc607d62600a399a7ee02 > >Hmm I would think that my patch would be more appropiate as it actually does >what the comment says. Both would be needed actually yes, but also disabling the virt_spin_lock_key would be more robust imo. Thanks, Davidlohr