On a VM with only 1 vCPU, the locking fast paths will always be
successful. In this case, there is no need to use the PV qspinlock
code which has higher overhead on the unlock side than the native
qspinlock code.
Signed-off-by: Waiman Long <[email protected]>
---
arch/x86/xen/spinlock.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index cd97a62..38f47ae 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -130,7 +130,8 @@ void xen_uninit_lock_cpu(int cpu)
void __init xen_init_spinlocks(void)
{
- if (!xen_pvspin) {
+ /* Don't need to use pvqspinlock code if there is only 1 vCPU. */
+ if (!xen_pvspin || num_possible_cpus() == 1) {
printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
return;
}
--
1.8.3.1
On 07/19/2018 09:48 AM, Waiman Long wrote:
> On a VM with only 1 vCPU, the locking fast paths will always be
> successful. In this case, there is no need to use the PV qspinlock
> code which has higher overhead on the unlock side than the native
> qspinlock code.
>
> Signed-off-by: Waiman Long <[email protected]>
> ---
> arch/x86/xen/spinlock.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index cd97a62..38f47ae 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -130,7 +130,8 @@ void xen_uninit_lock_cpu(int cpu)
> void __init xen_init_spinlocks(void)
> {
>
> - if (!xen_pvspin) {
> + /* Don't need to use pvqspinlock code if there is only 1 vCPU. */
> + if (!xen_pvspin || num_possible_cpus() == 1) {
> printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
> return;
> }
I think we need to set xen_pvspin to false for such configurations.
Notice that xen_init_lock_cpu() will try to perform some additional
pvspinlock initializations.
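For reference, that path looks roughly like this (paraphrased from memory,
not a verbatim copy of the tree); it only checks xen_pvspin, so on a 1-vCPU
guest the kick IRQ would still get bound:

void xen_init_lock_cpu(int cpu)
{
	int irq;
	char *name;

	/* Only xen_pvspin is consulted here, not the vCPU count. */
	if (!xen_pvspin)
		return;

	name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
	/* Bind the per-CPU kick IRQ used by the PV unlock slow path. */
	irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR, cpu,
				     dummy_handler,
				     IRQF_PERCPU | IRQF_NOBALANCING,
				     name, NULL);
	if (irq >= 0) {
		disable_irq(irq);	/* make sure it's never delivered */
		per_cpu(lock_kicker_irq, cpu) = irq;
		per_cpu(irq_name, cpu) = name;
	}
}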
-boris
On 07/19/2018 03:18 PM, Boris Ostrovsky wrote:
> On 07/19/2018 09:48 AM, Waiman Long wrote:
>> On a VM with only 1 vCPU, the locking fast paths will always be
>> successful. In this case, there is no need to use the PV qspinlock
>> code which has higher overhead on the unlock side than the native
>> qspinlock code.
>>
>> Signed-off-by: Waiman Long <[email protected]>
>> ---
>> arch/x86/xen/spinlock.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
>> index cd97a62..38f47ae 100644
>> --- a/arch/x86/xen/spinlock.c
>> +++ b/arch/x86/xen/spinlock.c
>> @@ -130,7 +130,8 @@ void xen_uninit_lock_cpu(int cpu)
>> void __init xen_init_spinlocks(void)
>> {
>>
>> - if (!xen_pvspin) {
>> + /* Don't need to use pvqspinlock code if there is only 1 vCPU. */
>> + if (!xen_pvspin || num_possible_cpus() == 1) {
>> printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
>> return;
>> }
>
> I think we need to set xen_pvspin to false for such configurations.
> Notice that xen_init_lock_cpu() will try to perform some additional
> pvspinlock initializations.
>
>
> -boris
The other pvqspinlock initialization has no runtime impact beyond
allocating a bit of extra memory. Anyway, I will revise the patch to
disable xen_pvspin in that case.
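A possible shape for the revised hunk (just a sketch, not the final v2):

void __init xen_init_spinlocks(void)
{
	/*  Don't need to use pvqspinlock code if there is only 1 vCPU. */
	if (num_possible_cpus() == 1)
		xen_pvspin = false;

	if (!xen_pvspin) {
		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
		return;
	}

	/* rest of the PV spinlock setup stays as is */
}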
-Longman