From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Date: Wed, 15 Jul 2015 11:44:06 +0530
To: Waiman Long
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org, linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch, Davidlohr Bueso
Subject: Re: [PATCH v2 5/6] locking/pvqspinlock: Opportunistically defer kicking to unlock time
Message-ID: <55A5FA2E.1040406@linux.vnet.ibm.com>
In-Reply-To: <1436926417-20256-6-git-send-email-Waiman.Long@hp.com>

On 07/15/2015 07:43 AM, Waiman Long wrote:
> Performing the CPU kick at lock time can be a bit faster when there
> is no kick-ahead. On the other hand, deferring it to unlock time is
> preferable when kick-ahead can be performed or when the VM guest has
> so few vCPUs that a vCPU may be kicked twice before getting the
> lock. This patch defers the kicking when either of these two
> conditions is true.
>
> Linux kernel builds were run in a KVM guest on an 8-socket,
> 4 cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
> Haswell-EX system. Both systems were configured to have 32 physical
> CPUs. The kernel build times before and after the patch were:
>
>                          Westmere               Haswell
>   Patch             32 vCPUs  48 vCPUs    32 vCPUs  48 vCPUs
>   -----             --------  --------    --------  --------
>   Before patch       3m27.4s  10m32.0s     2m00.8s  14m52.5s
>   After patch        3m01.3s   9m50.9s     2m00.5s  13m28.1s
>
> On Westmere, both the 32 and 48 vCPUs cases showed a sizeable
> increase in performance. For Haswell, there was some improvement in
> the overcommitted (48 vCPUs) case.
>
> Signed-off-by: Waiman Long
> ---

Hi Waiman,

For virtual guests, from my experiments, lock-waiter preemption was the
main concern, especially with fair locks. I find that this set of
patches is a step in the right direction to address it. Thanks for the
patches.