From: Waiman Long
To: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch, Davidlohr Bueso, Waiman Long
Subject: [PATCH v5 5/6] locking/pvqspinlock: Allow vCPUs kick-ahead
Date: Fri, 7 Aug 2015 23:18:00 -0400
Message-Id: <1439003881-17349-6-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1439003881-17349-1-git-send-email-Waiman.Long@hp.com>
References: <1439003881-17349-1-git-send-email-Waiman.Long@hp.com>

Frequent CPU halting (vmexit) and CPU kicking (vmenter) lengthen critical
sections and block forward progress. This patch implements a kick-ahead
mechanism where the unlocker kicks not only the queue-head vCPU, but also
up to four additional vCPUs behind the queue head, if they were halted.
The kicks are issued after exiting the critical section to improve
parallelism.

The amount of kick-ahead allowed depends on the number of vCPUs in the VM
guest. Currently, one vCPU kick-ahead is allowed per 4 available vCPUs,
up to a maximum of PV_KICK_AHEAD_MAX (4). Raising the maximum yields
diminishing returns; the current value of 4 is a compromise that gives a
good performance boost without penalizing the one vCPU that does all the
kicking too heavily.

Linux kernel builds were run in a KVM guest on an 8-socket, 4 cores/socket
Westmere-EX system and a 4-socket, 8 cores/socket Haswell-EX system. Both
systems were configured with 32 physical CPUs. The kernel build times
before and after the patch were:

                        Westmere                  Haswell
  Patch            32 vCPUs   48 vCPUs      32 vCPUs   48 vCPUs
  -----            --------   --------      --------   --------
  Before patch      3m21.9s   11m20.6s       2m08.6s   17m12.8s
  After patch       3m03.2s    9m21.1s       2m08.9s   16m14.8s

This improves performance quite substantially on Westmere, but not so
much on Haswell.
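For illustration, the sizing rule described above can be modeled in a few
lines of standalone C. This sketch is not part of the patch;
kick_ahead_limit() is an invented stand-in for the min(ncpus/4,
PV_KICK_AHEAD_MAX) computation that __pv_init_lock_hash() performs below:

#include <stdio.h>

#define PV_KICK_AHEAD_MAX	4	/* same cap as in the patch */

/* Model of the sizing rule: 1 kick-ahead per 4 available vCPUs, capped. */
static int kick_ahead_limit(int ncpus)
{
	int limit = ncpus / 4;

	return limit < PV_KICK_AHEAD_MAX ? limit : PV_KICK_AHEAD_MAX;
}

int main(void)
{
	int sizes[] = { 2, 8, 16, 32, 48 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%2d vCPUs -> kick-ahead limit %d\n",
		       sizes[i], kick_ahead_limit(sizes[i]));
	return 0;
}

Note that both the 32-vCPU and 48-vCPU guests benchmarked above already
run at the cap of 4, while guests with fewer than 4 vCPUs get no
kick-ahead at all.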
Signed-off-by: Waiman Long
---
 kernel/locking/qspinlock_paravirt.h |   71 +++++++++++++++++++++++++++++++++-
 1 files changed, 68 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 7c9d6ed..9996609 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -57,6 +57,7 @@ enum pv_qlock_stat {
 	pvstat_wait_again,
 	pvstat_kick_wait,
 	pvstat_kick_unlock,
+	pvstat_kick_ahead,
 	pvstat_pend_lock,
 	pvstat_pend_fail,
 	pvstat_spurious,
@@ -77,6 +78,7 @@ static const char * const stat_fsnames[pvstat_num] = {
 	[pvstat_wait_again]  = "wait_again_count",
 	[pvstat_kick_wait]   = "kick_wait_count",
 	[pvstat_kick_unlock] = "kick_unlock_count",
+	[pvstat_kick_ahead]  = "kick_ahead_count",
 	[pvstat_pend_lock]   = "pending_lock_count",
 	[pvstat_pend_fail]   = "pending_fail_count",
 	[pvstat_spurious]    = "spurious_wakeup",
@@ -89,7 +91,7 @@ static atomic_t pvstats[pvstat_num];
  * pv_kick_latencies = sum of all pv_kick latencies in ns
  * pv_wake_latencies = sum of all wakeup latencies in ns
  *
- * Avg kick latency = pv_kick_latencies/kick_unlock_count
+ * Avg kick latency = pv_kick_latencies/(kick_unlock_count + kick_ahead_count)
  * Avg wake latency = pv_wake_latencies/kick_wait_count
  * Avg # of hops/hash = hash_hops_count/kick_unlock_count
  */
@@ -221,6 +223,18 @@ static struct pv_hash_entry *pv_lock_hash;
 static unsigned int pv_lock_hash_bits __read_mostly;
 
 /*
+ * Allow kick-ahead of vCPUs at unlock time
+ *
+ * The pv_kick_ahead value is set by a simple formula that 1 vCPU kick-ahead
+ * is allowed per 4 vCPUs available up to a maximum of PV_KICK_AHEAD_MAX.
+ * There are diminishing returns in increasing PV_KICK_AHEAD_MAX. The current
+ * value of 4 is a good compromise that gives a good performance boost without
+ * penalizing the vCPU that is doing the kicking by too much.
+ */
+#define PV_KICK_AHEAD_MAX	4
+static int pv_kick_ahead __read_mostly;
+
+/*
  * Allocate memory for the PV qspinlock hash buckets
  *
  * This function should be called from the paravirt spinlock initialization
@@ -228,7 +242,8 @@ static unsigned int pv_lock_hash_bits __read_mostly;
  */
 void __init __pv_init_lock_hash(void)
 {
-	int pv_hash_size = ALIGN(4 * num_possible_cpus(), PV_HE_PER_LINE);
+	int ncpus = num_possible_cpus();
+	int pv_hash_size = ALIGN(4 * ncpus, PV_HE_PER_LINE);
 
 	if (pv_hash_size < PV_HE_MIN)
 		pv_hash_size = PV_HE_MIN;
@@ -242,6 +257,13 @@ void __init __pv_init_lock_hash(void)
 					       pv_hash_size, 0, HASH_EARLY,
 					       &pv_lock_hash_bits, NULL,
 					       pv_hash_size, pv_hash_size);
+	/*
+	 * Enable the unlock kick ahead mode according to the number of
+	 * vCPUs available.
+	 */
+	pv_kick_ahead = min(ncpus/4, PV_KICK_AHEAD_MAX);
+	if (pv_kick_ahead)
+		pr_info("PV unlock kick ahead max count = %d\n", pv_kick_ahead);
 }
 
 #define for_each_hash_entry(he, offset, hash)				\
@@ -551,6 +573,26 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
 }
 
 /*
+ * Helper to get the address of the next kickable node
+ *
+ * The node has to be in the halted state. The state will then be
+ * transitioned to the running state. If no kickable node is found, NULL
+ * will be returned.
+ */
+static inline struct pv_node *pv_get_kick_node(struct pv_node *node)
+{
+	struct pv_node *next = (struct pv_node *)READ_ONCE(node->mcs.next);
+
+	if (!next || (READ_ONCE(next->state) != vcpu_halted))
+		return NULL;
+
+	if (xchg(&next->state, vcpu_running) != vcpu_halted)
+		next = NULL;	/* No kicking is needed */
+
+	return next;
+}
+
+/*
  * PV versions of the unlock fastpath and slowpath functions to be used
  * instead of queued_spin_unlock().
  */
@@ -558,7 +600,8 @@ __visible void
 __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 {
 	struct __qspinlock *l = (void *)lock;
-	struct pv_node *node;
+	struct pv_node *node, *next;
+	int i, nr_kick, cpus[PV_KICK_AHEAD_MAX];
 
 	if (unlikely(locked != _Q_SLOW_VAL)) {
 		WARN(!debug_locks_silent,
@@ -583,6 +626,20 @@ __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 	node = pv_unhash(lock);
 
 	/*
+	 * Implement unlock kick-ahead
+	 *
+	 * Access the next group of nodes, if available, and prepare to kick
+	 * them after releasing the lock if they are in the halted state. This
+	 * should improve performance on an overcommitted system.
+	 */
+	for (nr_kick = 0, next = node; nr_kick < pv_kick_ahead; nr_kick++) {
+		next = pv_get_kick_node(next);
+		if (!next)
+			break;
+		cpus[nr_kick] = next->cpu;
+	}
+
+	/*
 	 * Now that we have a reference to the (likely) blocked pv_node,
 	 * release the lock.
 	 */
@@ -597,6 +654,14 @@ __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 	 */
 	pvstat_inc(pvstat_kick_unlock);
 	pv_kick(node->cpu);
+
+	/*
+	 * Kick the next group of vCPUs, if available.
+	 */
+	for (i = 0; i < nr_kick; i++) {
+		pvstat_inc(pvstat_kick_ahead);
+		pv_kick(cpus[i]);
+	}
 }
 
 /*
-- 
1.7.1
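One subtlety worth spelling out: in pv_get_kick_node(), the READ_ONCE()
of next->state is only a cheap filter; it is the xchg() that makes
claiming a halted vCPU race-free, so at most one party observes the
vcpu_halted -> vcpu_running transition and issues the kick. A minimal
userspace model of that claim pattern, using C11 atomics and invented
names (demo_node, try_claim_kick), might look like:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum vcpu_state { vcpu_running, vcpu_halted };

struct demo_node {
	_Atomic int state;	/* vcpu_running or vcpu_halted */
};

/*
 * Return true if the caller won the right to kick this node. Even if
 * the vCPU wakes up concurrently, the atomic exchange guarantees that
 * only one party sees vcpu_halted and proceeds with the kick.
 */
static bool try_claim_kick(struct demo_node *n)
{
	if (atomic_load_explicit(&n->state, memory_order_acquire) != vcpu_halted)
		return false;	/* cheap filter, like the READ_ONCE() check */
	return atomic_exchange_explicit(&n->state, vcpu_running,
					memory_order_acq_rel) == vcpu_halted;
}

int main(void)
{
	struct demo_node n = { .state = vcpu_halted };

	printf("first claim:  %d\n", try_claim_kick(&n));	/* 1: won */
	printf("second claim: %d\n", try_claim_kick(&n));	/* 0: lost */
	return 0;
}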