From: Peter Zijlstra
To: Waiman Long
Cc: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org,
	linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch,
	Davidlohr Bueso
Subject: Re: [PATCH tip/locking/core v9 4/6] locking/pvqspinlock: Collect slowpath lock statistics
Date: Mon, 2 Nov 2015 17:40:40 +0100
Message-ID: <20151102164040.GV3604@twins.programming.kicks-ass.net>
In-Reply-To: <1446247597-61863-5-git-send-email-Waiman.Long@hpe.com>

On Fri, Oct 30, 2015 at 07:26:35PM -0400, Waiman Long wrote:
> This patch enables the accumulation of kicking and waiting related
> PV qspinlock statistics when the new QUEUED_LOCK_STAT configuration
> option is selected. It also enables the collection of data which
> enable us to calculate the kicking and wakeup latencies which have
> a heavy dependency on the CPUs being used.
>
> The statistical counters are per-cpu variables to minimize the
> performance overhead in their updates. These counters are exported
> via the sysfs filesystem under the /sys/kernel/qlockstat directory.
> When the corresponding sysfs files are read, summation and computing
> of the required data are then performed.

Why did you switch to sysfs? You can create custom debugfs files too.

> @@ -259,7 +275,7 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
>  		if (READ_ONCE(pn->state) == vcpu_hashed)
>  			lp = (struct qspinlock **)1;
>
> -	for (;;) {
> +	for (;; waitcnt++) {
>  		for (loop = SPIN_THRESHOLD; loop; loop--) {
>  			if (!READ_ONCE(l->locked))
>  				return;

Did you check that the waitcnt++ goes away when !STAT?

> +/*
> + * Return the average kick latency (ns) = pv_latency_kick/pv_kick_unlock
> + */
> +static ssize_t
> +kick_latency_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
> +{
> +	int cpu;
> +	u64 latencies = 0, kicks = 0;
> +
> +	for_each_online_cpu(cpu) {

I think you need for_each_possible_cpu(); otherwise the results will
change with CPU hotplug operations.

> +		kicks += per_cpu(qstats[qstat_pv_kick_unlock], cpu);
> +		latencies += per_cpu(qstats[qstat_pv_latency_kick], cpu);
> +	}
> +
> +	/* Rounded to the nearest ns */
> +	return sprintf(buf, "%llu\n", kicks ? (latencies + kicks/2)/kicks : 0);
> +}
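
Something like the below, completely untested, is roughly what the debugfs
route could look like. It reuses the qstats[] indices quoted from the patch
and sums over the possible mask; the "qlockstat" directory, the file name
and the init hook are only placeholders, not the patch's actual interface:

/*
 * Untested sketch: export the average kick latency through a custom
 * debugfs file instead of a sysfs attribute.  qstats[],
 * qstat_pv_kick_unlock and qstat_pv_latency_kick are assumed to be the
 * per-cpu counters introduced by the patch.
 */
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/percpu.h>

static int kick_latency_show(struct seq_file *m, void *v)
{
	u64 latencies = 0, kicks = 0;
	int cpu;

	/* possible mask, so hotplug doesn't make counts disappear */
	for_each_possible_cpu(cpu) {
		kicks     += per_cpu(qstats[qstat_pv_kick_unlock], cpu);
		latencies += per_cpu(qstats[qstat_pv_latency_kick], cpu);
	}

	/* Rounded to the nearest ns */
	seq_printf(m, "%llu\n", kicks ? (latencies + kicks / 2) / kicks : 0);
	return 0;
}

static int kick_latency_open(struct inode *inode, struct file *file)
{
	return single_open(file, kick_latency_show, NULL);
}

static const struct file_operations kick_latency_fops = {
	.open		= kick_latency_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static int __init qlockstat_debugfs_init(void)
{
	struct dentry *d = debugfs_create_dir("qlockstat", NULL);

	debugfs_create_file("kick_latency", 0400, d, NULL,
			    &kick_latency_fops);
	return 0;
}
fs_initcall(qlockstat_debugfs_init);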
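
And on the !STAT question, one hypothetical way to guarantee the waitcnt
bump compiles to nothing when CONFIG_QUEUED_LOCK_STAT is off -- the helper
name below is made up for illustration, not the patch's actual interface:

/*
 * Hypothetical guard: expands to an expression in both configurations so
 * it stays legal in the for-loop increment, but emits no code when the
 * stats option is disabled.
 */
#ifdef CONFIG_QUEUED_LOCK_STAT
#define qstat_waitcnt_inc(cnt)	((cnt)++)
#else
#define qstat_waitcnt_inc(cnt)	((void)(cnt))	/* no side effect */
#endif

	/* the loop in pv_wait_head() would then read: */
	for (;; qstat_waitcnt_inc(waitcnt)) {
		...
	}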