Date: Thu, 26 Nov 2015 11:49:14 +0800
Subject: Re: Improve spinlock performance by moving work to one core
From: Ling Ma
To: Waiman Long
Cc: Peter Zijlstra, mingo@redhat.com, linux-kernel@vger.kernel.org, Ling

Hi Longman,

All of the compared data comes from the operations below in spinlock-test.patch:

+#if ORG_QUEUED_SPINLOCK
+	org_queued_spin_lock((struct qspinlock *)&pa.n->list_lock);
+	refill_fn(&pa);
+	org_queued_spin_unlock((struct qspinlock *)&pa.n->list_lock);
+#else
+	new_spin_lock((struct nspinlock *)&pa.n->list_lock, refill_fn, &pa);
+#endif

and

+#if ORG_QUEUED_SPINLOCK
+	org_queued_spin_lock((struct qspinlock *)&pa.n->list_lock);
+	flusharray_fn(&pa);
+	org_queued_spin_unlock((struct qspinlock *)&pa.n->list_lock);
+#else
+	new_spin_lock((struct nspinlock *)&pa.n->list_lock, flusharray_fn, &pa);
+#endif

So the comparison is correct and fair.

Yes, we changed the code in include/asm-generic/qspinlock.h, both to keep the
modification simple and to avoid a kernel crash. For example, suppose there are
10 lock scenarios that could use the new spinlock; because the bottleneck comes
from only one or two of them, we convert only those, and the other scenarios
keep using the lock in qspinlock.h. We have to modify that code, otherwise an
operation would stay hooked in the queue and never be woken up.

Thanks
Ling

2015-11-26 3:05 GMT+08:00 Waiman Long:
> On 11/23/2015 04:41 AM, Ling Ma wrote:
>> Hi Longman,
>>
>> The attachments are the user-space application thread.c and the kernel
>> patch spinlock-test.patch, based on kernel 4.3.0-rc4.
>>
>> We run thread.c with the kernel patch and test the original and the new
>> spinlock respectively. perf top -G shows that with the original spinlock,
>> thread.c causes cache_alloc_refill and cache_flusharray to spend ~25% of
>> the time; after introducing the new spinlock in those two functions, the
>> cost drops to ~22%.
>>
>> The printed data also shows that the new spinlock improves performance
>> by about 15% (93841765576 / 81036259588) on E5-2699V3.
>>
>> Appreciate your comments.
>>
>
> I saw that you made the following changes in the code:
>
>  static __always_inline void queued_spin_lock(struct qspinlock *lock)
>  {
>         u32 val;
> -
> +repeat:
>         val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>         if (likely(val == 0))
>                 return;
> -       queued_spin_lock_slowpath(lock, val);
> +       goto repeat;
> +       //queued_spin_lock_slowpath(lock, val);
>  }
>
> This effectively turns the queued spinlock into an unfair byte lock.
> Without a pause to moderate the cmpxchg() calls, that is especially bad
> for performance. Does the performance data above compare that unfair byte
> lock against your new spinlock?
>
> Cheers,
> Longman
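
The new_spin_lock(lock, fn, arg) call sites quoted above suggest a delegation
(combining) lock: a waiter publishes its critical section as a function plus
argument, and the current lock holder executes the queued sections on its own
core, so the protected slab data touched by refill_fn()/flusharray_fn() stays
hot in one cache. The patch itself is not reproduced in this message, so the
following is only a minimal user-space sketch of that idea; the nspinlock and
nspin_node layout, the MCS-style request queue, and all names are assumptions,
not the actual kernel implementation.

/*
 * Minimal user-space sketch of a delegation ("move work to one core") lock,
 * assuming the semantics implied by the new_spin_lock(lock, fn, arg) call
 * sites quoted above.  The structures and names are illustrative only.
 */
#include <stdatomic.h>
#include <stddef.h>

typedef void (*cs_fn_t)(void *arg);

struct nspin_node {
	cs_fn_t fn;			/* critical section to run            */
	void *arg;			/* its argument                       */
	_Atomic(int) done;		/* set once the section has been run  */
	_Atomic(struct nspin_node *) next;
};

struct nspinlock {
	_Atomic(struct nspin_node *) tail;	/* tail of the request queue */
};

/*
 * Publish (fn, arg).  If someone is already queued, wait until a combiner
 * has executed the request; otherwise become the combiner and drain the
 * queue, running every pending critical section on this core.
 */
static void new_spin_lock_sketch(struct nspinlock *lock, cs_fn_t fn, void *arg)
{
	struct nspin_node node = { .fn = fn, .arg = arg };
	struct nspin_node *prev, *cur, *next;

	atomic_init(&node.done, 0);
	atomic_init(&node.next, NULL);

	prev = atomic_exchange_explicit(&lock->tail, &node, memory_order_acq_rel);
	if (prev) {
		/* Hand the request to the queue ... */
		atomic_store_explicit(&prev->next, &node, memory_order_release);
		/* ... and wait until a combiner has executed it for us. */
		while (!atomic_load_explicit(&node.done, memory_order_acquire))
			__builtin_ia32_pause();	/* x86 PAUSE; the test box is x86 */
		return;
	}

	/* The queue was empty: we are the combiner.  Run our own request ... */
	fn(arg);

	/* ... then drain requests that arrived while we were running. */
	cur = &node;
	for (;;) {
		next = atomic_load_explicit(&cur->next, memory_order_acquire);
		if (!next) {
			/* No visible successor: try to close the queue. */
			struct nspin_node *expect = cur;
			if (atomic_compare_exchange_strong_explicit(
					&lock->tail, &expect, NULL,
					memory_order_acq_rel, memory_order_acquire)) {
				if (cur != &node)
					atomic_store_explicit(&cur->done, 1,
							      memory_order_release);
				return;
			}
			/* A new request is being linked in; wait for the pointer. */
			while (!(next = atomic_load_explicit(&cur->next,
							     memory_order_acquire)))
				__builtin_ia32_pause();
		}
		/* cur's node is no longer needed, so let its owner return. */
		if (cur != &node)
			atomic_store_explicit(&cur->done, 1, memory_order_release);
		/* Execute the waiter's critical section on this core. */
		next->fn(next->arg);
		cur = next;
	}
}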
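
Waiman's objection can also be reduced to a small sketch: the quoted change
makes every waiter retry cmpxchg() back to back, which is an unfair
test-and-set style lock that hammers the lock cache line, whereas a pause
between attempts at least moderates the traffic. This is a user-space C11
illustration, not the kernel code; the lock word and the PAUSE usage are
assumptions.

#include <stdatomic.h>

/* Unfair lock as produced by the quoted change: no pause between retries,
 * so waiters generate continuous cmpxchg traffic and fairness is gone. */
static void unfair_lock_no_pause(_Atomic int *lock)
{
	int expected;

	for (;;) {
		expected = 0;
		if (atomic_compare_exchange_weak(lock, &expected, 1))
			return;
		/* no pause here */
	}
}

/* The same loop with a PAUSE between attempts, as suggested in the reply;
 * still unfair, but with far less cache-line contention. */
static void unfair_lock_with_pause(_Atomic int *lock)
{
	int expected;

	for (;;) {
		expected = 0;
		if (atomic_compare_exchange_weak(lock, &expected, 1))
			return;
		__builtin_ia32_pause();		/* x86 PAUSE */
	}
}

static void unfair_unlock(_Atomic int *lock)
{
	atomic_store_explicit(lock, 0, memory_order_release);
}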