From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org, mingo@kernel.org,
    boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com, catalin.marinas@arm.com,
    Will Deacon <will.deacon@arm.com>
Subject: [PATCH 03/10] locking/qspinlock: Kill cmpxchg loop when claiming lock from head of queue
Date: Thu, 5 Apr 2018 17:59:00 +0100
Message-Id: <1522947547-24081-4-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>

When a queued locker reaches the head of the queue, it claims the lock
by setting _Q_LOCKED_VAL in the lockword. If there isn't contention, it
must also clear the tail as part of this operation so that subsequent
lockers can avoid taking the slowpath altogether.

Currently this is expressed as a cmpxchg loop that in practice runs at
most two iterations, which is confusing to the reader and unhelpful to
the compiler. Rewrite the cmpxchg loop without the loop, so that a
failed cmpxchg implies that there is contention and we just need to
write to _Q_LOCKED_VAL without considering the rest of the lockword.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index b75361d23ea5..cdfa7b7328a8 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -457,24 +457,21 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * and nobody is pending, clear the tail code and grab the lock.
 	 * Otherwise, we only need to grab the lock.
 	 */
-	for (;;) {
-		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
-			set_locked(lock);
-			break;
-		}
+
+	/* In the PV case we might already have _Q_LOCKED_VAL set */
+	if ((val & _Q_TAIL_MASK) == tail) {
 		/*
 		 * The smp_cond_load_acquire() call above has provided the
-		 * necessary acquire semantics required for locking. At most
-		 * two iterations of this loop may be ran.
+		 * necessary acquire semantics required for locking.
 		 */
 		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
 		if (old == val)
-			goto release;	/* No contention */
-
-		val = old;
+			goto release; /* No contention */
 	}
 
+	/* Either somebody is queued behind us or _Q_PENDING_VAL is set */
+	set_locked(lock);
+
 	/*
 	 * contended path; wait for next if not observed yet, release.
 	 */
-- 
2.1.4
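
As a rough illustration of the shape the patch moves to, below is a
minimal userspace sketch of the post-patch claim path using C11
atomics. The fake_qspinlock type, the simplified mask constants and the
helpers here are made-up stand-ins for this example only, not the
kernel's real qspinlock layout or API:

/*
 * Userspace sketch only: simplified stand-ins for the qspinlock fields,
 * built with C11 atomics (e.g. gcc -std=c11 sketch.c).
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define Q_LOCKED_VAL	1U		/* stand-in for _Q_LOCKED_VAL */
#define Q_TAIL_MASK	0xffff0000U	/* stand-in for _Q_TAIL_MASK  */

struct fake_qspinlock {
	_Atomic uint32_t val;
};

/* Set only the locked byte, leaving the other bits alone (simplified). */
static void set_locked(struct fake_qspinlock *lock)
{
	atomic_fetch_or_explicit(&lock->val, Q_LOCKED_VAL,
				 memory_order_relaxed);
}

/*
 * Post-patch shape of the claim at the head of the queue: one cmpxchg
 * attempt for the uncontended case; failure implies contention, so we
 * just set the locked byte.
 */
static void claim_lock_from_head(struct fake_qspinlock *lock,
				 uint32_t val, uint32_t tail)
{
	if ((val & Q_TAIL_MASK) == tail) {
		if (atomic_compare_exchange_strong_explicit(&lock->val, &val,
					Q_LOCKED_VAL,
					memory_order_relaxed,
					memory_order_relaxed))
			return;	/* No contention */
	}

	/* Either somebody is queued behind us or a pending bit is set. */
	set_locked(lock);
}

int main(void)
{
	/* Lockword holding only our tail encoding: the uncontended case. */
	struct fake_qspinlock lock = { .val = 0x00010000U };

	claim_lock_from_head(&lock, 0x00010000U, 0x00010000U);
	printf("lockword after claim: 0x%08x\n",
	       (unsigned)atomic_load(&lock.val));
	return 0;
}

The intent of the change is visible in the sketch: the uncontended
transition is attempted exactly once, and a failed cmpxchg is itself
the proof that contention exists, so only the locked byte needs to be
written.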