From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
	catalin.marinas@arm.com, Will Deacon <will.deacon@arm.com>
Subject: [PATCH 04/10] locking/qspinlock: Use atomic_cond_read_acquire
Date: Thu, 5 Apr 2018 17:59:01 +0100
Message-Id: <1522947547-24081-5-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
References: <1522947547-24081-1-git-send-email-will.deacon@arm.com>
Rather than dig into the counter field of the atomic_t inside the
qspinlock structure so that we can call smp_cond_load_acquire(), use
atomic_cond_read_acquire() instead, which operates on the atomic_t
directly.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cdfa7b7328a8..291e1526d27b 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -331,8 +331,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * barriers.
 	 */
 	if (val & _Q_LOCKED_MASK)
-		smp_cond_load_acquire(&lock->val.counter,
-				      !(VAL & _Q_LOCKED_MASK));
+		atomic_cond_read_acquire(&lock->val,
+					 !(VAL & _Q_LOCKED_MASK));
 	/*
 	 * take ownership and clear the pending bit.
 	 *
@@ -433,8 +433,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 *
 	 * The PV pv_wait_head_or_lock function, if active, will acquire
 	 * the lock and return a non-zero value. So we have to skip the
-	 * smp_cond_load_acquire() call. As the next PV queue head hasn't been
-	 * designated yet, there is no way for the locked value to become
+	 * atomic_cond_read_acquire() call. As the next PV queue head hasn't
+	 * been designated yet, there is no way for the locked value to become
 	 * _Q_SLOW_VAL. So both the set_locked() and the
 	 * atomic_cmpxchg_relaxed() calls will be safe.
 	 *
@@ -444,7 +444,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if ((val = pv_wait_head_or_lock(lock, node)))
 		goto locked;
 
-	val = smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_PENDING_MASK));
+	val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));
 
 locked:
 	/*
@@ -461,7 +461,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	/* In the PV case we might already have _Q_LOCKED_VAL set */
 	if ((val & _Q_TAIL_MASK) == tail) {
 		/*
-		 * The smp_cond_load_acquire() call above has provided the
+		 * The atomic_cond_read_acquire() call above has provided the
 		 * necessary acquire semantics required for locking.
 		 */
 		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
-- 
2.1.4
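
For readers following along: the reason this is a non-functional cleanup
is that atomic_cond_read_acquire() is, in the generic headers of this
kernel generation, a thin wrapper that performs exactly the ->counter
dereference the old code open-coded. Below is a minimal sketch of the two
definitions as they appear around v4.16; these are the generic fallbacks,
and architectures (arm64, for example) may override
smp_cond_load_acquire() with a more efficient wait-based implementation.

/*
 * Sketch of the generic fallback in include/asm-generic/barrier.h:
 * spin, reloading *ptr into the caller-visible variable VAL, until
 * cond_expr is true, then provide ACQUIRE ordering for the final load.
 */
#ifndef smp_cond_load_acquire
#define smp_cond_load_acquire(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		VAL = READ_ONCE(*__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	smp_acquire__after_ctrl_dep();			\
	VAL;						\
})
#endif

/*
 * Sketch of the wrapper in include/linux/atomic.h: hide the atomic_t
 * internals (->counter) from callers such as qspinlock.
 */
#ifndef atomic_cond_read_acquire
#define atomic_cond_read_acquire(v, c) \
	smp_cond_load_acquire(&(v)->counter, (c))
#endif

With these definitions, atomic_cond_read_acquire(&lock->val,
!(VAL & _Q_LOCKED_PENDING_MASK)) expands to the same code as the
smp_cond_load_acquire(&lock->val.counter, ...) call it replaces; the
patch only stops the qspinlock code from reaching into the
representation of atomic_t.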