From: guoren@kernel.org
To: guoren@kernel.org
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-csky@vger.kernel.org, linux-arch@vger.kernel.org,
    Guo Ren, Peter Zijlstra, Will Deacon,
    Ingo Molnar, Waiman Long, Arnd Bergmann, Anup Patel
Subject: [PATCH v4 3/4] locking/qspinlock: Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32
Date: Sat, 27 Mar 2021 18:06:38 +0000
Message-Id: <1616868399-82848-4-git-send-email-guoren@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1616868399-82848-1-git-send-email-guoren@kernel.org>
References: <1616868399-82848-1-git-send-email-guoren@kernel.org>
List-ID: <linux-kernel.vger.kernel.org>

From: Guo Ren

Some architectures don't have a sub-word swap atomic instruction; they
only have a full-word one. The sub-word swap only improves performance
when NR_CPUS < 16K and the lock word is laid out as:

 *  0- 7: locked byte
 *     8: pending
 *  9-15: not used
 * 16-17: tail index
 * 18-31: tail cpu (+1)

Bits 9-15 are wasted purely so that xchg16 can be used in xchg_tail.
Let each architecture select xchg16/xchg32 to implement xchg_tail.

Signed-off-by: Guo Ren
Cc: Peter Zijlstra
Cc: Will Deacon
Cc: Ingo Molnar
Cc: Waiman Long
Cc: Arnd Bergmann
Cc: Anup Patel
---
 kernel/Kconfig.locks       |  3 +++
 kernel/locking/qspinlock.c | 44 +++++++++++++++++++++-----------------
 2 files changed, 27 insertions(+), 20 deletions(-)

diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 3de8fd11873b..d02f1261f73f 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -239,6 +239,9 @@ config LOCK_SPIN_ON_OWNER
 config ARCH_USE_QUEUED_SPINLOCKS
 	bool
 
+config ARCH_USE_QUEUED_SPINLOCKS_XCHG32
+	bool
+
 config QUEUED_SPINLOCKS
 	def_bool y if ARCH_USE_QUEUED_SPINLOCKS
 	depends on SMP
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cbff6ba53d56..54de0632c6a8 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -163,26 +163,6 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
 	WRITE_ONCE(lock->locked_pending, _Q_LOCKED_VAL);
 }
 
-/*
- * xchg_tail - Put in the new queue tail code word & retrieve previous one
- * @lock : Pointer to queued spinlock structure
- * @tail : The new queue tail code word
- * Return: The previous queue tail code word
- *
- * xchg(lock, tail), which heads an address dependency
- *
- * p,*,* -> n,*,* ; prev = xchg(lock, node)
- */
-static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
-{
-	/*
-	 * We can use relaxed semantics since the caller ensures that the
-	 * MCS node is properly initialized before updating the tail.
-	 */
-	return (u32)xchg_relaxed(&lock->tail,
-				 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
-}
-
 #else /* _Q_PENDING_BITS == 8 */
 
 /**
@@ -206,6 +186,30 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
 {
 	atomic_add(-_Q_PENDING_VAL + _Q_LOCKED_VAL, &lock->val);
 }
+#endif
+
+#if _Q_PENDING_BITS == 8 && !defined(CONFIG_ARCH_USE_QUEUED_SPINLOCKS_XCHG32)
+/*
+ * xchg_tail - Put in the new queue tail code word & retrieve previous one
+ * @lock : Pointer to queued spinlock structure
+ * @tail : The new queue tail code word
+ * Return: The previous queue tail code word
+ *
+ * xchg(lock, tail), which heads an address dependency
+ *
+ * p,*,* -> n,*,* ; prev = xchg(lock, node)
+ */
+static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
+{
+	/*
+	 * We can use relaxed semantics since the caller ensures that the
+	 * MCS node is properly initialized before updating the tail.
+	 */
+	return (u32)xchg_relaxed(&lock->tail,
+				 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
+}
+
+#else
 
 /**
  * xchg_tail - Put in the new queue tail code word & retrieve previous one
-- 
2.17.1
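
A note on the fallback path: when an architecture selects
ARCH_USE_QUEUED_SPINLOCKS_XCHG32, the #else branch above (truncated in
this copy of the mail) must update the tail with a full-word atomic. A
minimal sketch of what that looks like, modelled on the cmpxchg-based
xchg_tail mainline already uses when _Q_PENDING_BITS != 8 (the
identifiers _Q_LOCKED_PENDING_MASK and atomic_cmpxchg_relaxed() are
mainline's; this is an illustration, not the literal hunk cut off
above):

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new, val = atomic_read(&lock->val);

	for (;;) {
		/*
		 * Keep the locked and pending bits (0-15) as they are
		 * and swap in the new tail. This is a full 32-bit
		 * cmpxchg, so no sub-word atomic instruction is needed.
		 */
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		/*
		 * Relaxed ordering is sufficient: the caller initializes
		 * the MCS node before publishing the new tail.
		 */
		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}
	return old;
}

The cost relative to the xchg16 path is the retry loop under tail
contention; the xchg16 variant is a single wait-free swap.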