From: Puranjay Mohan
To: Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, linux-kernel@vger.kernel.org,
    linux-riscv@lists.infradead.org
Cc: puranjay12@gmail.com
Subject: [PATCH] riscv/atomic.h: optimize ops with acquire/release ordering
Date: Sun, 5 May 2024 12:33:40 +0000
Message-Id: <20240505123340.38495-1-puranjay@kernel.org>

Currently, atomic ops with acquire or release ordering are implemented
as relaxed atomic ops followed by an acquire fence or preceded by a
release fence.

Section 8.1 of "The RISC-V Instruction Set Manual Volume I:
Unprivileged ISA", titled "Specifying Ordering of Atomic Instructions",
says:

| To provide more efficient support for release consistency [5], each
| atomic instruction has two bits, aq and rl, used to specify additional
| memory ordering constraints as viewed by other RISC-V harts.

and

| If only the aq bit is set, the atomic memory operation is treated as
| an acquire access.
| If only the rl bit is set, the atomic memory operation is treated as a
| release access.

So, rather than using two instructions (relaxed atomic op + fence), use
a single atomic op instruction with acquire/release ordering.

Example program:

  atomic_t cnt = ATOMIC_INIT(0);
  atomic_fetch_add_acquire(1, &cnt);
  atomic_fetch_add_release(1, &cnt);

Before:

  amoadd.w    a4,a5,(a4)   // Atomic add with relaxed ordering
  fence       r,rw         // Fence to force Acquire ordering

  fence       rw,w         // Fence to force Release ordering
  amoadd.w    a4,a5,(a4)   // Atomic add with relaxed ordering

After:

  amoadd.w.aq a4,a5,(a4)   // Atomic add with Acquire ordering
  amoadd.w.rl a4,a5,(a4)   // Atomic add with Release ordering

Signed-off-by: Puranjay Mohan
---
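Note for context (not part of the commit message or the diff): as an
illustration, assuming ATOMIC_OPS(add, add, +, i) with asm_type "w" and
an empty prefix, the new acquire variant generated by the modified
fetch-op macro below expands to roughly the following C, i.e. a single
AMO with the aq bit set instead of a relaxed AMO followed by a fence:

  /* Illustrative expansion only; the real function is generated by the
   * macro changed in the first hunk below. */
  static __always_inline int arch_atomic_fetch_add_acquire(int i, atomic_t *v)
  {
  	register int ret;
  	__asm__ __volatile__ (
  		"	amoadd.w.aq %1, %2, %0"	/* %0 = address, %1 = old value, %2 = addend */
  		: "+A" (v->counter), "=r" (ret)
  		: "r" (i)
  		: "memory");
  	return ret;
  }

The release variant is identical except that the instruction carries the
.rl suffix instead of .aq.
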
#asm_type ".rl %1, %2, %0" \ + : "+A" (v->counter), "=r" (ret) \ + : "r" (I) \ + : "memory"); \ + return ret; \ +} \ +static __always_inline \ c_type arch_atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v) \ { \ register c_type ret; \ @@ -117,6 +141,18 @@ c_type arch_atomic##prefix##_##op##_return_relaxed(c_type i, \ return arch_atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I; \ } \ static __always_inline \ +c_type arch_atomic##prefix##_##op##_return_acquire(c_type i, \ + atomic##prefix##_t *v) \ +{ \ + return arch_atomic##prefix##_fetch_##op##_acquire(i, v) c_op I; \ +} \ +static __always_inline \ +c_type arch_atomic##prefix##_##op##_return_release(c_type i, \ + atomic##prefix##_t *v) \ +{ \ + return arch_atomic##prefix##_fetch_##op##_release(i, v) c_op I; \ +} \ +static __always_inline \ c_type arch_atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \ { \ return arch_atomic##prefix##_fetch_##op(i, v) c_op I; \ @@ -138,23 +174,39 @@ ATOMIC_OPS(add, add, +, i) ATOMIC_OPS(sub, add, +, -i) #define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed +#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire +#define arch_atomic_add_return_release arch_atomic_add_return_release #define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed +#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire +#define arch_atomic_sub_return_release arch_atomic_sub_return_release #define arch_atomic_add_return arch_atomic_add_return #define arch_atomic_sub_return arch_atomic_sub_return #define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed +#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire +#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release #define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed +#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire +#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release #define arch_atomic_fetch_add arch_atomic_fetch_add #define arch_atomic_fetch_sub arch_atomic_fetch_sub #ifndef CONFIG_GENERIC_ATOMIC64 #define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed +#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire +#define arch_atomic64_add_return_release arch_atomic64_add_return_release #define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed +#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire +#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release #define arch_atomic64_add_return arch_atomic64_add_return #define arch_atomic64_sub_return arch_atomic64_sub_return #define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed +#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire +#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release #define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed +#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire +#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release #define arch_atomic64_fetch_add arch_atomic64_fetch_add #define arch_atomic64_fetch_sub arch_atomic64_fetch_sub #endif @@ -175,16 +227,28 @@ ATOMIC_OPS( or, or, i) ATOMIC_OPS(xor, xor, i) #define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed +#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire +#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release #define arch_atomic_fetch_or_relaxed 
+#define arch_atomic_fetch_or_acquire	arch_atomic_fetch_or_acquire
+#define arch_atomic_fetch_or_release	arch_atomic_fetch_or_release
 #define arch_atomic_fetch_xor_relaxed	arch_atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_xor_acquire	arch_atomic_fetch_xor_acquire
+#define arch_atomic_fetch_xor_release	arch_atomic_fetch_xor_release
 #define arch_atomic_fetch_and		arch_atomic_fetch_and
 #define arch_atomic_fetch_or		arch_atomic_fetch_or
 #define arch_atomic_fetch_xor		arch_atomic_fetch_xor
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define arch_atomic64_fetch_and_relaxed	arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_and_acquire	arch_atomic64_fetch_and_acquire
+#define arch_atomic64_fetch_and_release	arch_atomic64_fetch_and_release
 #define arch_atomic64_fetch_or_relaxed	arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_or_acquire	arch_atomic64_fetch_or_acquire
+#define arch_atomic64_fetch_or_release	arch_atomic64_fetch_or_release
 #define arch_atomic64_fetch_xor_relaxed	arch_atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_xor_acquire	arch_atomic64_fetch_xor_acquire
+#define arch_atomic64_fetch_xor_release	arch_atomic64_fetch_xor_release
 #define arch_atomic64_fetch_and		arch_atomic64_fetch_and
 #define arch_atomic64_fetch_or		arch_atomic64_fetch_or
 #define arch_atomic64_fetch_xor		arch_atomic64_fetch_xor
-- 
2.40.1