From: Leonardo Bras
To: Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland, Paul Walmsley,
    Palmer Dabbelt, Albert Ou, Leonardo Bras, Guo Ren, Andrea Parri,
    Geert Uytterhoeven, Ingo Molnar, Andrzej Hajda
Cc: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH v1 3/5] riscv/atomic.h: Deduplicate arch_atomic.*
Date: Wed, 3 Jan 2024 13:32:01 -0300
Message-ID: <20240103163203.72768-5-leobras@redhat.com>
In-Reply-To: <20240103163203.72768-2-leobras@redhat.com>
References: <20240103163203.72768-2-leobras@redhat.com>

Some functions use mostly the same asm for their 32-bit and 64-bit
versions. Introduce a macro that is generic enough to cover both widths
and use it to avoid duplicating the code.

(This causes no change in the generated asm.)

Signed-off-by: Leonardo Bras
Reviewed-by: Guo Ren
Reviewed-by: Andrea Parri
Tested-by: Guo Ren
---
 arch/riscv/include/asm/atomic.h | 164 +++++++++++++++-----------------
 1 file changed, 76 insertions(+), 88 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index f5dfef6c2153f..80cca7ac16fd3 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -196,22 +196,28 @@ ATOMIC_OPS(xor, xor, i)
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
 
+#define _arch_atomic_fetch_add_unless(_prev, _rc, counter, _a, _u, sfx) \
+({ \
+	__asm__ __volatile__ ( \
+		"0: lr." sfx " %[p], %[c]\n" \
+		" beq %[p], %[u], 1f\n" \
+		" add %[rc], %[p], %[a]\n" \
+		" sc." sfx ".rl %[rc], %[rc], %[c]\n" \
+		" bnez %[rc], 0b\n" \
+		" fence rw, rw\n" \
+		"1:\n" \
+		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \
+		: [a]"r" (_a), [u]"r" (_u) \
+		: "memory"); \
+})
+
 /* This is required to provide a full barrier on success. */
 static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int prev, rc;
 
-	__asm__ __volatile__ (
-		"0: lr.w %[p], %[c]\n"
-		" beq %[p], %[u], 1f\n"
-		" add %[rc], %[p], %[a]\n"
-		" sc.w.rl %[rc], %[rc], %[c]\n"
-		" bnez %[rc], 0b\n"
-		" fence rw, rw\n"
-		"1:\n"
-		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
-		: [a]"r" (a), [u]"r" (u)
-		: "memory");
+	_arch_atomic_fetch_add_unless(prev, rc, v->counter, a, u, "w");
+
 	return prev;
 }
 #define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
@@ -222,77 +228,86 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a,
 	s64 prev;
 	long rc;
 
-	__asm__ __volatile__ (
-		"0: lr.d %[p], %[c]\n"
-		" beq %[p], %[u], 1f\n"
-		" add %[rc], %[p], %[a]\n"
-		" sc.d.rl %[rc], %[rc], %[c]\n"
-		" bnez %[rc], 0b\n"
-		" fence rw, rw\n"
-		"1:\n"
-		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
-		: [a]"r" (a), [u]"r" (u)
-		: "memory");
+	_arch_atomic_fetch_add_unless(prev, rc, v->counter, a, u, "d");
+
 	return prev;
 }
 #define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 #endif
 
+#define _arch_atomic_inc_unless_negative(_prev, _rc, counter, sfx) \
+({ \
+	__asm__ __volatile__ ( \
+		"0: lr." sfx " %[p], %[c]\n" \
+		" bltz %[p], 1f\n" \
+		" addi %[rc], %[p], 1\n" \
+		" sc." sfx ".rl %[rc], %[rc], %[c]\n" \
+		" bnez %[rc], 0b\n" \
+		" fence rw, rw\n" \
+		"1:\n" \
+		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \
+		: \
+		: "memory"); \
+})
+
 static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v)
 {
 	int prev, rc;
 
-	__asm__ __volatile__ (
-		"0: lr.w %[p], %[c]\n"
-		" bltz %[p], 1f\n"
-		" addi %[rc], %[p], 1\n"
-		" sc.w.rl %[rc], %[rc], %[c]\n"
-		" bnez %[rc], 0b\n"
-		" fence rw, rw\n"
-		"1:\n"
-		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
-		:
-		: "memory");
+	_arch_atomic_inc_unless_negative(prev, rc, v->counter, "w");
+
 	return !(prev < 0);
 }
 #define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative
 
+#define _arch_atomic_dec_unless_positive(_prev, _rc, counter, sfx) \
+({ \
+	__asm__ __volatile__ ( \
+		"0: lr." sfx " %[p], %[c]\n" \
+		" bgtz %[p], 1f\n" \
+		" addi %[rc], %[p], -1\n" \
+		" sc." sfx ".rl %[rc], %[rc], %[c]\n" \
+		" bnez %[rc], 0b\n" \
+		" fence rw, rw\n" \
+		"1:\n" \
+		: [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \
+		: \
+		: "memory"); \
+})
+
 static __always_inline bool arch_atomic_dec_unless_positive(atomic_t *v)
 {
 	int prev, rc;
 
-	__asm__ __volatile__ (
-		"0: lr.w %[p], %[c]\n"
-		" bgtz %[p], 1f\n"
-		" addi %[rc], %[p], -1\n"
-		" sc.w.rl %[rc], %[rc], %[c]\n"
-		" bnez %[rc], 0b\n"
-		" fence rw, rw\n"
-		"1:\n"
-		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
-		:
-		: "memory");
+	_arch_atomic_dec_unless_positive(prev, rc, v->counter, "w");
+
 	return !(prev > 0);
 }
 #define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive
 
+#define _arch_atomic_dec_if_positive(_prev, _rc, counter, sfx) \
+({ \
+	__asm__ __volatile__ ( \
+		"0: lr." sfx " %[p], %[c]\n" \
+		" addi %[rc], %[p], -1\n" \
+		" bltz %[rc], 1f\n" \
sfx ".rl %[rc], %[rc], %[c]\n" \ + " bnez %[rc], 0b\n" \ + " fence rw, rw\n" \ + "1:\n" \ + : [p]"=&r" (_prev), [rc]"=&r" (_rc), [c]"+A" (counter) \ + : \ + : "memory"); \ +}) + static __always_inline int arch_atomic_dec_if_positive(atomic_t *v) { int prev, rc; - __asm__ __volatile__ ( - "0: lr.w %[p], %[c]\n" - " addi %[rc], %[p], -1\n" - " bltz %[rc], 1f\n" - " sc.w.rl %[rc], %[rc], %[c]\n" - " bnez %[rc], 0b\n" - " fence rw, rw\n" - "1:\n" - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) - : - : "memory"); + _arch_atomic_dec_if_positive(prev, rc, v->counter, "w"); + return prev - 1; } @@ -304,17 +319,8 @@ static __always_inline bool arch_atomic64_inc_unless_negative(atomic64_t *v) s64 prev; long rc; - __asm__ __volatile__ ( - "0: lr.d %[p], %[c]\n" - " bltz %[p], 1f\n" - " addi %[rc], %[p], 1\n" - " sc.d.rl %[rc], %[rc], %[c]\n" - " bnez %[rc], 0b\n" - " fence rw, rw\n" - "1:\n" - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) - : - : "memory"); + _arch_atomic_inc_unless_negative(prev, rc, v->counter, "d"); + return !(prev < 0); } @@ -325,17 +331,8 @@ static __always_inline bool arch_atomic64_dec_unless_positive(atomic64_t *v) s64 prev; long rc; - __asm__ __volatile__ ( - "0: lr.d %[p], %[c]\n" - " bgtz %[p], 1f\n" - " addi %[rc], %[p], -1\n" - " sc.d.rl %[rc], %[rc], %[c]\n" - " bnez %[rc], 0b\n" - " fence rw, rw\n" - "1:\n" - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) - : - : "memory"); + _arch_atomic_dec_unless_positive(prev, rc, v->counter, "d"); + return !(prev > 0); } @@ -346,17 +343,8 @@ static __always_inline s64 arch_atomic64_dec_if_positive(atomic64_t *v) s64 prev; long rc; - __asm__ __volatile__ ( - "0: lr.d %[p], %[c]\n" - " addi %[rc], %[p], -1\n" - " bltz %[rc], 1f\n" - " sc.d.rl %[rc], %[rc], %[c]\n" - " bnez %[rc], 0b\n" - " fence rw, rw\n" - "1:\n" - : [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter) - : - : "memory"); + _arch_atomic_dec_if_positive(prev, rc, v->counter, "d"); + return prev - 1; } -- 2.43.0