From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org, will@kernel.org, boqun.feng@gmail.com,
	peterz@infradead.org
Cc: aou@eecs.berkeley.edu, arnd@arndb.de, bcain@codeaurora.org,
	benh@kernel.crashing.org, chris@zankel.net, dalias@libc.org,
	davem@davemloft.net, deanbo422@gmail.com, deller@gmx.de,
	geert@linux-m68k.org, green.hu@gmail.com, guoren@kernel.org,
	ink@jurassic.park.msu.ru, James.Bottomley@HansenPartnership.com,
	jcmvbkbc@gmail.com, jonas@southpole.se, ley.foon.tan@intel.com,
	linux@armlinux.org.uk, mark.rutland@arm.com, mattst88@gmail.com,
	monstr@monstr.eu, mpe@ellerman.id.au, nickhu@andestech.com,
	palmer@dabbelt.com, paulus@samba.org,
	paul.walmsley@sifive.com, rth@twiddle.net, shorne@gmail.com,
	stefan.kristiansson@saunalahti.fi, tsbogend@alpha.franken.de,
	vgupta@synopsys.com, ysato@users.sourceforge.jp
Subject: [PATCH 25/33] locking/atomic: openrisc: move to ARCH_ATOMIC
Date: Mon, 10 May 2021 10:37:45 +0100
Message-Id: <20210510093753.40683-26-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210510093753.40683-1-mark.rutland@arm.com>
References: <20210510093753.40683-1-mark.rutland@arm.com>
List-ID: <linux-kernel.vger.kernel.org>

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates openrisc to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Will Deacon <will@kernel.org>
---
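Note for reviewers (not part of the commit): after this change the arch
header supplies only the arch_atomic_*() operations, and the plain
atomic_*() names come from the generic instrumented wrappers (in
include/asm-generic/atomic-instrumented.h as of this series), which run
the debug hooks and then delegate. The snippet below is a minimal
user-space sketch of that layering, illustrative only: it stubs the
instrumentation hook and stands in the GCC/Clang __atomic builtin for
the OpenRISC l.lwa/l.swa implementation, so it builds with any hosted C
compiler rather than being kernel code.

	#include <stdio.h>

	/* Toy counterpart of the kernel's atomic_t. */
	typedef struct { int counter; } atomic_t;

	/*
	 * "arch" layer: what this patch makes openrisc provide, modelled
	 * here with a compiler builtin instead of inline assembly.
	 */
	static inline int arch_atomic_add_return(int i, atomic_t *v)
	{
		return __atomic_add_fetch(&v->counter, i, __ATOMIC_SEQ_CST);
	}

	/*
	 * Stand-in for instrument_atomic_read_write(); in the kernel this
	 * is where KASAN/KCSAN checks on the accessed memory would run.
	 */
	static inline void instrument_atomic_read_write(void *p, unsigned int size)
	{
		(void)p;
		(void)size;
	}

	/* Generic layer: the regular name wraps the arch_*() version. */
	static inline int atomic_add_return(int i, atomic_t *v)
	{
		instrument_atomic_read_write(v, sizeof(*v));
		return arch_atomic_add_return(i, v);
	}

	int main(void)
	{
		atomic_t v = { .counter = 40 };

		printf("%d\n", atomic_add_return(2, &v));	/* prints 42 */
		return 0;
	}

Because every caller goes through the generic wrapper, instrumentation
can be enabled in exactly one place for all architectures once they all
define the arch_*() names, which is the point of the series.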
 arch/openrisc/Kconfig               |  1 +
 arch/openrisc/include/asm/atomic.h  | 42 +++++++++++++++++++++-------------------
 arch/openrisc/include/asm/cmpxchg.h |  4 ++--
 3 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index 591acc5990dc..8c50bc9674f5 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -7,6 +7,7 @@ config OPENRISC
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_SET_UNCACHED
 	select ARCH_HAS_DMA_CLEAR_UNCACHED
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index cb86970d3859..326167e4783a 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -13,7 +13,7 @@
 
 /* Atomically perform op with v->counter and i */
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)		\
 {									\
 	int tmp;							\
 									\
@@ -30,7 +30,7 @@ static inline void atomic_##op(int i, atomic_t *v)	\
 
 /* Atomically perform op with v->counter and i, return the result */
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)	\
 {									\
 	int tmp;							\
 									\
@@ -49,7 +49,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)	\
 
 /* Atomically perform op with v->counter and i, return orig v->counter */
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t *v)		\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	int tmp, old;							\
 									\
@@ -75,6 +75,8 @@ ATOMIC_FETCH_OP(and)
 ATOMIC_FETCH_OP(or)
 ATOMIC_FETCH_OP(xor)
 
+ATOMIC_OP(add)
+ATOMIC_OP(sub)
 ATOMIC_OP(and)
 ATOMIC_OP(or)
 ATOMIC_OP(xor)
@@ -83,16 +85,18 @@ ATOMIC_OP(xor)
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
-#define atomic_add_return	atomic_add_return
-#define atomic_sub_return	atomic_sub_return
-#define atomic_fetch_add	atomic_fetch_add
-#define atomic_fetch_sub	atomic_fetch_sub
-#define atomic_fetch_and	atomic_fetch_and
-#define atomic_fetch_or		atomic_fetch_or
-#define atomic_fetch_xor	atomic_fetch_xor
-#define atomic_and		atomic_and
-#define atomic_or		atomic_or
-#define atomic_xor		atomic_xor
+#define arch_atomic_add_return	arch_atomic_add_return
+#define arch_atomic_sub_return	arch_atomic_sub_return
+#define arch_atomic_fetch_add	arch_atomic_fetch_add
+#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
+#define arch_atomic_fetch_and	arch_atomic_fetch_and
+#define arch_atomic_fetch_or	arch_atomic_fetch_or
+#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
+#define arch_atomic_add		arch_atomic_add
+#define arch_atomic_sub		arch_atomic_sub
+#define arch_atomic_and		arch_atomic_and
+#define arch_atomic_or		arch_atomic_or
+#define arch_atomic_xor		arch_atomic_xor
 
 /*
  * Atomically add a to v->counter as long as v is not already u.
@@ -100,7 +104,7 @@ ATOMIC_OP(xor)
  *
  * This is often used through atomic_inc_not_zero()
  */
-static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int old, tmp;
 
@@ -119,14 +123,14 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 
 	return old;
 }
-#define atomic_fetch_add_unless	atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless	arch_atomic_fetch_add_unless
 
-#define atomic_read(v)			READ_ONCE((v)->counter)
-#define atomic_set(v,i)			WRITE_ONCE((v)->counter, (i))
+#define arch_atomic_read(v)		READ_ONCE((v)->counter)
+#define arch_atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
 
 #include <asm/cmpxchg.h>
 
-#define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
-#define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
+#define arch_atomic_xchg(ptr, v)	(arch_xchg(&(ptr)->counter, (v)))
+#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (old), (new)))
 
 #endif /* __ASM_OPENRISC_ATOMIC_H */
diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
index f9cd43a39d72..79fd16162ccb 100644
--- a/arch/openrisc/include/asm/cmpxchg.h
+++ b/arch/openrisc/include/asm/cmpxchg.h
@@ -132,7 +132,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	}
 }
 
-#define cmpxchg(ptr, o, n)						\
+#define arch_cmpxchg(ptr, o, n)						\
 ({									\
 	(__typeof__(*(ptr))) __cmpxchg((ptr),				\
 				       (unsigned long)(o),		\
@@ -161,7 +161,7 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long with,
 	}
 }
 
-#define xchg(ptr, with)							\
+#define arch_xchg(ptr, with)						\
 ({									\
 	(__typeof__(*(ptr))) __xchg((ptr),				\
 				    (unsigned long)(with),		\
-- 
2.11.0