Date: Wed, 26 May 2021 11:24:35 -0000
From: "tip-bot2 for Mark Rutland"
Sender: tip-bot2@linutronix.de
Reply-To: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Mark Rutland, Boqun Feng, Brian Cain, Peter Zijlstra, Will Deacon,
    x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: locking/core] locking/atomic: hexagon: move to ARCH_ATOMIC
In-Reply-To: <20210525140232.53872-19-mark.rutland@arm.com>
References: <20210525140232.53872-19-mark.rutland@arm.com>
Message-ID: <162202827563.29796.11232325717918546855.tip-bot2@tip-bot2>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the locking/core branch of tip:

Commit-ID:     94b63eb6e131a7fe94f1c1eb8e10162931506176
Gitweb:        https://git.kernel.org/tip/94b63eb6e131a7fe94f1c1eb8e10162931506176
Author:        Mark Rutland
AuthorDate:    Tue, 25 May 2021 15:02:17 +01:00
Committer:     Peter Zijlstra
CommitterDate: Wed, 26 May 2021 13:20:51 +02:00

locking/atomic: hexagon: move to ARCH_ATOMIC

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be
possible to make significant cleanups to the atomics headers, and this
will make it much easier to generically enable atomic functionality
(e.g. debug logic in the instrumented wrappers).

As a step towards that, this patch migrates hexagon to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.

Signed-off-by: Mark Rutland
Cc: Boqun Feng
Cc: Brian Cain
Cc: Peter Zijlstra
Cc: Will Deacon
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20210525140232.53872-19-mark.rutland@arm.com
---
 arch/hexagon/Kconfig               |  1 +
 arch/hexagon/include/asm/atomic.h  | 28 ++++++++++++++--------------
 arch/hexagon/include/asm/cmpxchg.h |  4 ++--
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 44a4099..1368954 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -5,6 +5,7 @@ comment "Linux Kernel Configuration for Hexagon"
 config HEXAGON
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_NO_PREEMPT
 	# Other pending projects/to-do items.
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 4ab895d..6e94f8d 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -14,7 +14,7 @@
 
 /* Normal writes in our arch don't clear lock reservations */
 
-static inline void atomic_set(atomic_t *v, int new)
+static inline void arch_atomic_set(atomic_t *v, int new)
 {
 	asm volatile(
 		"1:	r6 = memw_locked(%0);\n"
@@ -26,26 +26,26 @@ static inline void atomic_set(atomic_t *v, int new)
 	);
 }
 
-#define atomic_set_release(v, i)	atomic_set((v), (i))
+#define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
 /**
- * atomic_read - reads a word, atomically
+ * arch_atomic_read - reads a word, atomically
  * @v: pointer to atomic value
  *
  * Assumes all word reads on our architecture are atomic.
  */
-#define	atomic_read(v)		READ_ONCE((v)->counter)
+#define	arch_atomic_read(v)	READ_ONCE((v)->counter)
 
 /**
- * atomic_xchg - atomic
+ * arch_atomic_xchg - atomic
  * @v: pointer to memory to change
  * @new: new value (technically passed in a register -- see xchg)
  */
-#define atomic_xchg(v, new)	(xchg(&((v)->counter), (new)))
+#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), (new)))
 
 /**
- * atomic_cmpxchg - atomic compare-and-exchange values
+ * arch_atomic_cmpxchg - atomic compare-and-exchange values
  * @v: pointer to value to change
  * @old: desired old value to match
  * @new: new value to put in
@@ -61,7 +61,7 @@ static inline void atomic_set(atomic_t *v, int new)
  *
  * "old" is "expected" old val, __oldval is actual old value
  */
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int __oldval;
 
@@ -81,7 +81,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 }
 
 #define ATOMIC_OP(op)						\
-static inline void atomic_##op(int i, atomic_t *v)		\
+static inline void arch_atomic_##op(int i, atomic_t *v)	\
 {								\
 	int output;						\
 								\
@@ -97,7 +97,7 @@ static inline void atomic_##op(int i, atomic_t *v)	\
 }								\
 
 #define ATOMIC_OP_RETURN(op)					\
-static inline int atomic_##op##_return(int i, atomic_t *v)	\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)	\
 {								\
 	int output;						\
 								\
@@ -114,7 +114,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)	\
 }
 
 #define ATOMIC_FETCH_OP(op)					\
-static inline int atomic_fetch_##op(int i, atomic_t *v)	\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 {								\
 	int output, val;					\
 								\
@@ -148,7 +148,7 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP
 
 /**
- * atomic_fetch_add_unless - add unless the number is a given value
+ * arch_atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer to value
  * @a: amount to add
  * @u: unless value is equal to u
@@ -157,7 +157,7 @@ ATOMIC_OPS(xor)
  *
  */
 
-static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int __oldval;
 	register int tmp;
@@ -180,6 +180,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	);
 	return __oldval;
 }
-#define atomic_fetch_add_unless atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
 #endif
diff --git a/arch/hexagon/include/asm/cmpxchg.h b/arch/hexagon/include/asm/cmpxchg.h
index 92b8a02..cdb705e 100644
--- a/arch/hexagon/include/asm/cmpxchg.h
+++ b/arch/hexagon/include/asm/cmpxchg.h
@@ -42,7 +42,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
 *  Atomically swap the contents of a register with memory.  Should be atomic
 *  between multiple CPU's and within interrupts on the same CPU.
 */
-#define xchg(ptr, v) ((__typeof__(*(ptr)))__xchg((unsigned long)(v), (ptr), \
+#define arch_xchg(ptr, v) ((__typeof__(*(ptr)))__xchg((unsigned long)(v), (ptr), \
 	sizeof(*(ptr))))
 
 /*
@@ -51,7 +51,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
 *  variable casting.
 */
 
-#define cmpxchg(ptr, old, new)				\
+#define arch_cmpxchg(ptr, old, new)			\
 ({							\
 	__typeof__(ptr) __ptr = (ptr);			\
 	__typeof__(*(ptr)) __old = (old);		\
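
Background, not part of the diff above: the commit message says the arch
code now provides arch_atomic_*() and "common code wraps these with
optional instrumentation to provide the regular functions". Below is a
minimal hand-written sketch of that wrapping pattern, assuming the
instrument_atomic_read()/instrument_atomic_write() hooks from
<linux/instrumented.h>; the real wrappers are generated into
include/asm-generic/atomic-instrumented.h and differ in detail.

#include <linux/instrumented.h>	/* instrument_atomic_{read,write}() */

/* Generic wrapper: run optional debug checks, then call the arch primitive. */
static __always_inline int atomic_read(const atomic_t *v)
{
	/* KASAN/KCSAN hooks; compile to nothing when instrumentation is off */
	instrument_atomic_read(v, sizeof(*v));
	/* arch-provided implementation, e.g. hexagon's READ_ONCE()-based one */
	return arch_atomic_read(v);
}

static __always_inline void atomic_set(atomic_t *v, int i)
{
	instrument_atomic_write(v, sizeof(*v));
	arch_atomic_set(v, i);
}

Once every architecture spells its primitives arch_atomic_*(), as hexagon
does after this patch, wrappers like these only need to exist once in
common code instead of being open-coded per architecture.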