From: Boqun Feng <boqun.feng@gmail.com>
To: mingo@kernel.org, tglx@linutronix.de, hpa@zytor.com, linux-kernel@vger.kernel.org
Cc: Boqun Feng, Linus Torvalds, Mark Rutland, Peter Zijlstra, aryabinin@virtuozzo.com, catalin.marinas@arm.com, dvyukov@google.com, linux-arm-kernel@lists.infradead.org, will.deacon@arm.com
Subject: [PATCH v2] locking/atomics/powerpc: Move cmpxchg helpers to asm/cmpxchg.h and define the full set of cmpxchg APIs
Date: Mon, 7 May 2018 21:31:14 +0800
Message-Id: <20180507133114.10106-1-boqun.feng@gmail.com>
X-Mailer: git-send-email 2.16.2

Move PowerPC's __atomic_op_{acquire,release}() helpers from atomic.h to
cmpxchg.h (in arch/powerpc/include/asm), and use them to define these two
methods:

	#define cmpxchg_release(...)	__atomic_op_release(cmpxchg, __VA_ARGS__)
	#define cmpxchg64_release(...)	__atomic_op_release(cmpxchg64, __VA_ARGS__)

... the idea is to generate all these methods in cmpxchg.h and to define the
full set of atomic primitives there, including the cmpxchg_release() methods
which were previously supplied by the generic code.

Also define the atomic[64]_cmpxchg_release() variants explicitly.

This ensures that all these low-level cmpxchg APIs are defined in the PowerPC
headers, with no generic header fallbacks.

Also remove the duplicate definitions of atomic_xchg() and atomic64_xchg()
from asm/atomic.h: they can be generated by the generic atomic.h header from
the _relaxed() primitives. This helps PowerPC adopt the upcoming changes to
the generic atomic.h header.

No change in functionality or code generation.

Signed-off-by: Boqun Feng
Cc: Linus Torvalds
Cc: Mark Rutland
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: aryabinin@virtuozzo.com
Cc: catalin.marinas@arm.com
Cc: dvyukov@google.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: will.deacon@arm.com
---
v1 --> v2: removed the duplicate atomic*_xchg() definitions to prepare for
the upcoming generic atomic.h changes.

Ingo, I also removed the link and your SoB because I think your bot can add
them automatically.
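For reference, here is a rough expansion sketch (illustrative only, not part
of the patch; ptr, old and new are placeholder names) of what the newly
defined cmpxchg_release() turns into once the helper lives in asm/cmpxchg.h:

	/*
	 * cmpxchg_release(ptr, old, new) goes through
	 * __atomic_op_release(cmpxchg, ptr, old, new) and expands to:
	 */
	({
		/* PPC_RELEASE_BARRIER: lwsync, or sync on CPUs without lwsync */
		__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");
		cmpxchg_relaxed(ptr, old, new);	/* result of the expression */
	})

This is the same sequence the generic header previously generated from the
arch-provided helper, which is why there is no change in code generation.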
 arch/powerpc/include/asm/atomic.h  | 24 ++++--------------------
 arch/powerpc/include/asm/cmpxchg.h | 24 ++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 682b3e6a1e21..583837a41bcc 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -13,24 +13,6 @@

 #define ATOMIC_INIT(i)		{ (i) }

-/*
- * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
- * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
- * on the platform without lwsync.
- */
-#define __atomic_op_acquire(op, args...)				\
-({									\
-	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
-	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
-	__ret;								\
-})
-
-#define __atomic_op_release(op, args...)				\
-({									\
-	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");	\
-	op##_relaxed(args);						\
-})
-
 static __inline__ int atomic_read(const atomic_t *v)
 {
 	int t;
@@ -213,8 +195,9 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
 	cmpxchg_relaxed(&((v)->counter), (o), (n))
 #define atomic_cmpxchg_acquire(v, o, n) \
 	cmpxchg_acquire(&((v)->counter), (o), (n))
+#define atomic_cmpxchg_release(v, o, n) \
+	cmpxchg_release(&((v)->counter), (o), (n))

-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 #define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))

 /**
@@ -519,8 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
 	cmpxchg_relaxed(&((v)->counter), (o), (n))
 #define atomic64_cmpxchg_acquire(v, o, n) \
 	cmpxchg_acquire(&((v)->counter), (o), (n))
+#define atomic64_cmpxchg_release(v, o, n) \
+	cmpxchg_release(&((v)->counter), (o), (n))

-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 #define atomic64_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))

 /**
diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
index 9b001f1f6b32..e27a612b957f 100644
--- a/arch/powerpc/include/asm/cmpxchg.h
+++ b/arch/powerpc/include/asm/cmpxchg.h
@@ -8,6 +8,24 @@
 #include
 #include

+/*
+ * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
+ * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
+ * on the platform without lwsync.
+ */
+#define __atomic_op_acquire(op, args...)				\
+({									\
+	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
+	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
+	__ret;								\
+})
+
+#define __atomic_op_release(op, args...)				\
+({									\
+	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");	\
+	op##_relaxed(args);						\
+})
+
 #ifdef __BIG_ENDIAN
 #define BITOFF_CAL(size, off)	((sizeof(u32) - size - off) * BITS_PER_BYTE)
 #else
@@ -512,6 +530,9 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 			(unsigned long)_o_, (unsigned long)_n_,		\
 			sizeof(*(ptr)));				\
 })
+
+#define cmpxchg_release(...) __atomic_op_release(cmpxchg, __VA_ARGS__)
+
 #ifdef CONFIG_PPC64
 #define cmpxchg64(ptr, o, n)						\
   ({									\
@@ -533,6 +554,9 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg_acquire((ptr), (o), (n));				\
   })
+
+#define cmpxchg64_release(...) __atomic_op_release(cmpxchg64, __VA_ARGS__)
+
 #else
 #include
 #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
-- 
2.16.2
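A side note on why dropping the powerpc atomic_xchg()/atomic64_xchg()
definitions is safe: when an architecture provides only the _relaxed()
variant, the generic include/linux/atomic.h header synthesizes the fully
ordered one by wrapping it in barriers, roughly along these lines (a sketch
of the generic mechanism of this era, quoted from memory rather than from
this patch):

	/* generic fallback: build a fully ordered op out of op##_relaxed() */
	#define __atomic_op_fence(op, args...)				\
	({								\
		typeof(op##_relaxed(args)) __ret;			\
		smp_mb__before_atomic();				\
		__ret = op##_relaxed(args);				\
		smp_mb__after_atomic();					\
		__ret;							\
	})

	#ifndef atomic_xchg
	#define atomic_xchg(...)	__atomic_op_fence(atomic_xchg, __VA_ARGS__)
	#endif

so powerpc only needs to keep atomic_xchg_relaxed()/atomic64_xchg_relaxed()
and lets the generic header supply the ordered versions.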