Date: Sat, 5 May 2018 10:54:45 +0200
From: Ingo Molnar
To: Mark Rutland
Cc: Peter Zijlstra, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com, boqun.feng@gmail.com,
    catalin.marinas@arm.com, dvyukov@google.com, will.deacon@arm.com,
    Linus Torvalds, Andrew Morton, "Paul E. McKenney", Peter Zijlstra,
    Thomas Gleixner
McKenney" , Peter Zijlstra , Thomas Gleixner Subject: [PATCH] locking/atomics: Combine the atomic_andnot() and atomic64_andnot() API definitions Message-ID: <20180505085445.cmdnqh6xpnpfoqzb@gmail.com> References: <20180504173937.25300-1-mark.rutland@arm.com> <20180504173937.25300-2-mark.rutland@arm.com> <20180504180105.GS12217@hirez.programming.kicks-ass.net> <20180504180909.dnhfflibjwywnm4l@lakrids.cambridge.arm.com> <20180505081100.nsyrqrpzq2vd27bk@gmail.com> <20180505083635.622xmcvb42dw5xxh@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180505083635.622xmcvb42dw5xxh@gmail.com> User-Agent: NeoMutt/20170609 (1.8.3) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org * Ingo Molnar wrote: > Note that the simplest definition block is now: > > #ifndef atomic_cmpxchg_relaxed > # define atomic_cmpxchg_relaxed atomic_cmpxchg > # define atomic_cmpxchg_acquire atomic_cmpxchg > # define atomic_cmpxchg_release atomic_cmpxchg > #else > # ifndef atomic_cmpxchg > # define atomic_cmpxchg(...) __atomic_op_fence(atomic_cmpxchg, __VA_ARGS__) > # define atomic_cmpxchg_acquire(...) __atomic_op_acquire(atomic_cmpxchg, __VA_ARGS__) > # define atomic_cmpxchg_release(...) __atomic_op_release(atomic_cmpxchg, __VA_ARGS__) > # endif > #endif > > ... which is very readable! > > The total linecount reduction of the two patches is pretty significant as well: > > include/linux/atomic.h | 1063 ++++++++++++++++-------------------------------- > 1 file changed, 343 insertions(+), 720 deletions(-) BTW., I noticed two asymmetries while cleaning up this code: ==============> #ifdef atomic_andnot #ifndef atomic_fetch_andnot_relaxed # define atomic_fetch_andnot_relaxed atomic_fetch_andnot # define atomic_fetch_andnot_acquire atomic_fetch_andnot # define atomic_fetch_andnot_release atomic_fetch_andnot #else # ifndef atomic_fetch_andnot # define atomic_fetch_andnot(...) __atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__) # define atomic_fetch_andnot_acquire(...) __atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__) # define atomic_fetch_andnot_release(...) __atomic_op_release(atomic_fetch_andnot, __VA_ARGS__) # endif #endif #endif /* atomic_andnot */ ... #ifdef atomic64_andnot #ifndef atomic64_fetch_andnot_relaxed # define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot # define atomic64_fetch_andnot_acquire atomic64_fetch_andnot # define atomic64_fetch_andnot_release atomic64_fetch_andnot #else # ifndef atomic64_fetch_andnot # define atomic64_fetch_andnot(...) __atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__) # define atomic64_fetch_andnot_acquire(...) __atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__) # define atomic64_fetch_andnot_release(...) __atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__) # endif #endif #endif /* atomic64_andnot */ <============== Why do these two API groups have an outer condition, i.e.: #ifdef atomic_andnot ... #endif /* atomic_andnot */ ... #ifdef atomic64_andnot ... #endif /* atomic64_andnot */ because the base APIs themselves are optional and have a default implementation: #ifndef atomic_andnot ... #endif ... #ifndef atomic64_andnot ... #endif I think it's overall cleaner if we combine them into continous blocks, defining all variants of an API group in a single place: #ifdef atomic_andnot #else #endif etc. The patch below implements this. 
Thanks,

	Ingo

===================>
From f5efafa83af8c46b9e81b010b46caeeadb450179 Mon Sep 17 00:00:00 2001
From: Ingo Molnar
Date: Sat, 5 May 2018 10:46:41 +0200
Subject: [PATCH] locking/atomics: Combine the atomic_andnot() and
 atomic64_andnot() API definitions

The atomic_andnot() and atomic64_andnot() APIs are defined in four
separate groups spread out in the atomic.h header:

 #ifdef atomic_andnot
 ...
 #endif /* atomic_andnot */
 ...
 #ifndef atomic_andnot
 ...
 #endif
 ...
 #ifdef atomic64_andnot
 ...
 #endif /* atomic64_andnot */
 ...
 #ifndef atomic64_andnot
 ...
 #endif

Unify them into two groups:

 #ifdef atomic_andnot
 #else
 #endif

 ...

 #ifdef atomic64_andnot
 #else
 #endif

so that each API group is defined in a single place within the header.

Cc: Peter Zijlstra
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Paul E. McKenney
Cc: Will Deacon
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar
---
 include/linux/atomic.h | 72 +++++++++++++++++++++++++-------------------------
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 352ecc72d7f5..1176cf7c6f03 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -205,22 +205,6 @@
 # endif
 #endif
 
-#ifdef atomic_andnot
-
-#ifndef atomic_fetch_andnot_relaxed
-# define atomic_fetch_andnot_relaxed		atomic_fetch_andnot
-# define atomic_fetch_andnot_acquire		atomic_fetch_andnot
-# define atomic_fetch_andnot_release		atomic_fetch_andnot
-#else
-# ifndef atomic_fetch_andnot
-#  define atomic_fetch_andnot(...)		__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
-#  define atomic_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
-#  define atomic_fetch_andnot_release(...)	__atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
-# endif
-#endif
-
-#endif /* atomic_andnot */
-
 #ifndef atomic_fetch_xor_relaxed
 # define atomic_fetch_xor_relaxed	atomic_fetch_xor
 # define atomic_fetch_xor_acquire	atomic_fetch_xor
@@ -338,7 +322,22 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
 # define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)
 #endif
 
-#ifndef atomic_andnot
+#ifdef atomic_andnot
+
+#ifndef atomic_fetch_andnot_relaxed
+# define atomic_fetch_andnot_relaxed		atomic_fetch_andnot
+# define atomic_fetch_andnot_acquire		atomic_fetch_andnot
+# define atomic_fetch_andnot_release		atomic_fetch_andnot
+#else
+# ifndef atomic_fetch_andnot
+#  define atomic_fetch_andnot(...)		__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
+#  define atomic_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
+#  define atomic_fetch_andnot_release(...)	__atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
+# endif
+#endif
+
+#else /* !atomic_andnot: */
+
 static inline void atomic_andnot(int i, atomic_t *v)
 {
 	atomic_and(~i, v);
@@ -363,7 +362,8 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
 {
 	return atomic_fetch_and_release(~i, v);
 }
-#endif
+
+#endif /* !atomic_andnot */
 
 /**
  * atomic_inc_not_zero_hint - increment if not null
@@ -600,22 +600,6 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # endif
 #endif
 
-#ifdef atomic64_andnot
-
-#ifndef atomic64_fetch_andnot_relaxed
-# define atomic64_fetch_andnot_relaxed		atomic64_fetch_andnot
-# define atomic64_fetch_andnot_acquire		atomic64_fetch_andnot
-# define atomic64_fetch_andnot_release		atomic64_fetch_andnot
-#else
-# ifndef atomic64_fetch_andnot
-#  define atomic64_fetch_andnot(...)		__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
-#  define atomic64_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
-#  define atomic64_fetch_andnot_release(...)	__atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
-# endif
-#endif
-
-#endif /* atomic64_andnot */
-
 #ifndef atomic64_fetch_xor_relaxed
 # define atomic64_fetch_xor_relaxed	atomic64_fetch_xor
 # define atomic64_fetch_xor_acquire	atomic64_fetch_xor
@@ -672,7 +656,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 # define atomic64_try_cmpxchg_release	atomic64_try_cmpxchg
 #endif
 
-#ifndef atomic64_andnot
+#ifdef atomic64_andnot
+
+#ifndef atomic64_fetch_andnot_relaxed
+# define atomic64_fetch_andnot_relaxed		atomic64_fetch_andnot
+# define atomic64_fetch_andnot_acquire		atomic64_fetch_andnot
+# define atomic64_fetch_andnot_release		atomic64_fetch_andnot
+#else
+# ifndef atomic64_fetch_andnot
+#  define atomic64_fetch_andnot(...)		__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
+#  define atomic64_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
+#  define atomic64_fetch_andnot_release(...)	__atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
+# endif
+#endif
+
+#else /* !atomic64_andnot: */
+
 static inline void atomic64_andnot(long long i, atomic64_t *v)
 {
 	atomic64_and(~i, v);
@@ -697,7 +696,8 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v)
 {
 	return atomic64_fetch_and_release(~i, v);
 }
-#endif
+
+#endif /* !atomic64_andnot */
 
 #define atomic64_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic64_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
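Note that the default (fallback) implementations above define the andnot ops purely in terms of the corresponding and ops with a complemented mask: andnot(i, v) clears in *v exactly the bits that are set in i, and the fetch_* forms return the old value. A minimal user-space analogue of that fallback, using C11 <stdatomic.h> rather than the kernel API (the demo function name is made up for illustration):

  #include <stdatomic.h>
  #include <stdio.h>

  /* Same shape as the kernel fallback: fetch_and with the complemented mask. */
  static int fetch_andnot_demo(int i, atomic_int *v)
  {
  	return atomic_fetch_and(v, ~i);
  }

  int main(void)
  {
  	atomic_int v = 0xff;

  	/* Clear the low nibble; the old value is returned, as with atomic_fetch_andnot(). */
  	int old = fetch_andnot_demo(0x0f, &v);

  	printf("old=0x%x new=0x%x\n", old, atomic_load(&v));	/* old=0xff new=0xf0 */

  	return 0;
  }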