Date: Sun, 6 May 2018 16:15:52 +0200
From: Andrea Parri
To: Ingo Molnar
Cc: Mark Rutland, Peter Zijlstra, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com,
	boqun.feng@gmail.com, catalin.marinas@arm.com, dvyukov@google.com,
	will.deacon@arm.com, Linus Torvalds, Andrew Morton,
	"Paul E. McKenney", Peter Zijlstra, Thomas Gleixner
Subject: Re: [PATCH] locking/atomics: Combine the atomic_andnot() and atomic64_andnot() API definitions
Message-ID: <20180506141552.GA28937@andrea>
References: <20180504173937.25300-1-mark.rutland@arm.com>
 <20180504173937.25300-2-mark.rutland@arm.com>
 <20180504180105.GS12217@hirez.programming.kicks-ass.net>
 <20180504180909.dnhfflibjwywnm4l@lakrids.cambridge.arm.com>
 <20180505081100.nsyrqrpzq2vd27bk@gmail.com>
 <20180505083635.622xmcvb42dw5xxh@gmail.com>
 <20180505085445.cmdnqh6xpnpfoqzb@gmail.com>
In-Reply-To: <20180505085445.cmdnqh6xpnpfoqzb@gmail.com>
User-Agent: Mutt/1.5.24 (2015-08-30)

Hi Ingo,

> From f5efafa83af8c46b9e81b010b46caeeadb450179 Mon Sep 17 00:00:00 2001
> From: Ingo Molnar
> Date: Sat, 5 May 2018 10:46:41 +0200
> Subject: [PATCH] locking/atomics: Combine the atomic_andnot() and
>  atomic64_andnot() API definitions
>
> The atomic_andnot() and atomic64_andnot() are defined in 4 separate groups
> spred out in the atomic.h header:
>
> 	#ifdef atomic_andnot
> 	...
> 	#endif /* atomic_andnot */
> 	...
> 	#ifndef atomic_andnot
> 	...
> 	#endif
> 	...
> 	#ifdef atomic64_andnot
> 	...
> 	#endif /* atomic64_andnot */
> 	...
> 	#ifndef atomic64_andnot
> 	...
> 	#endif
>
> Combine them into unify them into two groups:

Nit: "Combine them into unify them into"

  Andrea

>
> 	#ifdef atomic_andnot
> 	#else
> 	#endif
>
> 	...
>
> 	#ifdef atomic64_andnot
> 	#else
> 	#endif
>
> So that one API group is defined in a single place within the header.
>
> Cc: Peter Zijlstra
> Cc: Linus Torvalds
> Cc: Andrew Morton
> Cc: Thomas Gleixner
> Cc: Paul E. McKenney
> Cc: Will Deacon
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Ingo Molnar
> ---
>  include/linux/atomic.h | 72 +++++++++++++++++++++++++-------------------------
>  1 file changed, 36 insertions(+), 36 deletions(-)
>
> diff --git a/include/linux/atomic.h b/include/linux/atomic.h
> index 352ecc72d7f5..1176cf7c6f03 100644
> --- a/include/linux/atomic.h
> +++ b/include/linux/atomic.h
> @@ -205,22 +205,6 @@
>  # endif
>  #endif
>  
> -#ifdef atomic_andnot
> -
> -#ifndef atomic_fetch_andnot_relaxed
> -# define atomic_fetch_andnot_relaxed		atomic_fetch_andnot
> -# define atomic_fetch_andnot_acquire		atomic_fetch_andnot
> -# define atomic_fetch_andnot_release		atomic_fetch_andnot
> -#else
> -# ifndef atomic_fetch_andnot
> -#  define atomic_fetch_andnot(...)		__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
> -#  define atomic_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
> -#  define atomic_fetch_andnot_release(...)	__atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
> -# endif
> -#endif
> -
> -#endif /* atomic_andnot */
> -
>  #ifndef atomic_fetch_xor_relaxed
>  # define atomic_fetch_xor_relaxed		atomic_fetch_xor
>  # define atomic_fetch_xor_acquire		atomic_fetch_xor
> @@ -338,7 +322,22 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
>  # define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
>  #endif
>  
> -#ifndef atomic_andnot
> +#ifdef atomic_andnot
> +
> +#ifndef atomic_fetch_andnot_relaxed
> +# define atomic_fetch_andnot_relaxed		atomic_fetch_andnot
> +# define atomic_fetch_andnot_acquire		atomic_fetch_andnot
> +# define atomic_fetch_andnot_release		atomic_fetch_andnot
> +#else
> +# ifndef atomic_fetch_andnot
> +#  define atomic_fetch_andnot(...)		__atomic_op_fence(atomic_fetch_andnot, __VA_ARGS__)
> +#  define atomic_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic_fetch_andnot, __VA_ARGS__)
> +#  define atomic_fetch_andnot_release(...)	__atomic_op_release(atomic_fetch_andnot, __VA_ARGS__)
> +# endif
> +#endif
> +
> +#else /* !atomic_andnot: */
> +
>  static inline void atomic_andnot(int i, atomic_t *v)
>  {
>  	atomic_and(~i, v);
> @@ -363,7 +362,8 @@ static inline int atomic_fetch_andnot_release(int i, atomic_t *v)
>  {
>  	return atomic_fetch_and_release(~i, v);
>  }
> -#endif
> +
> +#endif /* !atomic_andnot */
>  
>  /**
>   * atomic_inc_not_zero_hint - increment if not null
> @@ -600,22 +600,6 @@ static inline int atomic_dec_if_positive(atomic_t *v)
>  # endif
>  #endif
>  
> -#ifdef atomic64_andnot
> -
> -#ifndef atomic64_fetch_andnot_relaxed
> -# define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot
> -# define atomic64_fetch_andnot_acquire	atomic64_fetch_andnot
> -# define atomic64_fetch_andnot_release	atomic64_fetch_andnot
> -#else
> -# ifndef atomic64_fetch_andnot
> -#  define atomic64_fetch_andnot(...)		__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
> -#  define atomic64_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
> -#  define atomic64_fetch_andnot_release(...)	__atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
> -# endif
> -#endif
> -
> -#endif /* atomic64_andnot */
> -
>  #ifndef atomic64_fetch_xor_relaxed
>  # define atomic64_fetch_xor_relaxed	atomic64_fetch_xor
>  # define atomic64_fetch_xor_acquire	atomic64_fetch_xor
> @@ -672,7 +656,22 @@ static inline int atomic_dec_if_positive(atomic_t *v)
>  # define atomic64_try_cmpxchg_release	atomic64_try_cmpxchg
>  #endif
>  
> -#ifndef atomic64_andnot
> +#ifdef atomic64_andnot
> +
> +#ifndef atomic64_fetch_andnot_relaxed
> +# define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot
> +# define atomic64_fetch_andnot_acquire	atomic64_fetch_andnot
> +# define atomic64_fetch_andnot_release	atomic64_fetch_andnot
> +#else
> +# ifndef atomic64_fetch_andnot
> +#  define atomic64_fetch_andnot(...)		__atomic_op_fence(atomic64_fetch_andnot, __VA_ARGS__)
> +#  define atomic64_fetch_andnot_acquire(...)	__atomic_op_acquire(atomic64_fetch_andnot, __VA_ARGS__)
> +#  define atomic64_fetch_andnot_release(...)	__atomic_op_release(atomic64_fetch_andnot, __VA_ARGS__)
> +# endif
> +#endif
> +
> +#else /* !atomic64_andnot: */
> +
>  static inline void atomic64_andnot(long long i, atomic64_t *v)
>  {
>  	atomic64_and(~i, v);
> @@ -697,7 +696,8 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v
>  {
>  	return atomic64_fetch_and_release(~i, v);
>  }
> -#endif
> +
> +#endif /* !atomic64_andnot */
>  
>  #define atomic64_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
>  #define atomic64_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))