Date: Mon, 6 Jun 2022 13:44:52 +0100
From: Mark Rutland
To: Alexander Lobakin
Cc: Arnd Bergmann, Yury Norov, Andy Shevchenko, Richard Henderson,
	Matt Turner, Brian Cain, Geert Uytterhoeven, Yoshinori Sato,
	Rich Felker, "David S. Miller", Kees Cook, "Peter Zijlstra (Intel)",
	Marco Elver, Borislav Petkov, Tony Luck, Greg Kroah-Hartman,
	linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/6] bitops: always define asm-generic non-atomic bitops
References: <20220606114908.962562-1-alexandr.lobakin@intel.com>
	<20220606114908.962562-3-alexandr.lobakin@intel.com>
In-Reply-To: <20220606114908.962562-3-alexandr.lobakin@intel.com>

On Mon, Jun 06, 2022 at 01:49:03PM +0200, Alexander Lobakin wrote:
> Move generic non-atomic bitops from the asm-generic header which
> gets included only when there are no architecture-specific
> alternatives, to a separate independent file to make them always
> available.
>
> Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
> ---
>  .../asm-generic/bitops/generic-non-atomic.h | 124 ++++++++++++++++++
>  include/asm-generic/bitops/non-atomic.h     | 109 ++-------------
>  2 files changed, 132 insertions(+), 101 deletions(-)
>  create mode 100644 include/asm-generic/bitops/generic-non-atomic.h
>
> diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
> new file mode 100644
> index 000000000000..7a60adfa6e7d
> --- /dev/null
> +++ b/include/asm-generic/bitops/generic-non-atomic.h
> @@ -0,0 +1,124 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
> +#define __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
> +
> +#include <linux/bits.h>
> +
> +#ifndef _LINUX_BITOPS_H
> +#error only <linux/bitops.h> can be included directly
> +#endif
> +
> +/*
> + * Generic definitions for bit operations, should not be used in regular code
> + * directly.
> + */
> +
> +/**
> + * gen___set_bit - Set a bit in memory
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + *
> + * Unlike set_bit(), this function is non-atomic and may be reordered.
> + * If it's called on the same region of memory simultaneously, the effect
> + * may be that only one operation succeeds.
> + */
> +static __always_inline void
> +gen___set_bit(unsigned int nr, volatile unsigned long *addr)

Could we please use 'generic' rather than 'gen' as the prefix? That'd
match what we did for the generic atomic_*() and atomic64_*() functions
in commits

* f8b6455a9d381fc5 ("locking/atomic: atomic: support ARCH_ATOMIC")
* 1bdadf46eff6804a ("locking/atomic: atomic64: support ARCH_ATOMIC")

... and it avoids any potential confusion with 'gen' meaning
'generated' or similar.

Thanks,
Mark.

> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +	*p |= mask;
> +}
> +
> +static __always_inline void
> +gen___clear_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +	*p &= ~mask;
> +}
> +
> +/**
> + * gen___change_bit - Toggle a bit in memory
> + * @nr: the bit to change
> + * @addr: the address to start counting from
> + *
> + * Unlike change_bit(), this function is non-atomic and may be reordered.
> + * If it's called on the same region of memory simultaneously, the effect
> + * may be that only one operation succeeds.
> + */
> +static __always_inline
> +void gen___change_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +	*p ^= mask;
> +}
> +
> +/**
> + * gen___test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail. You must protect multiple accesses with a lock.
> + */
> +static __always_inline int
> +gen___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +	unsigned long old = *p;
> +
> +	*p = old | mask;
> +	return (old & mask) != 0;
> +}
> +
> +/**
> + * gen___test_and_clear_bit - Clear a bit and return its old value
> + * @nr: Bit to clear
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail. You must protect multiple accesses with a lock.
> + */
> +static __always_inline int
> +gen___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +	unsigned long old = *p;
> +
> +	*p = old & ~mask;
> +	return (old & mask) != 0;
> +}
> +
> +/* WARNING: non atomic and it can be reordered! */
> +static __always_inline int
> +gen___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +	unsigned long old = *p;
> +
> +	*p = old ^ mask;
> +	return (old & mask) != 0;
> +}
> +
> +/**
> + * gen_test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + */
> +static __always_inline int
> +gen_test_bit(unsigned int nr, const volatile unsigned long *addr)
> +{
> +	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> +}
> +
> +#endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
> diff --git a/include/asm-generic/bitops/non-atomic.h b/include/asm-generic/bitops/non-atomic.h
> index 078cc68be2f1..7ce0ed22fb5f 100644
> --- a/include/asm-generic/bitops/non-atomic.h
> +++ b/include/asm-generic/bitops/non-atomic.h
> @@ -2,121 +2,28 @@
>  #ifndef _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
>  #define _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
>  
> +#include <asm-generic/bitops/generic-non-atomic.h>
>  #include <asm/types.h>
>  
> -/**
> - * arch___set_bit - Set a bit in memory
> - * @nr: the bit to set
> - * @addr: the address to start counting from
> - *
> - * Unlike set_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline void
> -arch___set_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -	*p |= mask;
> -}
> +#define arch___set_bit gen___set_bit
>  #define __set_bit arch___set_bit
>  
> -static __always_inline void
> -arch___clear_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -	*p &= ~mask;
> -}
> +#define arch___clear_bit gen___clear_bit
>  #define __clear_bit arch___clear_bit
>  
> -/**
> - * arch___change_bit - Toggle a bit in memory
> - * @nr: the bit to change
> - * @addr: the address to start counting from
> - *
> - * Unlike change_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline
> -void arch___change_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -	*p ^= mask;
> -}
> +#define arch___change_bit gen___change_bit
>  #define __change_bit arch___change_bit
>  
> -/**
> - * arch___test_and_set_bit - Set a bit and return its old value
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline int
> -arch___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -	unsigned long old = *p;
> -
> -	*p = old | mask;
> -	return (old & mask) != 0;
> -}
> +#define arch___test_and_set_bit gen___test_and_set_bit
>  #define __test_and_set_bit arch___test_and_set_bit
>  
> -/**
> - * arch___test_and_clear_bit - Clear a bit and return its old value
> - * @nr: Bit to clear
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline int
> -arch___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -	unsigned long old = *p;
> -
> -	*p = old & ~mask;
> -	return (old & mask) != 0;
> -}
> +#define arch___test_and_clear_bit gen___test_and_clear_bit
>  #define __test_and_clear_bit arch___test_and_clear_bit
>  
> -/* WARNING: non atomic and it can be reordered! */
> -static __always_inline int
> -arch___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -	unsigned long old = *p;
> -
> -	*p = old ^ mask;
> -	return (old & mask) != 0;
> -}
> +#define arch___test_and_change_bit gen___test_and_change_bit
>  #define __test_and_change_bit arch___test_and_change_bit
>  
> -/**
> - * arch_test_bit - Determine whether a bit is set
> - * @nr: bit number to test
> - * @addr: Address to start counting from
> - */
> -static __always_inline int
> -arch_test_bit(unsigned int nr, const volatile unsigned long *addr)
> -{
> -	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> -}
> +#define arch_test_bit gen_test_bit
>  #define test_bit arch_test_bit
>  
>  #endif /* _ASM_GENERIC_BITOPS_NON_ATOMIC_H_ */
> --
> 2.36.1
>
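For reference, a minimal sketch of the rename Mark suggests, assuming the
prefix simply changes from 'gen' to 'generic'. The 'generic___set_bit'
identifier below is an assumption extrapolated from this review, not a name
taken from any respin of the series; the function body is copied verbatim
from the patch above:

/*
 * Sketch only: 'generic___set_bit' is a hypothetical name following the
 * suggested 'generic' prefix; the body is unchanged from gen___set_bit().
 */
static __always_inline void
generic___set_bit(unsigned int nr, volatile unsigned long *addr)
{
	unsigned long mask = BIT_MASK(nr);
	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

	*p |= mask;
}

/* The asm-generic header would then alias the 'generic' names instead: */
#define arch___set_bit generic___set_bit
#define __set_bit arch___set_bit

This mirrors the naming convention of the two ARCH_ATOMIC commits Mark
cites, which used a 'generic' prefix for the fallback implementations.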