Date: Fri, 10 Jun 2022 16:50:51 +0300
From: Andy Shevchenko
To: Alexander Lobakin
Cc: Arnd Bergmann, Yury Norov, Mark Rutland, Matt Turner, Brian Cain,
    Geert Uytterhoeven, Yoshinori Sato, Rich Felker, "David S. Miller",
Miller" , Kees Cook , "Peter Zijlstra (Intel)" , Marco Elver , Borislav Petkov , Tony Luck , Greg Kroah-Hartman , linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Subject: Re: [PATCH v2 2/6] bitops: always define asm-generic non-atomic bitops Message-ID: References: <20220610113427.908751-1-alexandr.lobakin@intel.com> <20220610113427.908751-3-alexandr.lobakin@intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20220610113427.908751-3-alexandr.lobakin@intel.com> Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo X-Spam-Status: No, score=-5.5 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, Jun 10, 2022 at 01:34:23PM +0200, Alexander Lobakin wrote: > Move generic non-atomic bitops from the asm-generic header which > gets included only when there are no architecture-specific > alternatives, to a separate independent file to make them always > available. Reviewed-by: Andy Shevchenko But see below. > Signed-off-by: Alexander Lobakin > --- > .../asm-generic/bitops/generic-non-atomic.h | 124 ++++++++++++++++++ > include/asm-generic/bitops/non-atomic.h | 109 ++------------- > 2 files changed, 132 insertions(+), 101 deletions(-) > create mode 100644 include/asm-generic/bitops/generic-non-atomic.h > > diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h > new file mode 100644 > index 000000000000..808bc4469886 > --- /dev/null > +++ b/include/asm-generic/bitops/generic-non-atomic.h > @@ -0,0 +1,124 @@ > +/* SPDX-License-Identifier: GPL-2.0-only */ > +#ifndef __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H > +#define __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H > + > +#include > + > +#ifndef _LINUX_BITOPS_H > +#error only can be included directly > +#endif > + > +/* > + * Generic definitions for bit operations, should not be used in regular code > + * directly. > + */ > + > +/** > + * generic___set_bit - Set a bit in memory > + * @nr: the bit to set > + * @addr: the address to start counting from > + * > + * Unlike set_bit(), this function is non-atomic and may be reordered. > + * If it's called on the same region of memory simultaneously, the effect > + * may be that only one operation succeeds. > + */ > +static __always_inline void > +generic___set_bit(unsigned int nr, volatile unsigned long *addr) > +{ > + unsigned long mask = BIT_MASK(nr); > + unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr); > + > + *p |= mask; > +} > + > +static __always_inline void > +generic___clear_bit(unsigned int nr, volatile unsigned long *addr) > +{ > + unsigned long mask = BIT_MASK(nr); > + unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr); > + > + *p &= ~mask; > +} > + > +/** > + * generic___change_bit - Toggle a bit in memory > + * @nr: the bit to change > + * @addr: the address to start counting from > + * > + * Unlike change_bit(), this function is non-atomic and may be reordered. 
> Signed-off-by: Alexander Lobakin
> ---
>  .../asm-generic/bitops/generic-non-atomic.h | 124 ++++++++++++++++++
>  include/asm-generic/bitops/non-atomic.h     | 109 ++-------------
>  2 files changed, 132 insertions(+), 101 deletions(-)
>  create mode 100644 include/asm-generic/bitops/generic-non-atomic.h
> 
> diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
> new file mode 100644
> index 000000000000..808bc4469886
> --- /dev/null
> +++ b/include/asm-generic/bitops/generic-non-atomic.h
> @@ -0,0 +1,124 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
> +#define __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
> +
> +#include <linux/bits.h>
> +
> +#ifndef _LINUX_BITOPS_H
> +#error only <linux/bitops.h> can be included directly
> +#endif
> +
> +/*
> + * Generic definitions for bit operations, should not be used in regular code
> + * directly.
> + */
> +
> +/**
> + * generic___set_bit - Set a bit in memory
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + *
> + * Unlike set_bit(), this function is non-atomic and may be reordered.
> + * If it's called on the same region of memory simultaneously, the effect
> + * may be that only one operation succeeds.
> + */
> +static __always_inline void
> +generic___set_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +        unsigned long mask = BIT_MASK(nr);
> +        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +        *p |= mask;
> +}
> +
> +static __always_inline void
> +generic___clear_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +        unsigned long mask = BIT_MASK(nr);
> +        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +        *p &= ~mask;
> +}
> +
> +/**
> + * generic___change_bit - Toggle a bit in memory
> + * @nr: the bit to change
> + * @addr: the address to start counting from
> + *
> + * Unlike change_bit(), this function is non-atomic and may be reordered.
> + * If it's called on the same region of memory simultaneously, the effect
> + * may be that only one operation succeeds.
> + */
> +static __always_inline
> +void generic___change_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +        unsigned long mask = BIT_MASK(nr);
> +        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +        *p ^= mask;
> +}
> +
> +/**
> + * generic___test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail. You must protect multiple accesses with a lock.
> + */
> +static __always_inline int
> +generic___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +        unsigned long mask = BIT_MASK(nr);
> +        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +        unsigned long old = *p;
> +
> +        *p = old | mask;
> +        return (old & mask) != 0;
> +}
> +
> +/**
> + * generic___test_and_clear_bit - Clear a bit and return its old value
> + * @nr: Bit to clear
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail. You must protect multiple accesses with a lock.
> + */
> +static __always_inline int
> +generic___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +        unsigned long mask = BIT_MASK(nr);
> +        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +        unsigned long old = *p;
> +
> +        *p = old & ~mask;
> +        return (old & mask) != 0;
> +}
> +
> +/* WARNING: non atomic and it can be reordered! */
> +static __always_inline int
> +generic___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +        unsigned long mask = BIT_MASK(nr);
> +        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +        unsigned long old = *p;
> +
> +        *p = old ^ mask;
> +        return (old & mask) != 0;
> +}
> +
> +/**
> + * generic_test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + */

Shouldn't we add, in this or in a separate patch, a big NOTE explaining
that this is actually atomic and must be kept as such?
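E.g. something along these lines (illustration only, reusing the
BIT_WORD()/BITS_PER_LONG macros from the model above); the point is that
the whole operation is a single volatile load of the affected word:

/*
 * Illustration of the property in question (hypothetical model, not
 * part of the patch): test_bit() must stay a single, non-torn read of
 * the word, which the volatile access provides.
 */
static inline int model_test_bit(unsigned int nr,
                                 const volatile unsigned long *addr)
{
        /* One volatile load; the compiler may not split or re-read it. */
        unsigned long word = addr[BIT_WORD(nr)];

        return 1UL & (word >> (nr % BITS_PER_LONG));
}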
> +static __always_inline int
> +generic_test_bit(unsigned int nr, const volatile unsigned long *addr)
> +{
> +        return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> +}
> +
> +#endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
> diff --git a/include/asm-generic/bitops/non-atomic.h b/include/asm-generic/bitops/non-atomic.h
> index 078cc68be2f1..a05bc090a6a3 100644
> --- a/include/asm-generic/bitops/non-atomic.h
> +++ b/include/asm-generic/bitops/non-atomic.h
> @@ -2,121 +2,28 @@
>  #ifndef _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
>  #define _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
>  
> +#include <asm-generic/bitops/generic-non-atomic.h>
>  #include <asm/types.h>
>  
> -/**
> - * arch___set_bit - Set a bit in memory
> - * @nr: the bit to set
> - * @addr: the address to start counting from
> - *
> - * Unlike set_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline void
> -arch___set_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -        unsigned long mask = BIT_MASK(nr);
> -        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -        *p |= mask;
> -}
> +#define arch___set_bit generic___set_bit
>  #define __set_bit arch___set_bit
>  
> -static __always_inline void
> -arch___clear_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -        unsigned long mask = BIT_MASK(nr);
> -        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -        *p &= ~mask;
> -}
> +#define arch___clear_bit generic___clear_bit
>  #define __clear_bit arch___clear_bit
>  
> -/**
> - * arch___change_bit - Toggle a bit in memory
> - * @nr: the bit to change
> - * @addr: the address to start counting from
> - *
> - * Unlike change_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline
> -void arch___change_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -        unsigned long mask = BIT_MASK(nr);
> -        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -        *p ^= mask;
> -}
> +#define arch___change_bit generic___change_bit
>  #define __change_bit arch___change_bit
>  
> -/**
> - * arch___test_and_set_bit - Set a bit and return its old value
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline int
> -arch___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -        unsigned long mask = BIT_MASK(nr);
> -        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -        unsigned long old = *p;
> -
> -        *p = old | mask;
> -        return (old & mask) != 0;
> -}
> +#define arch___test_and_set_bit generic___test_and_set_bit
>  #define __test_and_set_bit arch___test_and_set_bit
>  
> -/**
> - * arch___test_and_clear_bit - Clear a bit and return its old value
> - * @nr: Bit to clear
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline int
> -arch___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -        unsigned long mask = BIT_MASK(nr);
> -        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -        unsigned long old = *p;
> -
> -        *p = old & ~mask;
> -        return (old & mask) != 0;
> -}
> +#define arch___test_and_clear_bit generic___test_and_clear_bit
>  #define __test_and_clear_bit arch___test_and_clear_bit
>  
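For reference, the resulting three-level aliasing (__op() to arch___op()
to generic___op()) lets an architecture override a single helper while
the rest falls through to the generics; the arch_ level is, as I
understand it, the hook the instrumented wrappers build on. A made-up
userspace model of the pattern (again reusing the macros above):

/*
 * Made-up model of the aliasing pattern (illustration only): the
 * "arch" layer merely aliases the generic implementation, and callers
 * keep using the double-underscore name.
 */
static void generic_model___clear_bit(unsigned int nr, unsigned long *addr)
{
        addr[BIT_WORD(nr)] &= ~BIT_MASK(nr);
}

/* An "arch" with no special implementation just forwards. */
#define arch_model___clear_bit  generic_model___clear_bit
#define model___clear_bit       arch_model___clear_bit

/* Usage: model___clear_bit(0, map) ends up in the generic helper. */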
> -/* WARNING: non atomic and it can be reordered! */
> -static __always_inline int
> -arch___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -        unsigned long mask = BIT_MASK(nr);
> -        unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -        unsigned long old = *p;
> -
> -        *p = old ^ mask;
> -        return (old & mask) != 0;
> -}
> +#define arch___test_and_change_bit generic___test_and_change_bit
>  #define __test_and_change_bit arch___test_and_change_bit
>  
> -/**
> - * arch_test_bit - Determine whether a bit is set
> - * @nr: bit number to test
> - * @addr: Address to start counting from
> - */
> -static __always_inline int
> -arch_test_bit(unsigned int nr, const volatile unsigned long *addr)
> -{
> -        return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> -}
> +#define arch_test_bit generic_test_bit
>  #define test_bit arch_test_bit
>  
>  #endif /* _ASM_GENERIC_BITOPS_NON_ATOMIC_H_ */
> -- 
> 2.36.1
> 

-- 
With Best Regards,
Andy Shevchenko