From: Marco Elver
Date: Mon, 20 Jun 2022 11:49:10 +0200
Subject: Re: [PATCH v3 2/7] bitops: always define asm-generic non-atomic bitops
In-Reply-To: <20220617144031.2549432-3-alexandr.lobakin@intel.com>
References: <20220617144031.2549432-1-alexandr.lobakin@intel.com>
 <20220617144031.2549432-3-alexandr.lobakin@intel.com>
To: Alexander Lobakin
Cc: Arnd Bergmann, Yury Norov, Andy Shevchenko, Mark Rutland, Matt Turner,
 Brian Cain, Geert Uytterhoeven, Yoshinori Sato, Rich Felker,
 "David S. Miller",
 Kees Cook, "Peter Zijlstra (Intel)", Borislav Petkov, Tony Luck,
 Maciej Fijalkowski, Jesse Brandeburg, Greg Kroah-Hartman,
 linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

On Fri, 17 Jun 2022 at 19:19, Alexander Lobakin
<alexandr.lobakin@intel.com> wrote:
>
> Move generic non-atomic bitops from the asm-generic header which
> gets included only when there are no architecture-specific
> alternatives, to a separate independent file to make them always
> available.
> Almost no actual code changes, only one comment added to
> generic_test_bit() saying that it's an atomic operation itself
> and thus `volatile` must always stay there with no cast-aways.
>
> Suggested-by: Andy Shevchenko # comment
> Suggested-by: Marco Elver # reference to kernel-doc
> Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
> Reviewed-by: Andy Shevchenko

Reviewed-by: Marco Elver

> ---
>  .../asm-generic/bitops/generic-non-atomic.h | 130 ++++++++++++++++++
>  include/asm-generic/bitops/non-atomic.h     | 110 ++-------------
>  2 files changed, 138 insertions(+), 102 deletions(-)
>  create mode 100644 include/asm-generic/bitops/generic-non-atomic.h
>
> diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
> new file mode 100644
> index 000000000000..7226488810e5
> --- /dev/null
> +++ b/include/asm-generic/bitops/generic-non-atomic.h
> @@ -0,0 +1,130 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#ifndef __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
> +#define __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
> +
> +#include <linux/bits.h>
> +
> +#ifndef _LINUX_BITOPS_H
> +#error only <linux/bitops.h> can be included directly
> +#endif
> +
> +/*
> + * Generic definitions for bit operations, should not be used in regular code
> + * directly.
> + */
> +
> +/**
> + * generic___set_bit - Set a bit in memory
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + *
> + * Unlike set_bit(), this function is non-atomic and may be reordered.
> + * If it's called on the same region of memory simultaneously, the effect
> + * may be that only one operation succeeds.
> + */
> +static __always_inline void
> +generic___set_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +	*p |= mask;
> +}
> +
> +static __always_inline void
> +generic___clear_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +	*p &= ~mask;
> +}
> +
> +/**
> + * generic___change_bit - Toggle a bit in memory
> + * @nr: the bit to change
> + * @addr: the address to start counting from
> + *
> + * Unlike change_bit(), this function is non-atomic and may be reordered.
> + * If it's called on the same region of memory simultaneously, the effect
> + * may be that only one operation succeeds.
> + */
> +static __always_inline
> +void generic___change_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +
> +	*p ^= mask;
> +}
> +
> +/**
> + * generic___test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail. You must protect multiple accesses with a lock.
> + */
> +static __always_inline int
> +generic___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +	unsigned long old = *p;
> +
> +	*p = old | mask;
> +	return (old & mask) != 0;
> +}
> +
> +/**
> + * generic___test_and_clear_bit - Clear a bit and return its old value
> + * @nr: Bit to clear
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail. You must protect multiple accesses with a lock.
> + */
> +static __always_inline int
> +generic___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +	unsigned long old = *p;
> +
> +	*p = old & ~mask;
> +	return (old & mask) != 0;
> +}
> +
> +/* WARNING: non atomic and it can be reordered! */
> +static __always_inline int
> +generic___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
> +{
> +	unsigned long mask = BIT_MASK(nr);
> +	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> +	unsigned long old = *p;
> +
> +	*p = old ^ mask;
> +	return (old & mask) != 0;
> +}
> +
> +/**
> + * generic_test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + */
> +static __always_inline int
> +generic_test_bit(unsigned int nr, const volatile unsigned long *addr)
> +{
> +	/*
> +	 * Unlike the bitops with the '__' prefix above, this one *is* atomic,
> +	 * so `volatile` must always stay here with no cast-aways. See
> +	 * `Documentation/atomic_bitops.txt` for the details.
> +	 */
> +	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> +}
> +
> +#endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
> diff --git a/include/asm-generic/bitops/non-atomic.h b/include/asm-generic/bitops/non-atomic.h
> index 078cc68be2f1..23d3abc1e10d 100644
> --- a/include/asm-generic/bitops/non-atomic.h
> +++ b/include/asm-generic/bitops/non-atomic.h
> @@ -2,121 +2,27 @@
>  #ifndef _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
>  #define _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
>
> -#include <asm/types.h>
> +#include <asm-generic/bitops/generic-non-atomic.h>
>
> -/**
> - * arch___set_bit - Set a bit in memory
> - * @nr: the bit to set
> - * @addr: the address to start counting from
> - *
> - * Unlike set_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline void
> -arch___set_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -	*p |= mask;
> -}
> +#define arch___set_bit generic___set_bit
>  #define __set_bit arch___set_bit
>
> -static __always_inline void
> -arch___clear_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -	*p &= ~mask;
> -}
> +#define arch___clear_bit generic___clear_bit
>  #define __clear_bit arch___clear_bit
>
> -/**
> - * arch___change_bit - Toggle a bit in memory
> - * @nr: the bit to change
> - * @addr: the address to start counting from
> - *
> - * Unlike change_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline
> -void arch___change_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -
> -	*p ^= mask;
> -}
> +#define arch___change_bit generic___change_bit
>  #define __change_bit arch___change_bit
>
> -/**
> - * arch___test_and_set_bit - Set a bit and return its old value
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline int
> -arch___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -	unsigned long old = *p;
> -
> -	*p = old | mask;
> -	return (old & mask) != 0;
> -}
> +#define arch___test_and_set_bit generic___test_and_set_bit
>  #define __test_and_set_bit arch___test_and_set_bit
>
> -/**
> - * arch___test_and_clear_bit - Clear a bit and return its old value
> - * @nr: Bit to clear
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline int
> -arch___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -	unsigned long old = *p;
> -
> -	*p = old & ~mask;
> -	return (old & mask) != 0;
> -}
> +#define arch___test_and_clear_bit generic___test_and_clear_bit
>  #define __test_and_clear_bit arch___test_and_clear_bit
>
> -/* WARNING: non atomic and it can be reordered! */
> -static __always_inline int
> -arch___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
> -{
> -	unsigned long mask = BIT_MASK(nr);
> -	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
> -	unsigned long old = *p;
> -
> -	*p = old ^ mask;
> -	return (old & mask) != 0;
> -}
> +#define arch___test_and_change_bit generic___test_and_change_bit
>  #define __test_and_change_bit arch___test_and_change_bit
>
> -/**
> - * arch_test_bit - Determine whether a bit is set
> - * @nr: bit number to test
> - * @addr: Address to start counting from
> - */
> -static __always_inline int
> -arch_test_bit(unsigned int nr, const volatile unsigned long *addr)
> -{
> -	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
> -}
> +#define arch_test_bit generic_test_bit
>  #define test_bit arch_test_bit
>
>  #endif /* _ASM_GENERIC_BITOPS_NON_ATOMIC_H_ */
> --
> 2.36.1
>
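
A quick aside for readers following the thread: the BIT_MASK()/BIT_WORD()
index math that every helper above repeats is easy to sanity-check by hand.
A minimal sketch (the function and bitmap names here are illustrative, not
from the patch):

#include <linux/bits.h>		/* BIT_MASK(), BIT_WORD() */

/*
 * Illustrative only: on a 64-bit kernel, bit 70 of a bitmap lives in
 * word 70 / 64 == 1, at position 70 % 64 == 6 inside that word, so
 * BIT_WORD(70) == 1 and BIT_MASK(70) == 1UL << 6.
 */
static void set_bit70_example(unsigned long *bitmap)
{
	bitmap[BIT_WORD(70)] |= BIT_MASK(70);	/* bitmap[1] |= 0x40 */
}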
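
On the "You must protect multiple accesses with a lock" kernel-doc: since
__test_and_set_bit() compiles to a plain load + store, two racing callers
can both observe the bit clear and both believe they "won". A hedged sketch
of the kind of exclusion the comment asks for (the lock, bitmap, and
function names are hypothetical, not part of the patch):

#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(claim_lock);	/* hypothetical lock */
static unsigned long claim_map[4];	/* hypothetical bitmap */

/* Take the lock around the non-atomic RMW to restore atomicity. */
static bool claim_slot(unsigned int nr)
{
	bool already_set;

	spin_lock(&claim_lock);
	already_set = __test_and_set_bit(nr, claim_map);
	spin_unlock(&claim_lock);

	return !already_set;	/* true iff we claimed the slot */
}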
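
And on the new generic_test_bit() comment: the `volatile` qualifier is what
keeps a polling loop honest. A small sketch (again illustrative, assuming a
hypothetical wait_for_ready() caller):

#include <linux/bitops.h>
#include <linux/processor.h>	/* cpu_relax() */

/*
 * test_bit() reads through a const volatile pointer, so the compiler
 * must re-load the word on every iteration; were the qualifier cast
 * away, the load could be hoisted out of the loop and the CPU would
 * spin forever on a stale value.
 */
static void wait_for_ready(const volatile unsigned long *status)
{
	while (!test_bit(0, status))
		cpu_relax();
}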