From: Alexander Lobakin <alexandr.lobakin@intel.com>
To: Arnd Bergmann, Yury Norov
Cc: Alexander Lobakin, Andy Shevchenko, Mark Rutland, Matt Turner,
	Brian Cain, Geert Uytterhoeven, Yoshinori Sato, Rich Felker,
	"David S. Miller", Kees Cook, "Peter Zijlstra (Intel)", Marco Elver,
	Borislav Petkov, Tony Luck, Maciej Fijalkowski, Jesse Brandeburg,
	Greg Kroah-Hartman, Nathan Chancellor, Nick Desaulniers, Tom Rix,
	kernel test robot, linux-alpha@vger.kernel.org,
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-m68k@lists.linux-m68k.org, linux-sh@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-arch@vger.kernel.org,
	llvm@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: [PATCH v5 2/9] bitops: always define asm-generic non-atomic bitops
Date: Fri, 24 Jun 2022 14:13:06 +0200
Message-Id: <20220624121313.2382500-3-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220624121313.2382500-1-alexandr.lobakin@intel.com>
References: <20220624121313.2382500-1-alexandr.lobakin@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Miller" , Kees Cook , "Peter Zijlstra (Intel)" , Marco Elver , Borislav Petkov , Tony Luck , Maciej Fijalkowski , Jesse Brandeburg , Greg Kroah-Hartman , Nathan Chancellor , Nick Desaulniers , Tom Rix , kernel test robot , linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-arch@vger.kernel.org, llvm@lists.linux.dev, linux-kernel@vger.kernel.org Subject: [PATCH v5 2/9] bitops: always define asm-generic non-atomic bitops Date: Fri, 24 Jun 2022 14:13:06 +0200 Message-Id: <20220624121313.2382500-3-alexandr.lobakin@intel.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220624121313.2382500-1-alexandr.lobakin@intel.com> References: <20220624121313.2382500-1-alexandr.lobakin@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-5.0 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Move generic non-atomic bitops from the asm-generic header which gets included only when there are no architecture-specific alternatives, to a separate independent file to make them always available. Almost no actual code changes, only one comment added to generic_test_bit() saying that it's an atomic operation itself and thus `volatile` must always stay there with no cast-aways. Suggested-by: Andy Shevchenko # comment Suggested-by: Marco Elver # reference to kernel-doc Signed-off-by: Alexander Lobakin Reviewed-by: Andy Shevchenko Reviewed-by: Marco Elver --- .../asm-generic/bitops/generic-non-atomic.h | 130 ++++++++++++++++++ include/asm-generic/bitops/non-atomic.h | 110 ++------------- 2 files changed, 138 insertions(+), 102 deletions(-) create mode 100644 include/asm-generic/bitops/generic-non-atomic.h diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h new file mode 100644 index 000000000000..7226488810e5 --- /dev/null +++ b/include/asm-generic/bitops/generic-non-atomic.h @@ -0,0 +1,130 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#ifndef __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H +#define __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H + +#include + +#ifndef _LINUX_BITOPS_H +#error only can be included directly +#endif + +/* + * Generic definitions for bit operations, should not be used in regular code + * directly. + */ + +/** + * generic___set_bit - Set a bit in memory + * @nr: the bit to set + * @addr: the address to start counting from + * + * Unlike set_bit(), this function is non-atomic and may be reordered. + * If it's called on the same region of memory simultaneously, the effect + * may be that only one operation succeeds. 
 .../asm-generic/bitops/generic-non-atomic.h | 130 ++++++++++++++++++
 include/asm-generic/bitops/non-atomic.h     | 110 ++-------------
 2 files changed, 138 insertions(+), 102 deletions(-)
 create mode 100644 include/asm-generic/bitops/generic-non-atomic.h

diff --git a/include/asm-generic/bitops/generic-non-atomic.h b/include/asm-generic/bitops/generic-non-atomic.h
new file mode 100644
index 000000000000..7226488810e5
--- /dev/null
+++ b/include/asm-generic/bitops/generic-non-atomic.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
+#define __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H
+
+#include <linux/bits.h>
+
+#ifndef _LINUX_BITOPS_H
+#error only <linux/bitops.h> can be included directly
+#endif
+
+/*
+ * Generic definitions for bit operations, should not be used in regular code
+ * directly.
+ */
+
+/**
+ * generic___set_bit - Set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Unlike set_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
+ */
+static __always_inline void
+generic___set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
+
+	*p |= mask;
+}
+
+static __always_inline void
+generic___clear_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
+
+	*p &= ~mask;
+}
+
+/**
+ * generic___change_bit - Toggle a bit in memory
+ * @nr: the bit to change
+ * @addr: the address to start counting from
+ *
+ * Unlike change_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
+ */
+static __always_inline
+void generic___change_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
+
+	*p ^= mask;
+}
+
+/**
+ * generic___test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ */
+static __always_inline int
+generic___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
+	unsigned long old = *p;
+
+	*p = old | mask;
+	return (old & mask) != 0;
+}
+
+/**
+ * generic___test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ */
+static __always_inline int
+generic___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
+	unsigned long old = *p;
+
+	*p = old & ~mask;
+	return (old & mask) != 0;
+}
+
+/* WARNING: non atomic and it can be reordered! */
+static __always_inline int
+generic___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
+	unsigned long old = *p;
+
+	*p = old ^ mask;
+	return (old & mask) != 0;
+}
+
+/**
+ * generic_test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ */
+static __always_inline int
+generic_test_bit(unsigned int nr, const volatile unsigned long *addr)
+{
+	/*
+	 * Unlike the bitops with the '__' prefix above, this one *is* atomic,
+	 * so `volatile` must always stay here with no cast-aways. See
+	 * `Documentation/atomic_bitops.txt` for the details.
+	 */
+	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
+}
+
+#endif /* __ASM_GENERIC_BITOPS_GENERIC_NON_ATOMIC_H */
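[Editor's note] The kernel-doc above warns that if two of these RMW
operations race, "one can appear to succeed but actually fail": the
load, the bitwise update, and the store are three separate steps, so
concurrent callers can silently undo each other's work. A minimal
userspace sketch of that lost-update hazard, assuming a POSIX host
(build with `cc -O2 -pthread`); the bit numbers and iteration count are
arbitrary, and the deliberate data race is undefined behaviour, shown
only to make the hazard visible:

#include <pthread.h>
#include <stdio.h>

static volatile unsigned long word;	/* shared one-long "bitmap" */

static void *toggler(void *arg)
{
	unsigned long mask = 1UL << (unsigned long)arg;

	/* Non-atomic read-modify-write toggle, like generic___change_bit(). */
	for (long i = 0; i < 10000000; i++)
		word ^= mask;
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* Each thread toggles its own bit an even number of times... */
	pthread_create(&a, NULL, toggler, (void *)0UL);
	pthread_create(&b, NULL, toggler, (void *)1UL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* ...so a race-free run would always end with word == 0. */
	printf("final word: %#lx (0 if no update was lost)\n", word);
	return 0;
}

An atomic change_bit(), or a lock around each RMW, makes the result
deterministically 0; that is exactly the distinction the '__' prefix in
these helpers' names encodes.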
diff --git a/include/asm-generic/bitops/non-atomic.h b/include/asm-generic/bitops/non-atomic.h
index 078cc68be2f1..23d3abc1e10d 100644
--- a/include/asm-generic/bitops/non-atomic.h
+++ b/include/asm-generic/bitops/non-atomic.h
@@ -2,121 +2,27 @@
 #ifndef _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
 #define _ASM_GENERIC_BITOPS_NON_ATOMIC_H_
 
-#include <asm/types.h>
+#include <asm-generic/bitops/generic-non-atomic.h>
 
-/**
- * arch___set_bit - Set a bit in memory
- * @nr: the bit to set
- * @addr: the address to start counting from
- *
- * Unlike set_bit(), this function is non-atomic and may be reordered.
- * If it's called on the same region of memory simultaneously, the effect
- * may be that only one operation succeeds.
- */
-static __always_inline void
-arch___set_bit(unsigned int nr, volatile unsigned long *addr)
-{
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-
-	*p |= mask;
-}
+#define arch___set_bit generic___set_bit
 #define __set_bit arch___set_bit
 
-static __always_inline void
-arch___clear_bit(unsigned int nr, volatile unsigned long *addr)
-{
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-
-	*p &= ~mask;
-}
+#define arch___clear_bit generic___clear_bit
 #define __clear_bit arch___clear_bit
 
-/**
- * arch___change_bit - Toggle a bit in memory
- * @nr: the bit to change
- * @addr: the address to start counting from
- *
- * Unlike change_bit(), this function is non-atomic and may be reordered.
- * If it's called on the same region of memory simultaneously, the effect
- * may be that only one operation succeeds.
- */
-static __always_inline
-void arch___change_bit(unsigned int nr, volatile unsigned long *addr)
-{
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-
-	*p ^= mask;
-}
+#define arch___change_bit generic___change_bit
 #define __change_bit arch___change_bit
 
-/**
- * arch___test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail. You must protect multiple accesses with a lock.
- */
-static __always_inline int
-arch___test_and_set_bit(unsigned int nr, volatile unsigned long *addr)
-{
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old = *p;
-
-	*p = old | mask;
-	return (old & mask) != 0;
-}
+#define arch___test_and_set_bit generic___test_and_set_bit
 #define __test_and_set_bit arch___test_and_set_bit
 
-/**
- * arch___test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail. You must protect multiple accesses with a lock.
- */
-static __always_inline int
-arch___test_and_clear_bit(unsigned int nr, volatile unsigned long *addr)
-{
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old = *p;
-
-	*p = old & ~mask;
-	return (old & mask) != 0;
-}
+#define arch___test_and_clear_bit generic___test_and_clear_bit
 #define __test_and_clear_bit arch___test_and_clear_bit
 
-/* WARNING: non atomic and it can be reordered! */
-static __always_inline int
-arch___test_and_change_bit(unsigned int nr, volatile unsigned long *addr)
-{
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old = *p;
-
-	*p = old ^ mask;
-	return (old & mask) != 0;
-}
+#define arch___test_and_change_bit generic___test_and_change_bit
 #define __test_and_change_bit arch___test_and_change_bit
 
-/**
- * arch_test_bit - Determine whether a bit is set
- * @nr: bit number to test
- * @addr: Address to start counting from
- */
-static __always_inline int
-arch_test_bit(unsigned int nr, const volatile unsigned long *addr)
-{
-	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
-}
+#define arch_test_bit generic_test_bit
 #define test_bit arch_test_bit
 
 #endif /* _ASM_GENERIC_BITOPS_NON_ATOMIC_H_ */
-- 
2.36.1
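[Editor's note] The alias chain established above (__set_bit ->
arch___set_bit -> generic___set_bit, and likewise for the rest) is what
lets an architecture reuse the generic C implementations wholesale and
override only the operations it can accelerate. A standalone
preprocessor sketch of that dispatch pattern follows; my_arch_op() and
HAVE_MY_ARCH_OP are hypothetical names used only for this illustration:

#include <stdio.h>

/* "Generic" layer: always available, plain C. */
static inline void generic_op(unsigned long *p)
{
	*p |= 1UL;
}

#ifdef HAVE_MY_ARCH_OP
/* "Arch" layer: an architecture may supply an optimized version... */
static inline void my_arch_op(unsigned long *p)
{
	*p |= 1UL;	/* imagine hand-written asm here */
}
#define arch_op my_arch_op
#else
/* ...or simply alias the generic one, as this hunk does. */
#define arch_op generic_op
#endif

/* Public name that the rest of the code base uses. */
#define op arch_op

int main(void)
{
	unsigned long word = 0;

	op(&word);	/* resolves to generic_op() unless overridden */
	printf("%#lx\n", word);
	return 0;
}

Keeping the generic_*() symbols always visible, which is the point of
the new generic-non-atomic.h, also means an arch override can fall back
to them from inside its own implementation for the slow path.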