Date: Fri, 31 May 2019 17:01:21 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: Marco Elver <elver@google.com>
Cc: peterz@infradead.org, aryabinin@virtuozzo.com, dvyukov@google.com,
        glider@google.com, andreyknvl@google.com, hpa@zytor.com, corbet@lwn.net,
        tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
        arnd@arndb.de, jpoimboe@redhat.com, linux-doc@vger.kernel.org,
        linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
        kasan-dev@googlegroups.com
Subject: Re: [PATCH v3 3/3] asm-generic, x86: Add bitops instrumentation for KASAN
Message-ID: <20190531160120.GB2646@lakrids.cambridge.arm.com>
References: <20190531150828.157832-1-elver@google.com>
 <20190531150828.157832-4-elver@google.com>
In-Reply-To: <20190531150828.157832-4-elver@google.com>

On Fri, May 31, 2019 at 05:08:31PM +0200, Marco Elver wrote:
> This adds a new header to asm-generic to allow optionally instrumenting
> architecture-specific asm implementations of bitops.
>
> This change includes the required change for x86 as reference and
> changes the kernel API doc to point to bitops-instrumented.h instead.
> Rationale: the functions in x86's bitops.h are no longer the kernel API
> functions, but instead the arch_ prefixed functions, which are then
> instrumented via bitops-instrumented.h.
>
> Other architectures can similarly add support for asm implementations of
> bitops.
>
> The documentation text was derived from x86 and existing bitops
> asm-generic versions: 1) references to x86 have been removed; 2) as a
> result, some of the text had to be reworded for clarity and consistency.
>
> Tested: using lib/test_kasan with bitops tests (pre-requisite patch).
>
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=198439
> Signed-off-by: Marco Elver <elver@google.com>
> ---
> Changes in v3:
> * Remove references to 'x86' in API documentation; as a result, had to
>   reword doc text for clarity and consistency.
> * Remove #ifdef, since it is assumed that if asm-generic bitops
>   implementations are used, bitops-instrumented.h is not needed.

Thanks for sorting this out. FWIW:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

>
> Changes in v2:
> * Instrument word-sized accesses, as specified by the interface.
> ---
>  Documentation/core-api/kernel-api.rst     |   2 +-
>  arch/x86/include/asm/bitops.h             | 189 ++++------------
>  include/asm-generic/bitops-instrumented.h | 263 ++++++++++++++++++++++
>  3 files changed, 302 insertions(+), 152 deletions(-)
>  create mode 100644 include/asm-generic/bitops-instrumented.h
>
> diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
> index a29c99d13331..65266fa1b706 100644
> --- a/Documentation/core-api/kernel-api.rst
> +++ b/Documentation/core-api/kernel-api.rst
> @@ -51,7 +51,7 @@ The Linux kernel provides more basic utility functions.
>  Bit Operations
>  --------------
>
> -.. kernel-doc:: arch/x86/include/asm/bitops.h
> +.. kernel-doc:: include/asm-generic/bitops-instrumented.h
>     :internal:
>
>  Bitmap Operations
> diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> index 8e790ec219a5..ba15d53c1ca7 100644
> --- a/arch/x86/include/asm/bitops.h
> +++ b/arch/x86/include/asm/bitops.h
> @@ -49,23 +49,8 @@
>  #define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
>  #define CONST_MASK(nr)			(1 << ((nr) & 7))
>
> -/**
> - * set_bit - Atomically set a bit in memory
> - * @nr: the bit to set
> - * @addr: the address to start counting from
> - *
> - * This function is atomic and may not be reordered. See __set_bit()
> - * if you do not require the atomic guarantees.
> - *
> - * Note: there are no guarantees that this function will not be reordered
> - * on non x86 architectures, so if you are writing portable code,
> - * make sure not to rely on its reordering guarantees.
> - *
> - * Note that @nr may be almost arbitrarily large; this function is not
> - * restricted to acting on a single-word quantity.
> - */
>  static __always_inline void
> -set_bit(long nr, volatile unsigned long *addr)
> +arch_set_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (IS_IMMEDIATE(nr)) {
>  		asm volatile(LOCK_PREFIX "orb %1,%0"
> @@ -78,32 +63,14 @@ set_bit(long nr, volatile unsigned long *addr)
>  	}
>  }
>
> -/**
> - * __set_bit - Set a bit in memory
> - * @nr: the bit to set
> - * @addr: the address to start counting from
> - *
> - * Unlike set_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
> +static __always_inline void
> +arch___set_bit(long nr, volatile unsigned long *addr)
>  {
>  	asm volatile(__ASM_SIZE(bts) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
>  }
>
> -/**
> - * clear_bit - Clears a bit in memory
> - * @nr: Bit to clear
> - * @addr: Address to start counting from
> - *
> - * clear_bit() is atomic and may not be reordered. However, it does
> - * not contain a memory barrier, so if it is used for locking purposes,
> - * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
> - * in order to ensure changes are visible on other processors.
> - */
>  static __always_inline void
> -clear_bit(long nr, volatile unsigned long *addr)
> +arch_clear_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (IS_IMMEDIATE(nr)) {
>  		asm volatile(LOCK_PREFIX "andb %1,%0"
> @@ -115,26 +82,21 @@ clear_bit(long nr, volatile unsigned long *addr)
>  	}
>  }
>
> -/*
> - * clear_bit_unlock - Clears a bit in memory
> - * @nr: Bit to clear
> - * @addr: Address to start counting from
> - *
> - * clear_bit() is atomic and implies release semantics before the memory
> - * operation. It can be used for an unlock.
> - */
> -static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
> +static __always_inline void
> +arch_clear_bit_unlock(long nr, volatile unsigned long *addr)
>  {
>  	barrier();
> -	clear_bit(nr, addr);
> +	arch_clear_bit(nr, addr);
>  }
>
> -static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
> +static __always_inline void
> +arch___clear_bit(long nr, volatile unsigned long *addr)
>  {
>  	asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
>  }
>
> -static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch_clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
>  {
>  	bool negative;
>  	asm volatile(LOCK_PREFIX "andb %2,%1"
> @@ -143,48 +105,23 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
>  		: "ir" ((char) ~(1 << nr)) : "memory");
>  	return negative;
>  }
> +#define arch_clear_bit_unlock_is_negative_byte \
> +	arch_clear_bit_unlock_is_negative_byte
>
> -// Let everybody know we have it
> -#define clear_bit_unlock_is_negative_byte clear_bit_unlock_is_negative_byte
> -
> -/*
> - * __clear_bit_unlock - Clears a bit in memory
> - * @nr: Bit to clear
> - * @addr: Address to start counting from
> - *
> - * __clear_bit() is non-atomic and implies release semantics before the memory
> - * operation. It can be used for an unlock if no other CPUs can concurrently
> - * modify other bits in the word.
> - */
> -static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
> +static __always_inline void
> +arch___clear_bit_unlock(long nr, volatile unsigned long *addr)
>  {
> -	__clear_bit(nr, addr);
> +	arch___clear_bit(nr, addr);
>  }
>
> -/**
> - * __change_bit - Toggle a bit in memory
> - * @nr: the bit to change
> - * @addr: the address to start counting from
> - *
> - * Unlike change_bit(), this function is non-atomic and may be reordered.
> - * If it's called on the same region of memory simultaneously, the effect
> - * may be that only one operation succeeds.
> - */
> -static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
> +static __always_inline void
> +arch___change_bit(long nr, volatile unsigned long *addr)
>  {
>  	asm volatile(__ASM_SIZE(btc) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
>  }
>
> -/**
> - * change_bit - Toggle a bit in memory
> - * @nr: Bit to change
> - * @addr: Address to start counting from
> - *
> - * change_bit() is atomic and may not be reordered.
> - * Note that @nr may be almost arbitrarily large; this function is not
> - * restricted to acting on a single-word quantity.
> - */
> -static __always_inline void change_bit(long nr, volatile unsigned long *addr)
> +static __always_inline void
> +arch_change_bit(long nr, volatile unsigned long *addr)
>  {
>  	if (IS_IMMEDIATE(nr)) {
>  		asm volatile(LOCK_PREFIX "xorb %1,%0"
> @@ -196,42 +133,20 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
>  	}
>  }
>
> -/**
> - * test_and_set_bit - Set a bit and return its old value
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This operation is atomic and cannot be reordered.
> - * It also implies a memory barrier.
> - */
> -static __always_inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch_test_and_set_bit(long nr, volatile unsigned long *addr)
>  {
>  	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(bts), *addr, c, "Ir", nr);
>  }
>
> -/**
> - * test_and_set_bit_lock - Set a bit and return its old value for lock
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This is the same as test_and_set_bit on x86.
> - */
>  static __always_inline bool
> -test_and_set_bit_lock(long nr, volatile unsigned long *addr)
> +arch_test_and_set_bit_lock(long nr, volatile unsigned long *addr)
>  {
> -	return test_and_set_bit(nr, addr);
> +	return arch_test_and_set_bit(nr, addr);
>  }
>
> -/**
> - * __test_and_set_bit - Set a bit and return its old value
> - * @nr: Bit to set
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - */
> -static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch___test_and_set_bit(long nr, volatile unsigned long *addr)
>  {
>  	bool oldbit;
>
> @@ -242,28 +157,13 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
>  	return oldbit;
>  }
>
> -/**
> - * test_and_clear_bit - Clear a bit and return its old value
> - * @nr: Bit to clear
> - * @addr: Address to count from
> - *
> - * This operation is atomic and cannot be reordered.
> - * It also implies a memory barrier.
> - */
> -static __always_inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch_test_and_clear_bit(long nr, volatile unsigned long *addr)
>  {
>  	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btr), *addr, c, "Ir", nr);
>  }
>
> -/**
> - * __test_and_clear_bit - Clear a bit and return its old value
> - * @nr: Bit to clear
> - * @addr: Address to count from
> - *
> - * This operation is non-atomic and can be reordered.
> - * If two examples of this operation race, one can appear to succeed
> - * but actually fail. You must protect multiple accesses with a lock.
> - *
> +/*
>   * Note: the operation is performed atomically with respect to
>   * the local CPU, but not other CPUs. Portable code should not
>   * rely on this behaviour.
> @@ -271,7 +171,8 @@ static __always_inline bool test_and_clear_bit(long nr, volatile unsigned long *
>   * accessed from a hypervisor on the same CPU if running in a VM: don't change
>   * this without also updating arch/x86/kernel/kvm.c
>   */
> -static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch___test_and_clear_bit(long nr, volatile unsigned long *addr)
>  {
>  	bool oldbit;
>
> @@ -282,8 +183,8 @@ static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long *
>  	return oldbit;
>  }
>
> -/* WARNING: non atomic and it can be reordered! */
> -static __always_inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch___test_and_change_bit(long nr, volatile unsigned long *addr)
>  {
>  	bool oldbit;
>
> @@ -295,15 +196,8 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
>  	return oldbit;
>  }
>
> -/**
> - * test_and_change_bit - Change a bit and return its old value
> - * @nr: Bit to change
> - * @addr: Address to count from
> - *
> - * This operation is atomic and cannot be reordered.
> - * It also implies a memory barrier.
> - */
> -static __always_inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
> +static __always_inline bool
> +arch_test_and_change_bit(long nr, volatile unsigned long *addr)
>  {
>  	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btc), *addr, c, "Ir", nr);
>  }
> @@ -326,16 +220,7 @@ static __always_inline bool variable_test_bit(long nr, volatile const unsigned l
>  	return oldbit;
>  }
>
> -#if 0 /* Fool kernel-doc since it doesn't do macros yet */
> -/**
> - * test_bit - Determine whether a bit is set
> - * @nr: bit number to test
> - * @addr: Address to start counting from
> - */
> -static bool test_bit(int nr, const volatile unsigned long *addr);
> -#endif
> -
> -#define test_bit(nr, addr)			\
> +#define arch_test_bit(nr, addr)			\
>  	(__builtin_constant_p((nr))		\
>  	 ? constant_test_bit((nr), (addr))	\
>  	 : variable_test_bit((nr), (addr)))
> @@ -504,6 +389,8 @@ static __always_inline int fls64(__u64 x)
>
>  #include <asm-generic/bitops/const_hweight.h>
>
> +#include <asm-generic/bitops-instrumented.h>
> +
>  #include <asm-generic/bitops/le.h>
>
>  #include <asm-generic/bitops/ext2-atomic-setbit.h>
> diff --git a/include/asm-generic/bitops-instrumented.h b/include/asm-generic/bitops-instrumented.h
> new file mode 100644
> index 000000000000..ddd1c6d9d8db
> --- /dev/null
> +++ b/include/asm-generic/bitops-instrumented.h
> @@ -0,0 +1,263 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * This file provides wrappers with sanitizer instrumentation for bit
> + * operations.
> + *
> + * To use this functionality, an arch's bitops.h file needs to define each of
> + * the below bit operations with an arch_ prefix (e.g. arch_set_bit(),
> + * arch___set_bit(), etc.).
> + */
> +#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_H
> +#define _ASM_GENERIC_BITOPS_INSTRUMENTED_H
> +
> +#include <linux/kasan-checks.h>
> +
> +/**
> + * set_bit - Atomically set a bit in memory
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + *
> + * This is a relaxed atomic operation (no implied memory barriers).
> + *
> + * Note that @nr may be almost arbitrarily large; this function is not
> + * restricted to acting on a single-word quantity.
> + */
> +static inline void set_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch_set_bit(nr, addr);
> +}
> +
> +/**
> + * __set_bit - Set a bit in memory
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + *
> + * Unlike set_bit(), this function is non-atomic. If it is called on the same
> + * region of memory concurrently, the effect may be that only one operation
> + * succeeds.
> + */
> +static inline void __set_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch___set_bit(nr, addr);
> +}
> +
> +/**
> + * clear_bit - Clears a bit in memory
> + * @nr: Bit to clear
> + * @addr: Address to start counting from
> + *
> + * This is a relaxed atomic operation (no implied memory barriers).
> + */
> +static inline void clear_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch_clear_bit(nr, addr);
> +}
> +
> +/**
> + * __clear_bit - Clears a bit in memory
> + * @nr: the bit to clear
> + * @addr: the address to start counting from
> + *
> + * Unlike clear_bit(), this function is non-atomic. If it is called on the same
> + * region of memory concurrently, the effect may be that only one operation
> + * succeeds.
> + */
> +static inline void __clear_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch___clear_bit(nr, addr);
> +}
> +
> +/**
> + * clear_bit_unlock - Clear a bit in memory, for unlock
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + *
> + * This operation is atomic and provides release barrier semantics.
> + */
> +static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch_clear_bit_unlock(nr, addr);
> +}
> +
> +/**
> + * __clear_bit_unlock - Clears a bit in memory
> + * @nr: Bit to clear
> + * @addr: Address to start counting from
> + *
> + * This is a non-atomic operation but implies a release barrier before the
> + * memory operation. It can be used for an unlock if no other CPUs can
> + * concurrently modify other bits in the word.
> + */
> +static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch___clear_bit_unlock(nr, addr);
> +}
> +
> +/**
> + * change_bit - Toggle a bit in memory
> + * @nr: Bit to change
> + * @addr: Address to start counting from
> + *
> + * This is a relaxed atomic operation (no implied memory barriers).
> + *
> + * Note that @nr may be almost arbitrarily large; this function is not
> + * restricted to acting on a single-word quantity.
> + */
> +static inline void change_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch_change_bit(nr, addr);
> +}
> +
> +/**
> + * __change_bit - Toggle a bit in memory
> + * @nr: the bit to change
> + * @addr: the address to start counting from
> + *
> + * Unlike change_bit(), this function is non-atomic. If it is called on the same
> + * region of memory concurrently, the effect may be that only one operation
> + * succeeds.
> + */
> +static inline void __change_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	arch___change_bit(nr, addr);
> +}
> +
> +/**
> + * test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This is an atomic fully-ordered operation (implied full memory barrier).
> + */
> +static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch_test_and_set_bit(nr, addr);
> +}
> +
> +/**
> + * __test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic. If two instances of this operation race, one
> + * can appear to succeed but actually fail.
> + */
> +static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch___test_and_set_bit(nr, addr);
> +}
> +
> +/**
> + * test_and_set_bit_lock - Set a bit and return its old value, for lock
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is atomic and provides acquire barrier semantics if
> + * the returned value is 0.
> + * It can be used to implement bit locks.
> + */
> +static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch_test_and_set_bit_lock(nr, addr);
> +}
> +
> +/**
> + * test_and_clear_bit - Clear a bit and return its old value
> + * @nr: Bit to clear
> + * @addr: Address to count from
> + *
> + * This is an atomic fully-ordered operation (implied full memory barrier).
> + */
> +static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch_test_and_clear_bit(nr, addr);
> +}
> +
> +/**
> + * __test_and_clear_bit - Clear a bit and return its old value
> + * @nr: Bit to clear
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic. If two instances of this operation race, one
> + * can appear to succeed but actually fail.
> + */
> +static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch___test_and_clear_bit(nr, addr);
> +}
> +
> +/**
> + * test_and_change_bit - Change a bit and return its old value
> + * @nr: Bit to change
> + * @addr: Address to count from
> + *
> + * This is an atomic fully-ordered operation (implied full memory barrier).
> + */
> +static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch_test_and_change_bit(nr, addr);
> +}
> +
> +/**
> + * __test_and_change_bit - Change a bit and return its old value
> + * @nr: Bit to change
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic. If two instances of this operation race, one
> + * can appear to succeed but actually fail.
> + */
> +static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch___test_and_change_bit(nr, addr);
> +}
> +
> +/**
> + * test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + */
> +static inline bool test_bit(long nr, const volatile unsigned long *addr)
> +{
> +	kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
> +	return arch_test_bit(nr, addr);
> +}
> +
> +#if defined(arch_clear_bit_unlock_is_negative_byte)
> +/**
> + * clear_bit_unlock_is_negative_byte - Clear a bit in memory and test if bottom
> + *                                     byte is negative, for unlock.
> + * @nr: the bit to clear
> + * @addr: the address to start counting from
> + *
> + * This operation is atomic and provides release barrier semantics.
> + *
> + * This is a bit of a one-trick-pony for the filemap code, which clears
> + * PG_locked and tests PG_waiters.
> + */
> +static inline bool
> +clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
> +{
> +	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
> +	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
> +}
> +/* Let everybody know we have it. */
> +#define clear_bit_unlock_is_negative_byte clear_bit_unlock_is_negative_byte
> +#endif
> +
> +#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_H */
> --
> 2.22.0.rc1.257.g3120a18244-goog
>
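
For reference, the opt-in pattern the patch establishes looks roughly as
follows. This is a minimal sketch, not code from the patch: the architecture
"foo" and the fallback body of arch_set_bit() are hypothetical (a real port
would supply its atomic asm implementation instead), while the instrumented
kernel-API wrappers come from include/asm-generic/bitops-instrumented.h as
added above.

    /* arch/foo/include/asm/bitops.h (hypothetical architecture) */
    #include <linux/bits.h>          /* BIT_WORD(), BIT_MASK() */
    #include <linux/compiler.h>      /* __always_inline */

    /*
     * The asm implementation keeps an arch_ name rather than the kernel
     * API name. NB: this placeholder body is NOT atomic; a real arch
     * would use an atomic read-modify-write instruction here.
     */
    static __always_inline void
    arch_set_bit(long nr, volatile unsigned long *addr)
    {
    	addr[BIT_WORD(nr)] |= BIT_MASK(nr);
    }

    /* ...define arch___set_bit(), arch_clear_bit(), etc. likewise... */

    /*
     * Including the generic header last then defines set_bit() and
     * friends as wrappers that call kasan_check_write()/_read() on the
     * accessed word before deferring to the arch_ implementations.
     */
    #include <asm-generic/bitops-instrumented.h>

Wired up this way, KASAN checks every bitop access even though the underlying
operation is implemented in arch-specific assembly.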