Date: Wed, 29 May 2019 16:15:01 +0200
From: Marco Elver <elver@google.com>
To: peterz@infradead.org, aryabinin@virtuozzo.com, dvyukov@google.com,
	glider@google.com, andreyknvl@google.com, mark.rutland@arm.com
Cc: corbet@lwn.net, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org, arnd@arndb.de, jpoimboe@redhat.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, kasan-dev@googlegroups.com,
	Marco Elver <elver@google.com>
Subject: [PATCH v2 3/3] asm-generic, x86: Add bitops instrumentation for KASAN
Message-Id: <20190529141500.193390-4-elver@google.com>
In-Reply-To: <20190529141500.193390-1-elver@google.com>
References: <20190529141500.193390-1-elver@google.com>

This adds a new header to asm-generic to allow optionally instrumenting
architecture-specific asm implementations of bitops.

This change includes the required change for x86 as a reference, and
updates the kernel API documentation to point to bitops-instrumented.h
instead. Rationale: the functions in x86's bitops.h are no longer the
kernel API functions, but instead the arch_ prefixed functions, which
are then instrumented via bitops-instrumented.h.

Other architectures can similarly add support for asm implementations
of bitops.

The documentation text has been copied/moved, and *no* changes to it
have been made in this patch.

Tested: using lib/test_kasan with bitops tests (pre-requisite patch).

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=198439

Signed-off-by: Marco Elver <elver@google.com>
---
Changes in v2:
* Instrument word-sized accesses, as specified by the interface.
---
 Documentation/core-api/kernel-api.rst     |   2 +-
 arch/x86/include/asm/bitops.h             | 210 ++++----------
 include/asm-generic/bitops-instrumented.h | 317 ++++++++++++++++++++++
 3 files changed, 370 insertions(+), 159 deletions(-)
 create mode 100644 include/asm-generic/bitops-instrumented.h

diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
index a29c99d13331..65266fa1b706 100644
--- a/Documentation/core-api/kernel-api.rst
+++ b/Documentation/core-api/kernel-api.rst
@@ -51,7 +51,7 @@ The Linux kernel provides more basic utility functions.
 Bit Operations
 --------------
 
-.. kernel-doc:: arch/x86/include/asm/bitops.h
+.. kernel-doc:: include/asm-generic/bitops-instrumented.h
    :internal:
 
 Bitmap Operations
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 8e790ec219a5..8ebf7af9a0f4 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -49,23 +49,8 @@
 #define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
 #define CONST_MASK(nr)			(1 << ((nr) & 7))
 
-/**
- * set_bit - Atomically set a bit in memory
- * @nr: the bit to set
- * @addr: the address to start counting from
- *
- * This function is atomic and may not be reordered. See __set_bit()
- * if you do not require the atomic guarantees.
- *
- * Note: there are no guarantees that this function will not be reordered
- * on non x86 architectures, so if you are writing portable code,
- * make sure not to rely on its reordering guarantees.
- *
- * Note that @nr may be almost arbitrarily large; this function is not
- * restricted to acting on a single-word quantity.
- */
 static __always_inline void
-set_bit(long nr, volatile unsigned long *addr)
+arch_set_bit(long nr, volatile unsigned long *addr)
 {
 	if (IS_IMMEDIATE(nr)) {
 		asm volatile(LOCK_PREFIX "orb %1,%0"
@@ -77,33 +62,17 @@ set_bit(long nr, volatile unsigned long *addr)
 		: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
 	}
 }
+#define arch_set_bit arch_set_bit
 
-/**
- * __set_bit - Set a bit in memory
- * @nr: the bit to set
- * @addr: the address to start counting from
- *
- * Unlike set_bit(), this function is non-atomic and may be reordered.
- * If it's called on the same region of memory simultaneously, the effect
- * may be that only one operation succeeds.
- */
-static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
+static __always_inline void
+arch___set_bit(long nr, volatile unsigned long *addr)
 {
 	asm volatile(__ASM_SIZE(bts) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
 }
+#define arch___set_bit arch___set_bit
 
-/**
- * clear_bit - Clears a bit in memory
- * @nr: Bit to clear
- * @addr: Address to start counting from
- *
- * clear_bit() is atomic and may not be reordered. However, it does
- * not contain a memory barrier, so if it is used for locking purposes,
- * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
- * in order to ensure changes are visible on other processors.
- */
 static __always_inline void
-clear_bit(long nr, volatile unsigned long *addr)
+arch_clear_bit(long nr, volatile unsigned long *addr)
 {
 	if (IS_IMMEDIATE(nr)) {
 		asm volatile(LOCK_PREFIX "andb %1,%0"
@@ -114,27 +83,25 @@ clear_bit(long nr, volatile unsigned long *addr)
 		: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
 	}
 }
+#define arch_clear_bit arch_clear_bit
 
-/*
- * clear_bit_unlock - Clears a bit in memory
- * @nr: Bit to clear
- * @addr: Address to start counting from
- *
- * clear_bit() is atomic and implies release semantics before the memory
- * operation. It can be used for an unlock.
- */
-static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
+static __always_inline void
+arch_clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
 	barrier();
-	clear_bit(nr, addr);
+	arch_clear_bit(nr, addr);
 }
+#define arch_clear_bit_unlock arch_clear_bit_unlock
 
-static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
+static __always_inline void
+arch___clear_bit(long nr, volatile unsigned long *addr)
 {
 	asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
 }
+#define arch___clear_bit arch___clear_bit
 
-static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch_clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
 {
 	bool negative;
 	asm volatile(LOCK_PREFIX "andb %2,%1"
@@ -143,48 +110,25 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
 		: "ir" ((char) ~(1 << nr)) : "memory");
 	return negative;
 }
+#define arch_clear_bit_unlock_is_negative_byte \
+	arch_clear_bit_unlock_is_negative_byte
 
-// Let everybody know we have it
-#define clear_bit_unlock_is_negative_byte clear_bit_unlock_is_negative_byte
-
-/*
- * __clear_bit_unlock - Clears a bit in memory
- * @nr: Bit to clear
- * @addr: Address to start counting from
- *
- * __clear_bit() is non-atomic and implies release semantics before the memory
- * operation. It can be used for an unlock if no other CPUs can concurrently
- * modify other bits in the word.
- */
-static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
+static __always_inline void
+arch___clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	__clear_bit(nr, addr);
+	arch___clear_bit(nr, addr);
 }
+#define arch___clear_bit_unlock arch___clear_bit_unlock
 
-/**
- * __change_bit - Toggle a bit in memory
- * @nr: the bit to change
- * @addr: the address to start counting from
- *
- * Unlike change_bit(), this function is non-atomic and may be reordered.
- * If it's called on the same region of memory simultaneously, the effect
- * may be that only one operation succeeds.
- */
-static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
+static __always_inline void
+arch___change_bit(long nr, volatile unsigned long *addr)
 {
 	asm volatile(__ASM_SIZE(btc) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
 }
+#define arch___change_bit arch___change_bit
 
-/**
- * change_bit - Toggle a bit in memory
- * @nr: Bit to change
- * @addr: Address to start counting from
- *
- * change_bit() is atomic and may not be reordered.
- * Note that @nr may be almost arbitrarily large; this function is not
- * restricted to acting on a single-word quantity.
- */
-static __always_inline void change_bit(long nr, volatile unsigned long *addr)
+static __always_inline void
+arch_change_bit(long nr, volatile unsigned long *addr)
 {
 	if (IS_IMMEDIATE(nr)) {
 		asm volatile(LOCK_PREFIX "xorb %1,%0"
@@ -195,43 +139,24 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
 		: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
 	}
 }
+#define arch_change_bit arch_change_bit
 
-/**
- * test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static __always_inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch_test_and_set_bit(long nr, volatile unsigned long *addr)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(bts), *addr, c, "Ir", nr);
 }
+#define arch_test_and_set_bit arch_test_and_set_bit
 
-/**
- * test_and_set_bit_lock - Set a bit and return its old value for lock
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This is the same as test_and_set_bit on x86.
- */
 static __always_inline bool
-test_and_set_bit_lock(long nr, volatile unsigned long *addr)
+arch_test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 {
-	return test_and_set_bit(nr, addr);
+	return arch_test_and_set_bit(nr, addr);
 }
+#define arch_test_and_set_bit_lock arch_test_and_set_bit_lock
 
-/**
- * __test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail. You must protect multiple accesses with a lock.
- */
-static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch___test_and_set_bit(long nr, volatile unsigned long *addr)
 {
 	bool oldbit;
 
@@ -241,37 +166,17 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
 		: ADDR, "Ir" (nr) : "memory");
 	return oldbit;
 }
+#define arch___test_and_set_bit arch___test_and_set_bit
 
-/**
- * test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static __always_inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch_test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btr), *addr, c, "Ir", nr);
 }
+#define arch_test_and_clear_bit arch_test_and_clear_bit
 
-/**
- * __test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail. You must protect multiple accesses with a lock.
- *
- * Note: the operation is performed atomically with respect to
- * the local CPU, but not other CPUs. Portable code should not
- * rely on this behaviour.
- * KVM relies on this behaviour on x86 for modifying memory that is also
- * accessed from a hypervisor on the same CPU if running in a VM: don't change
- * this without also updating arch/x86/kernel/kvm.c
- */
-static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch___test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
 	bool oldbit;
 
@@ -281,9 +186,10 @@ static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long
 		: ADDR, "Ir" (nr) : "memory");
 	return oldbit;
 }
+#define arch___test_and_clear_bit arch___test_and_clear_bit
 
-/* WARNING: non atomic and it can be reordered! */
-static __always_inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch___test_and_change_bit(long nr, volatile unsigned long *addr)
 {
 	bool oldbit;
 
@@ -294,19 +200,14 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
 	return oldbit;
 }
+#define arch___test_and_change_bit arch___test_and_change_bit
 
-/**
- * test_and_change_bit - Change a bit and return its old value
- * @nr: Bit to change
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static __always_inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
+static __always_inline bool
+arch_test_and_change_bit(long nr, volatile unsigned long *addr)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX __ASM_SIZE(btc), *addr, c, "Ir", nr);
 }
+#define arch_test_and_change_bit arch_test_and_change_bit
 
 static __always_inline bool constant_test_bit(long nr, const volatile unsigned long *addr)
 {
@@ -326,16 +227,7 @@ static __always_inline bool variable_test_bit(long nr, volatile const unsigned l
 	return oldbit;
 }
 
-#if 0 /* Fool kernel-doc since it doesn't do macros yet */
-/**
- * test_bit - Determine whether a bit is set
- * @nr: bit number to test
- * @addr: Address to start counting from
- */
-static bool test_bit(int nr, const volatile unsigned long *addr);
-#endif
-
-#define test_bit(nr, addr)			\
+#define arch_test_bit(nr, addr)			\
 	(__builtin_constant_p((nr))		\
 	 ? constant_test_bit((nr), (addr))	\
 	 : variable_test_bit((nr), (addr)))
@@ -504,6 +396,8 @@ static __always_inline int fls64(__u64 x)
 
 #include
 
+#include <asm-generic/bitops-instrumented.h>
+
 #include
 
 #include
diff --git a/include/asm-generic/bitops-instrumented.h b/include/asm-generic/bitops-instrumented.h
new file mode 100644
index 000000000000..b01b0dd93964
--- /dev/null
+++ b/include/asm-generic/bitops-instrumented.h
@@ -0,0 +1,317 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This file provides wrappers with sanitizer instrumentation for bit
+ * operations.
+ *
+ * To use this functionality, an arch's bitops.h file needs to define each of
+ * the below bit operations with an arch_ prefix (e.g. arch_set_bit(),
+ * arch___set_bit(), etc.), #define each provided arch_ function, and include
+ * this file after their definitions. For undefined arch_ functions, it is
+ * assumed that they are provided via asm-generic/bitops, which are implicitly
+ * instrumented.
+ */
+#ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_H
+#define _ASM_GENERIC_BITOPS_INSTRUMENTED_H
+
+#include <linux/kasan-checks.h>
+
+#if defined(arch_set_bit)
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This function is atomic and may not be reordered. See __set_bit()
+ * if you do not require the atomic guarantees.
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch_set_bit(nr, addr);
+}
+#endif
+
+#if defined(arch___set_bit)
+/**
+ * __set_bit - Set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Unlike set_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
+ */
+static inline void __set_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch___set_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_clear_bit)
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit() is atomic and may not be reordered. However, it does
+ * not contain a memory barrier, so if it is used for locking purposes,
+ * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
+ * in order to ensure changes are visible on other processors.
+ */
+static inline void clear_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch_clear_bit(nr, addr);
+}
+#endif
+
+#if defined(arch___clear_bit)
+/**
+ * __clear_bit - Clears a bit in memory
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * Unlike clear_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
+ */
+static inline void __clear_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch___clear_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_clear_bit_unlock)
+/**
+ * clear_bit_unlock - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * clear_bit_unlock() is atomic and implies release semantics before the memory
+ * operation. It can be used for an unlock.
+ */
+static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch_clear_bit_unlock(nr, addr);
+}
+#endif
+
+#if defined(arch___clear_bit_unlock)
+/**
+ * __clear_bit_unlock - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * __clear_bit_unlock() is non-atomic and implies release semantics before the
+ * memory operation. It can be used for an unlock if no other CPUs can
+ * concurrently modify other bits in the word.
+ */
+static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch___clear_bit_unlock(nr, addr);
+}
+#endif
+
+#if defined(arch_change_bit)
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() is atomic and may not be reordered.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch_change_bit(nr, addr);
+}
+#endif
+
+#if defined(arch___change_bit)
+/**
+ * __change_bit - Toggle a bit in memory
+ * @nr: the bit to change
+ * @addr: the address to start counting from
+ *
+ * Unlike change_bit(), this function is non-atomic and may be reordered.
+ * If it's called on the same region of memory simultaneously, the effect
+ * may be that only one operation succeeds.
+ */
+static inline void __change_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	arch___change_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_test_and_set_bit)
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch_test_and_set_bit(nr, addr);
+}
+#endif
+
+#if defined(arch___test_and_set_bit)
+/**
+ * __test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ */
+static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch___test_and_set_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_test_and_set_bit_lock)
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics if
+ * the returned value is 0.
+ * It can be used to implement bit locks.
+ */
+static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch_test_and_set_bit_lock(nr, addr);
+}
+#endif
+
+#if defined(arch_test_and_clear_bit)
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch_test_and_clear_bit(nr, addr);
+}
+#endif
+
+#if defined(arch___test_and_clear_bit)
+/**
+ * __test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ *
+ * Note: the operation is performed atomically with respect to
+ * the local CPU, but not other CPUs. Portable code should not
+ * rely on this behaviour.
+ * KVM relies on this behaviour on x86 for modifying memory that is also
+ * accessed from a hypervisor on the same CPU if running in a VM: don't change
+ * this without also updating arch/x86/kernel/kvm.c
+ */
+static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch___test_and_clear_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_test_and_change_bit)
+/**
+ * test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch_test_and_change_bit(nr, addr);
+}
+#endif
+
+#if defined(arch___test_and_change_bit)
+/**
+ * __test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail. You must protect multiple accesses with a lock.
+ */
+static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch___test_and_change_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_test_bit)
+/**
+ * test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ */
+static inline bool test_bit(long nr, const volatile unsigned long *addr)
+{
+	kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+	return arch_test_bit(nr, addr);
+}
+#endif
+
+#if defined(arch_clear_bit_unlock_is_negative_byte)
+/**
+ * clear_bit_unlock_is_negative_byte - Clear a bit in memory and test if bottom
+ * byte is negative, for unlock.
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * This is a bit of a one-trick-pony for the filemap code, which clears
+ * PG_locked and tests PG_waiters.
+ */
+static inline bool
+clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+{
+	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
+}
+/* Let everybody know we have it. */
+#define clear_bit_unlock_is_negative_byte clear_bit_unlock_is_negative_byte
+#endif
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_H */
-- 
2.22.0.rc1.257.g3120a18244-goog
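
For illustration, a minimal sketch of the opt-in mechanism the new header
expects from an architecture. This is not part of the patch: the
architecture name "foo" and the __atomic-builtin body are hypothetical
placeholders; only the arch_ prefix, the #define marker, and including
asm-generic/bitops-instrumented.h after all definitions are the convention
the header documents.

/* Hypothetical arch/foo/include/asm/bitops.h fragment (sketch only). */
#ifndef _ASM_FOO_BITOPS_H
#define _ASM_FOO_BITOPS_H

#include <linux/bits.h>
#include <linux/compiler.h>

/* Arch-specific implementation, named with the arch_ prefix. */
static __always_inline void arch_set_bit(long nr, volatile unsigned long *addr)
{
	/* Stand-in for the arch's atomic asm: atomically OR in the bit mask. */
	__atomic_fetch_or((unsigned long *)(addr + BIT_WORD(nr)),
			  BIT_MASK(nr), __ATOMIC_RELAXED);
}
/* Tell bitops-instrumented.h this arch provides its own arch_set_bit(). */
#define arch_set_bit arch_set_bit

/*
 * Any arch_ operation left undefined is assumed to come from
 * asm-generic/bitops, which is instrumented implicitly.
 */

/*
 * Included last, after the arch_ definitions: this emits the instrumented
 * set_bit(), which calls kasan_check_write(addr + BIT_WORD(nr), sizeof(long))
 * before forwarding to arch_set_bit().
 */
#include <asm-generic/bitops-instrumented.h>

#endif /* _ASM_FOO_BITOPS_H */

With KASAN enabled, a call such as set_bit(nr, p) is then checked against
the whole word containing the bit before the arch implementation runs, which
is how these wrappers catch out-of-bounds and use-after-free accesses made
through the bitops API.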