2023-11-18 15:51:37

by Yury Norov

[permalink] [raw]
Subject: [PATCH 00/34] bitops: add atomic find_bit() operations

Add helpers around test_and_{set,clear}_bit() that allow searching for
clear or set bits and flipping them atomically.

The target patterns may look like this:

	for (idx = 0; idx < nbits; idx++)
		if (test_and_clear_bit(idx, bitmap))
			do_something(idx);

Or like this:

	do {
		bit = find_first_bit(bitmap, nbits);
		if (bit >= nbits)
			return nbits;
	} while (!test_and_clear_bit(bit, bitmap));
	return bit;

In both cases, the opencoded loop may be converted to a single function
or iterator call. Correspondingly:

	for_each_test_and_clear_bit(idx, bitmap, nbits)
		do_something(idx);

Or:
	return find_and_clear_bit(bitmap, nbits);

Obviously, the less routine code people have to write themselves, the
lower the probability of making a mistake. Patch #31 of this series fixes
one such error in the perf/m1 codebase.

Those are not only handy helpers, but they also resolve a non-trivial
issue of using the non-atomic find_bit() together with atomic
test_and_{set,clear}_bit().

The trick is that find_bit() implies that the bitmap is a regular
non-volatile piece of memory, so the compiler is allowed to use
optimization techniques such as re-fetching memory instead of caching it.

For example, find_first_bit() is implemented like this:

	for (idx = 0; idx * BITS_PER_LONG < sz; idx++) {
		val = addr[idx];
		if (val) {
			sz = min(idx * BITS_PER_LONG + __ffs(val), sz);
			break;
		}
	}

On register-memory architectures like x86, the compiler may decide to
access memory twice: first to compare against 0, and then again to fetch
the value to pass to __ffs().

When find_first_bit() runs on volatile memory, the memory may change in
between the two accesses, which may, for instance, lead to passing 0 to
__ffs(), whose result for 0 is undefined. This makes the call potentially
dangerous.

find_and_clear_bit(), as a wrapper around test_and_clear_bit(),
naturally treats the underlying bitmap as volatile memory and prevents
the compiler from applying such optimizations.

KCSAN catches exactly this type of situation and warns about concurrent
memory modifications. We can use it to reveal improper usage of
find_bit() and convert it to an atomic find_and_*_bit() as appropriate.

The 1st patch of the series adds the following atomic primitives:

	find_and_set_bit(addr, nbits);
	find_and_set_next_bit(addr, nbits, start);
	...

Here the find_and_{set,clear} part refers to the corresponding
test_and_{set,clear}_bit() function, and suffixes like _wrap or _lock
derive their semantics from the corresponding find() or test() functions.

For brevity, the naming omits the fact that find_and_set() searches for
a zero bit, and correspondingly, find_and_clear() searches for a set
bit.

The patch also adds iterators with atomic semantics, like
for_each_test_and_set_bit(). Here, the naming rule is simply to prefix
the corresponding atomic operation with 'for_each'.

This series is a result of discussion [1]. All find_bit() functions imply
exclusive access to the bitmaps. However, KCSAN reports quite a number
of warnings related to the find_bit() API. Some of them do not point
to real bugs because, in many situations, people intentionally allow
concurrent bitmap operations.

If so, find_bit() can be annotated such that KCSAN will ignore it:

	bit = data_race(find_first_bit(bitmap, nbits));

This series addresses the other important case, where people really need
atomic find ops. As the following patches show, the resulting code
looks safer and less verbose compared to opencoded loops followed by
atomic bit flips.

In [1], Mirsad reported a 2% slowdown in a single-threaded search test
when switching the find_bit() functions to treat bitmaps as volatile
arrays. On the other hand, the kernel test robot in the same thread
reported a +3.7% gain in the will-it-scale.per_thread_ops test.

Assuming that our compilers are sane and generate better code for
properly annotated data, the above discrepancy is not surprising: when
running on non-volatile bitmaps, plain find_bit() outperforms atomic
find_and_bit(), and vice versa.

So, all users of the find_bit() API where heavy concurrency is expected
are encouraged to switch to atomic find_and_bit() as appropriate.

The 1st patch of this series adds the atomic find_and_bit() API, and all
the following patches spread it over the kernel. They can be applied
separately from each other on a per-subsystem basis, or I can take them
through the bitmap tree, as appropriate.

[1] https://lore.kernel.org/lkml/[email protected]/T/#m3e7341eb3571753f3acf8fe166f3fb5b2c12e615

Yury Norov (34):
lib/find: add atomic find_bit() primitives
lib/sbitmap: make __sbitmap_get_word() use find_and_set_bit()
watch_queue: use atomic find_bit() in post_one_notification()
sched: add cpumask_find_and_set() and use it in __mm_cid_get()
mips: sgi-ip30: rework heart_alloc_int()
sparc: fix opencoded find_and_set_bit() in alloc_msi()
perf/arm: optimize opencoded atomic find_bit() API
drivers/perf: optimize ali_drw_get_counter_idx() by using find_bit()
dmaengine: idxd: optimize perfmon_assign_event()
ath10k: optimize ath10k_snoc_napi_poll() by using find_bit()
wifi: rtw88: optimize rtw_pci_tx_kick_off() by using find_bit()
wifi: intel: use atomic find_bit() API where appropriate
KVM: x86: hyper-v: optimize and cleanup kvm_hv_process_stimers()
PCI: hv: switch hv_get_dom_num() to use atomic find_bit()
scsi: use atomic find_bit() API where appropriate
powerpc: use atomic find_bit() API where appropriate
iommu: use atomic find_bit() API where appropriate
media: radio-shark: use atomic find_bit() API where appropriate
sfc: switch to using atomic find_bit() API where appropriate
tty: nozomi: optimize interrupt_handler()
usb: cdc-acm: optimize acm_softint()
block: null_blk: fix opencoded find_and_set_bit() in get_tag()
RDMA/rtrs: fix opencoded find_and_set_bit_lock() in
__rtrs_get_permit()
mISDN: optimize get_free_devid()
media: em28xx: cx231xx: fix opencoded find_and_set_bit()
ethernet: rocker: optimize ofdpa_port_internal_vlan_id_get()
serial: sc16is7xx: optimize sc16is7xx_alloc_line()
bluetooth: optimize cmtp_alloc_block_id()
net: smc: fix opencoded find_and_set_bit() in
smc_wr_tx_get_free_slot_index()
ALSA: use atomic find_bit() functions where applicable
drivers/perf: optimize m1_pmu_get_event_idx() by using find_bit() API
m68k: rework get_mmu_context()
microblaze: rework get_mmu_context()
sh: rework ilsel_enable()

arch/m68k/include/asm/mmu_context.h | 11 +-
arch/microblaze/include/asm/mmu_context_mm.h | 11 +-
arch/mips/sgi-ip30/ip30-irq.c | 12 +-
arch/powerpc/mm/book3s32/mmu_context.c | 10 +-
arch/powerpc/platforms/pasemi/dma_lib.c | 45 +--
arch/powerpc/platforms/powernv/pci-sriov.c | 12 +-
arch/sh/boards/mach-x3proto/ilsel.c | 4 +-
arch/sparc/kernel/pci_msi.c | 9 +-
arch/x86/kvm/hyperv.c | 39 ++-
drivers/block/null_blk/main.c | 41 +--
drivers/dma/idxd/perfmon.c | 8 +-
drivers/infiniband/ulp/rtrs/rtrs-clt.c | 15 +-
drivers/iommu/arm/arm-smmu/arm-smmu.h | 10 +-
drivers/iommu/msm_iommu.c | 18 +-
drivers/isdn/mISDN/core.c | 9 +-
drivers/media/radio/radio-shark.c | 5 +-
drivers/media/radio/radio-shark2.c | 5 +-
drivers/media/usb/cx231xx/cx231xx-cards.c | 16 +-
drivers/media/usb/em28xx/em28xx-cards.c | 37 +--
drivers/net/ethernet/rocker/rocker_ofdpa.c | 11 +-
drivers/net/ethernet/sfc/rx_common.c | 4 +-
drivers/net/ethernet/sfc/siena/rx_common.c | 4 +-
drivers/net/ethernet/sfc/siena/siena_sriov.c | 14 +-
drivers/net/wireless/ath/ath10k/snoc.c | 9 +-
.../net/wireless/intel/iwlegacy/4965-mac.c | 7 +-
drivers/net/wireless/intel/iwlegacy/common.c | 8 +-
drivers/net/wireless/intel/iwlwifi/dvm/sta.c | 8 +-
drivers/net/wireless/intel/iwlwifi/dvm/tx.c | 19 +-
drivers/net/wireless/realtek/rtw88/pci.c | 5 +-
drivers/net/wireless/realtek/rtw89/pci.c | 5 +-
drivers/pci/controller/pci-hyperv.c | 7 +-
drivers/perf/alibaba_uncore_drw_pmu.c | 10 +-
drivers/perf/apple_m1_cpu_pmu.c | 8 +-
drivers/perf/arm-cci.c | 23 +-
drivers/perf/arm-ccn.c | 10 +-
drivers/perf/arm_dmc620_pmu.c | 9 +-
drivers/perf/arm_pmuv3.c | 8 +-
drivers/scsi/mpi3mr/mpi3mr_os.c | 21 +-
drivers/scsi/qedi/qedi_main.c | 9 +-
drivers/scsi/scsi_lib.c | 5 +-
drivers/tty/nozomi.c | 5 +-
drivers/tty/serial/sc16is7xx.c | 8 +-
drivers/usb/class/cdc-acm.c | 5 +-
include/linux/cpumask.h | 12 +
include/linux/find.h | 289 ++++++++++++++++++
kernel/sched/sched.h | 52 +---
kernel/watch_queue.c | 6 +-
lib/find_bit.c | 85 ++++++
lib/sbitmap.c | 46 +--
net/bluetooth/cmtp/core.c | 10 +-
net/smc/smc_wr.c | 10 +-
sound/pci/hda/hda_codec.c | 7 +-
sound/usb/caiaq/audio.c | 13 +-
53 files changed, 588 insertions(+), 481 deletions(-)

--
2.39.2



2023-11-18 15:52:06

by Yury Norov

[permalink] [raw]
Subject: [PATCH 01/34] lib/find: add atomic find_bit() primitives

Add helpers around test_and_{set,clear}_bit() that allow searching for
clear or set bits and flipping them atomically.

The target patterns may look like this:

	for (idx = 0; idx < nbits; idx++)
		if (test_and_clear_bit(idx, bitmap))
			do_something(idx);

Or like this:

	do {
		bit = find_first_bit(bitmap, nbits);
		if (bit >= nbits)
			return nbits;
	} while (!test_and_clear_bit(bit, bitmap));
	return bit;

In both cases, the opencoded loop may be converted to a single function
or iterator call. Correspondingly:

	for_each_test_and_clear_bit(idx, bitmap, nbits)
		do_something(idx);

Or:
	return find_and_clear_bit(bitmap, nbits);

Obviously, the less routine code people have to write themselves, the
lower the probability of making a mistake.

Those are not only handy helpers, but they also resolve a non-trivial
issue of using the non-atomic find_bit() together with atomic
test_and_{set,clear}_bit().

The trick is that find_bit() implies that the bitmap is a regular
non-volatile piece of memory, so the compiler is allowed to use
optimization techniques such as re-fetching memory instead of caching it.

For example, find_first_bit() is implemented like this:

	for (idx = 0; idx * BITS_PER_LONG < sz; idx++) {
		val = addr[idx];
		if (val) {
			sz = min(idx * BITS_PER_LONG + __ffs(val), sz);
			break;
		}
	}

On register-memory architectures like x86, the compiler may decide to
access memory twice: first to compare against 0, and then again to fetch
the value to pass to __ffs().

When find_first_bit() runs on volatile memory, the memory may change in
between the two accesses, which may, for instance, lead to passing 0 to
__ffs(), whose result for 0 is undefined. This makes the call potentially
dangerous.

find_and_clear_bit(), as a wrapper around test_and_clear_bit(),
naturally treats the underlying bitmap as volatile memory and prevents
the compiler from applying such optimizations.

KCSAN catches exactly this type of situation and warns about concurrent
memory modifications. We can use it to reveal improper usage of
find_bit() and convert it to an atomic find_and_*_bit() as appropriate.

This patch adds the following atomic primitives:

	find_and_set_bit(addr, nbits);
	find_and_set_next_bit(addr, nbits, start);
	...

Here the find_and_{set,clear} part refers to the corresponding
test_and_{set,clear}_bit() function, and suffixes like _wrap or _lock
derive their semantics from the corresponding find() or test() functions.

For brevity, the naming omits the fact that find_and_set() searches for
a zero bit, and correspondingly, find_and_clear() searches for a set
bit.

The patch also adds iterators with atomic semantics, like
for_each_test_and_set_bit(). Here, the naming rule is simply to prefix
the corresponding atomic operation with 'for_each'.

All users of the find_bit() API where heavy concurrency is expected
are encouraged to switch to atomic find_and_bit() as appropriate.

Signed-off-by: Yury Norov <[email protected]>
---
include/linux/find.h | 289 +++++++++++++++++++++++++++++++++++++++++++
lib/find_bit.c | 85 +++++++++++++
2 files changed, 374 insertions(+)

diff --git a/include/linux/find.h b/include/linux/find.h
index 5e4f39ef2e72..e8567f336f42 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -32,6 +32,16 @@ extern unsigned long _find_first_and_bit(const unsigned long *addr1,
extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);

+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr, unsigned long nbits,
+				     unsigned long start);
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr, unsigned long nbits,
+					  unsigned long start);
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr, unsigned long nbits,
+				       unsigned long start);
+
+
#ifdef __BIG_ENDIAN
unsigned long _find_first_zero_bit_le(const unsigned long *addr, unsigned long size);
unsigned long _find_next_zero_bit_le(const unsigned long *addr, unsigned
@@ -460,6 +470,267 @@ unsigned long __for_each_wrap(const unsigned long *bitmap, unsigned long size,
 	return bit < start ? bit : size;
 }

+/**
+ * find_and_set_bit - Find a zero bit and set it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the bit found is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, 0);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_bit(addr, nbits);
+}
+
+/**
+ * find_and_set_next_bit - Find a zero bit and set it, starting from @offset
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the bit found is the 1st bit in the bitmap, starting from @offset.
+ * It's also not guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit(volatile unsigned long *addr,
+				    unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, offset);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_next_bit(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap - find and set bit starting at @offset, wrapping around zero
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap(volatile unsigned long *addr,
+				    unsigned long nbits, unsigned long offset)
+{
+	unsigned long bit = find_and_set_next_bit(addr, nbits, offset);
+
+	if (bit < nbits || offset == 0)
+		return bit;
+
+	bit = find_and_set_bit(addr, offset);
+	return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_set_bit_lock - find a zero bit, then set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap nbits in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the bit found is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, 0);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit_lock(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_bit_lock(addr, nbits);
+}
+
+/**
+ * find_and_set_next_bit_lock - find a zero bit and set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the bit found is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit_lock(volatile unsigned long *addr,
+					 unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr | ~GENMASK(nbits - 1, offset);
+			if (val == ~0UL)
+				return nbits;
+			ret = ffz(val);
+		} while (test_and_set_bit_lock(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_set_next_bit_lock(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap_lock - find a zero bit starting at @offset and set it
+ *			with lock, wrapping around zero if nothing found
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or the first clear bit up to
+ * @offset, while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap_lock(volatile unsigned long *addr,
+					 unsigned long nbits, unsigned long offset)
+{
+	unsigned long bit = find_and_set_next_bit_lock(addr, nbits, offset);
+
+	if (bit < nbits || offset == 0)
+		return bit;
+
+	bit = find_and_set_bit_lock(addr, offset);
+	return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_clear_bit - Find a set bit and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline unsigned long find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr & GENMASK(nbits - 1, 0);
+			if (val == 0)
+				return nbits;
+			ret = __ffs(val);
+		} while (!test_and_clear_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_clear_bit(addr, nbits);
+}
+
+/**
+ * find_and_clear_next_bit - Find a set bit next after @offset, and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: bit offset at which to start searching
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the bit found is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, there's no set bits after @offset.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_clear_next_bit(volatile unsigned long *addr,
+				      unsigned long nbits, unsigned long offset)
+{
+	if (small_const_nbits(nbits)) {
+		unsigned long val, ret;
+
+		do {
+			val = *addr & GENMASK(nbits - 1, offset);
+			if (val == 0)
+				return nbits;
+			ret = __ffs(val);
+		} while (!test_and_clear_bit(ret, addr));
+
+		return ret;
+	}
+
+	return _find_and_clear_next_bit(addr, nbits, offset);
+}
+
/**
* find_next_clump8 - find next 8-bit clump with set bits in a memory region
* @clump: location to store copy of found clump
@@ -577,6 +848,24 @@ unsigned long find_next_bit_le(const void *addr, unsigned
 #define for_each_set_bit_from(bit, addr, size) \
 	for (; (bit) = find_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)

+/* same as for_each_set_bit() but atomically clears each found bit */
+#define for_each_test_and_clear_bit(bit, addr, size) \
+	for ((bit) = 0; \
+	     (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+/* same as for_each_clear_bit() but atomically sets each found bit */
+#define for_each_test_and_set_bit(bit, addr, size) \
+	for ((bit) = 0; \
+	     (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+/* same as for_each_clear_bit_from() but atomically sets each found bit */
+#define for_each_test_and_set_bit_from(bit, addr, size) \
+	for (; \
+	     (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+	     (bit)++)
+
+
 #define for_each_clear_bit(bit, addr, size) \
 	for ((bit) = 0; \
 	     (bit) = find_next_zero_bit((addr), (size), (bit)), (bit) < (size); \
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 32f99e9a670e..c9b6b9f96610 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -116,6 +116,91 @@ unsigned long _find_first_and_bit(const unsigned long *addr1,
EXPORT_SYMBOL(_find_first_and_bit);
#endif

+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit);
+
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr,
+				     unsigned long nbits, unsigned long start)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit);
+
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit_lock(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit_lock);
+
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr,
+					  unsigned long nbits, unsigned long start)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+		if (bit >= nbits)
+			return nbits;
+	} while (test_and_set_bit_lock(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit_lock);
+
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+	unsigned long bit;
+
+	do {
+		bit = FIND_FIRST_BIT(addr[idx], /* nop */, nbits);
+		if (bit >= nbits)
+			return nbits;
+	} while (!test_and_clear_bit(bit, addr));
+
+	return bit;
+}
+EXPORT_SYMBOL(_find_and_clear_bit);
+
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr,
+				       unsigned long nbits, unsigned long start)
+{
+	do {
+		start = FIND_NEXT_BIT(addr[idx], /* nop */, nbits, start);
+		if (start >= nbits)
+			return nbits;
+	} while (!test_and_clear_bit(start, addr));
+
+	return start;
+}
+EXPORT_SYMBOL(_find_and_clear_next_bit);
+
+
#ifndef find_first_zero_bit
/*
* Find the first cleared bit in a memory region.
--
2.39.2


2023-11-18 15:52:58

by Yury Norov

[permalink] [raw]
Subject: [PATCH 28/34] bluetooth: optimize cmtp_alloc_block_id()

Instead of polling every bit in blockids, switch to the dedicated
find_and_set_bit(), and make the function a simple one-liner.

Signed-off-by: Yury Norov <[email protected]>
---
net/bluetooth/cmtp/core.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c
index 90d130588a3e..b1330acbbff3 100644
--- a/net/bluetooth/cmtp/core.c
+++ b/net/bluetooth/cmtp/core.c
@@ -88,15 +88,9 @@ static void __cmtp_copy_session(struct cmtp_session *session, struct cmtp_connin

 static inline int cmtp_alloc_block_id(struct cmtp_session *session)
 {
-	int i, id = -1;
+	int id = find_and_set_bit(&session->blockids, 16);
 
-	for (i = 0; i < 16; i++)
-		if (!test_and_set_bit(i, &session->blockids)) {
-			id = i;
-			break;
-		}
-
-	return id;
+	return id < 16 ? id : -1;
 }

static inline void cmtp_free_block_id(struct cmtp_session *session, int id)
--
2.39.2


2023-11-18 16:34:33

by bluez.test.bot

[permalink] [raw]
Subject: RE: bitops: add atomic find_bit() operations

This is an automated email; please do not reply to it!

Dear submitter,

Thank you for submitting the patches to the linux bluetooth mailing list.
This is a CI test results with your patch series:
PW Link: https://patchwork.kernel.org/project/bluetooth/list/?series=802169

---Test result---

Test Summary:
CheckPatch FAIL 2.01 seconds
GitLint FAIL 0.82 seconds
SubjectPrefix FAIL 0.48 seconds
BuildKernel PASS 27.26 seconds
CheckAllWarning PASS 30.11 seconds
CheckSparse PASS 35.59 seconds
CheckSmatch PASS 98.12 seconds
BuildKernel32 PASS 26.55 seconds
TestRunnerSetup PASS 413.64 seconds
TestRunner_l2cap-tester PASS 22.93 seconds
TestRunner_iso-tester PASS 40.07 seconds
TestRunner_bnep-tester PASS 6.98 seconds
TestRunner_mgmt-tester PASS 160.00 seconds
TestRunner_rfcomm-tester PASS 10.93 seconds
TestRunner_sco-tester PASS 14.47 seconds
TestRunner_ioctl-tester PASS 12.11 seconds
TestRunner_mesh-tester PASS 8.92 seconds
TestRunner_smp-tester PASS 9.74 seconds
TestRunner_userchan-tester PASS 7.23 seconds
IncrementalBuild PASS 29.81 seconds

Details
##############################
Test: CheckPatch - FAIL
Desc: Run checkpatch.pl script
Output:
[01/34] lib/find: add atomic find_bit() primitives
WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#266: FILE: include/linux/find.h:35:
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits);

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#267: FILE: include/linux/find.h:36:
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr, unsigned long nbits,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#269: FILE: include/linux/find.h:38:
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits);

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#270: FILE: include/linux/find.h:39:
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr, unsigned long nbits,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#272: FILE: include/linux/find.h:41:
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits);

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#273: FILE: include/linux/find.h:42:
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr, unsigned long nbits,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#300: FILE: include/linux/find.h:490:
+unsigned long find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#337: FILE: include/linux/find.h:527:
+unsigned long find_and_set_next_bit(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#366: FILE: include/linux/find.h:556:
+unsigned long find_and_set_bit_wrap(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#395: FILE: include/linux/find.h:585:
+unsigned long find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#431: FILE: include/linux/find.h:621:
+unsigned long find_and_set_next_bit_lock(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#461: FILE: include/linux/find.h:651:
+unsigned long find_and_set_bit_wrap_lock(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#489: FILE: include/linux/find.h:679:
+static inline unsigned long find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#525: FILE: include/linux/find.h:715:
+unsigned long find_and_clear_next_bit(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#580: FILE: lib/find_bit.c:119:
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#594: FILE: lib/find_bit.c:133:
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#609: FILE: lib/find_bit.c:148:
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#623: FILE: lib/find_bit.c:162:
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr,

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#638: FILE: lib/find_bit.c:177:
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)

WARNING: Use of volatile is usually wrong: see Documentation/process/volatile-considered-harmful.rst
#652: FILE: lib/find_bit.c:191:
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr,

total: 0 errors, 20 warnings, 398 lines checked

NOTE: For some of the reported defects, checkpatch may be able to
mechanically convert to the typical style using --fix or --fix-inplace.

/github/workspace/src/src/13460133.patch has style problems, please review.

NOTE: Ignored message types: UNKNOWN_COMMIT_ID

NOTE: If any of the errors are false positives, please report
them to the maintainer, see CHECKPATCH in MAINTAINERS.


##############################
Test: GitLint - FAIL
Desc: Run gitlint
Output:
[01/34] lib/find: add atomic find_bit() primitives

WARNING: I3 - ignore-body-lines: gitlint will be switching from using Python regex 'match' (match beginning) to 'search' (match anywhere) semantics. Please review your ignore-body-lines.regex option accordingly. To remove this warning, set general.regex-style-search=True. More details: https://jorisroovers.github.io/gitlint/configuration/#regex-style-search
8: B3 Line contains hard tab characters (\t): " for (idx = 0; idx < nbits; idx++)"
9: B3 Line contains hard tab characters (\t): " if (test_and_clear_bit(idx, bitmap))"
10: B3 Line contains hard tab characters (\t): " do_something(idx);"
14: B3 Line contains hard tab characters (\t): " do {"
15: B3 Line contains hard tab characters (\t): " bit = find_first_bit(bitmap, nbits);"
16: B3 Line contains hard tab characters (\t): " if (bit >= nbits)"
17: B3 Line contains hard tab characters (\t): " return nbits;"
18: B3 Line contains hard tab characters (\t): " } while (!test_and_clear_bit(bit, bitmap));"
19: B3 Line contains hard tab characters (\t): " return bit;"
24: B3 Line contains hard tab characters (\t): " for_each_test_and_clear_bit(idx, bitmap, nbits)"
25: B3 Line contains hard tab characters (\t): " do_something(idx);"
28: B3 Line contains hard tab characters (\t): " return find_and_clear_bit(bitmap, nbits);"
69: B3 Line contains hard tab characters (\t): " find_and_set_bit(addr, nbits);"
70: B3 Line contains hard tab characters (\t): " find_and_set_next_bit(addr, nbits, start);"
71: B3 Line contains hard tab characters (\t): " ..."
##############################
Test: SubjectPrefix - FAIL
Desc: Check subject contains "Bluetooth" prefix
Output:
"Bluetooth: " prefix is not specified in the subject
"Bluetooth: " prefix is not specified in the subject


---
Regards,
Linux Bluetooth