2022-09-08 15:21:11

by Uros Bizjak

Subject: [PATCH RESEND v2] sbitmap: Use atomic_long_try_cmpxchg in __sbitmap_queue_get_batch

Use atomic_long_try_cmpxchg() instead of the
atomic_long_cmpxchg(*ptr, old, new) == old pattern in
__sbitmap_queue_get_batch().
The x86 CMPXCHG instruction reports success in the ZF flag, so this
change saves a compare after the cmpxchg (and the related move
instruction in front of it).

Also, atomic_long_try_cmpxchg() implicitly assigns the value it read
from *ptr to "old" when the cmpxchg fails, which enables further
simplifications, e.g. the extra memory read in the loop can be avoided.

No functional change intended.
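
For illustration only, here is the general shape of the transformation
as a minimal sketch; the function names are placeholders, not kernel
API:

#include <linux/atomic.h>

/* Before: *v is re-read and explicitly compared on every iteration. */
static void set_bits_cmpxchg(atomic_long_t *v, unsigned long mask)
{
	unsigned long val, ret;

	do {
		val = atomic_long_read(v);
		ret = atomic_long_cmpxchg(v, val, val | mask);
	} while (ret != val);
}

/*
 * After: on failure, try_cmpxchg stores the value it observed in "val",
 * so the read moves out of the loop and the explicit compare disappears
 * (on x86, the success test becomes a branch on ZF set by CMPXCHG).
 */
static void set_bits_try_cmpxchg(atomic_long_t *v, unsigned long mask)
{
	unsigned long val = atomic_long_read(v);

	do {
	} while (!atomic_long_try_cmpxchg(v, &val, val | mask));
}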

Cc: Jens Axboe <[email protected]>
Signed-off-by: Uros Bizjak <[email protected]>
---
v2: Resend against linux-block for-6.1/block.
---
lib/sbitmap.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index a39b1a877366..299765a027bf 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -533,16 +533,16 @@ unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq, int nr_tags,
nr = find_first_zero_bit(&map->word, map_depth);
if (nr + nr_tags <= map_depth) {
atomic_long_t *ptr = (atomic_long_t *) &map->word;
- unsigned long val, ret;
+ unsigned long val;

get_mask = ((1UL << nr_tags) - 1) << nr;
+ val = READ_ONCE(map->word);
do {
- val = READ_ONCE(map->word);
if ((val & ~get_mask) != val)
goto next;
- ret = atomic_long_cmpxchg(ptr, val, get_mask | val);
- } while (ret != val);
- get_mask = (get_mask & ~ret) >> nr;
+ } while (!atomic_long_try_cmpxchg(ptr, &val,
+ get_mask | val));
+ get_mask = (get_mask & ~val) >> nr;
if (get_mask) {
*offset = nr + (index << sb->shift);
update_alloc_hint_after_get(sb, depth, hint,
--
2.37.2
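
As a worked example of the mask arithmetic in the hunk above (nr = 4
and nr_tags = 3 are values chosen purely for illustration):

	get_mask = ((1UL << 3) - 1) << 4;	/* 0b111 << 4 == 0x70 */

The (val & ~get_mask) != val test bails out to the next word whenever
one of those three bits is already set in "val". Once try_cmpxchg
succeeds, all three were free, so get_mask & ~val == get_mask, and the
final >> nr yields 0x7: the mask of the freshly allocated tags relative
to *offset.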


2022-09-08 16:12:34

by Jens Axboe

Subject: Re: [PATCH RESEND v2] sbitmap: Use atomic_long_try_cmpxchg in __sbitmap_queue_get_batch

On Thu, 8 Sep 2022 17:12:00 +0200, Uros Bizjak wrote:
> Use atomic_long_try_cmpxchg() instead of the
> atomic_long_cmpxchg(*ptr, old, new) == old pattern in
> __sbitmap_queue_get_batch().
> The x86 CMPXCHG instruction reports success in the ZF flag, so this
> change saves a compare after the cmpxchg (and the related move
> instruction in front of it).
>
> Also, atomic_long_try_cmpxchg() implicitly assigns the value it read
> from *ptr to "old" when the cmpxchg fails, which enables further
> simplifications, e.g. the extra memory read in the loop can be avoided.
>
> [...]

Applied, thanks!

[1/1] sbitmap: Use atomic_long_try_cmpxchg in __sbitmap_queue_get_batch
commit: c35227d4e8cbc70a6622cc7cc5f8c3bff513f1fa

Best regards,
--
Jens Axboe