2023-10-25 08:38:41

by Alexander Potapenko

Subject: [PATCH v9 1/2] lib/bitmap: add bitmap_{read,write}()

From: Syed Nayyar Waris <[email protected]>

The two new functions allow reading/writing values of up to BITS_PER_LONG
bits at an arbitrary position in the bitmap.
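
For illustration, a caller packing an 11-bit field at an arbitrary offset
might do the following (a minimal sketch; the offset, width and value are
made up for the example):

	#include <linux/bitmap.h>

	DECLARE_BITMAP(map, 128);
	unsigned long v;

	bitmap_zero(map, 128);
	/* Bits 60..70 cross an unsigned long boundary on a 64-bit kernel. */
	bitmap_write(map, 0x5a5, 60, 11);
	v = bitmap_read(map, 60, 11);	/* v == 0x5a5 */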

The code was taken from "bitops: Introduce the for_each_set_clump macro"
by Syed Nayyar Waris with a number of changes and simplifications:
- instead of using roundup(), which adds an unnecessary dependency
on <linux/math.h>, we calculate space as BITS_PER_LONG-offset;
- indentation is reduced by not using else-clauses (suggested by
checkpatch for bitmap_get_value());
- bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
and bitmap_write();
- some redundant computations are omitted.

Cc: Arnd Bergmann <[email protected]>
Signed-off-by: Syed Nayyar Waris <[email protected]>
Signed-off-by: William Breathitt Gray <[email protected]>
Link: https://lore.kernel.org/lkml/fe12eedf3666f4af5138de0e70b67a07c7f40338.1592224129.git.syednwaris@gmail.com/
Suggested-by: Yury Norov <[email protected]>
Co-developed-by: Alexander Potapenko <[email protected]>
Signed-off-by: Alexander Potapenko <[email protected]>

---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/[email protected]/T/)

This patch was previously called "lib/bitmap: add
bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/[email protected]/)

v8:
- as suggested by Andy Shevchenko, handle reads/writes of more than
BITS_PER_LONG bits, add a note for 32-bit systems

v7:
- Address comments by Yury Norov, Andy Shevchenko, Rasmus Villemoes:
- update code comments;
- get rid of GENMASK();
- s/assign_bit/__assign_bit;
- more vertical whitespace for better readability;
- more compact code for bitmap_write() (now for real)

v6:
- As suggested by Yury Norov, do not require bitmap_read(..., 0) to
return 0.

v5:
- Address comments by Yury Norov:
- updated code comments and patch title/description
- replace GENMASK(nbits - 1, 0) with BITMAP_LAST_WORD_MASK(nbits)
- more compact bitmap_write() implementation

v4:
- Address comments by Andy Shevchenko and Yury Norov:
- prevent passing values >= 64 to GENMASK()
- fix commit authorship
- change comments
- check for unlikely(nbits==0)
- drop unnecessary const declarations
- fix kernel-doc comments
- rename bitmap_{get,set}_value() to bitmap_{read,write}()
---
include/linux/bitmap.h | 81 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 81 insertions(+)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 03644237e1efb..d63d5e4b4ab54 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -77,6 +77,10 @@ struct device;
* bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst
* bitmap_get_value8(map, start) Get 8bit value from map at start
* bitmap_set_value8(map, value, start) Set 8bit value to map at start
+ * bitmap_read(map, start, nbits) Read an nbits-sized value from
+ * map at start
+ * bitmap_write(map, value, start, nbits) Write an nbits-sized value to
+ * map at start
*
* Note, bitmap_zero() and bitmap_fill() operate over the region of
* unsigned longs, that is, bits behind bitmap till the unsigned long
@@ -599,6 +603,83 @@ static inline void bitmap_set_value8(unsigned long *map, unsigned long value,
map[index] |= value << offset;
}

+/**
+ * bitmap_read - read a value of n-bits from the memory region
+ * @map: address to the bitmap memory region
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
+ *
+ * Returns: value of nbits located at the @start bit offset within the @map
+ * memory region.
+ *
+ * Note: callers on 32-bit systems must be careful to not attempt reading more
+ * than sizeof(unsigned long).
+ */
+static inline unsigned long bitmap_read(const unsigned long *map,
+ unsigned long start,
+ unsigned long nbits)
+{
+ size_t index = BIT_WORD(start);
+ unsigned long offset = start % BITS_PER_LONG;
+ unsigned long space = BITS_PER_LONG - offset;
+ unsigned long value_low, value_high;
+
+ if (unlikely(!nbits || nbits > BITS_PER_LONG))
+ return 0;
+
+ if (space >= nbits)
+ return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
+
+ value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
+ value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
+ return (value_low >> offset) | (value_high << space);
+}
+
+/**
+ * bitmap_write - write n-bit value within a memory region
+ * @map: address to the bitmap memory region
+ * @value: value to write, clamped to nbits
+ * @start: bit offset of the n-bit value
+ * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
+ *
+ * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
+ * i.e. bits beyond @nbits are ignored:
+ *
+ * for (bit = 0; bit < nbits; bit++)
+ * __assign_bit(start + bit, bitmap, val & BIT(bit));
+ *
+ * Note: callers on 32-bit systems must be careful to not attempt writing more
+ * than sizeof(unsigned long).
+ */
+static inline void bitmap_write(unsigned long *map,
+ unsigned long value,
+ unsigned long start, unsigned long nbits)
+{
+ size_t index;
+ unsigned long offset;
+ unsigned long space;
+ unsigned long mask;
+ bool fit;
+
+ if (unlikely(!nbits || nbits > BITS_PER_LONG))
+ return;
+
+ mask = BITMAP_LAST_WORD_MASK(nbits);
+ value &= mask;
+ offset = start % BITS_PER_LONG;
+ space = BITS_PER_LONG - offset;
+ fit = space >= nbits;
+ index = BIT_WORD(start);
+
+ map[index] &= (fit ? (~(mask << offset)) : ~BITMAP_FIRST_WORD_MASK(start));
+ map[index] |= value << offset;
+ if (fit)
+ return;
+
+ map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits);
+ map[index + 1] |= (value >> space);
+}
+
#endif /* __ASSEMBLY__ */

#endif /* __LINUX_BITMAP_H */
--
2.42.0.758.gaed0368e0e-goog


2023-10-25 08:38:49

by Alexander Potapenko

Subject: [PATCH v9 2/2] lib/test_bitmap: add tests for bitmap_{read,write}()

Add basic tests ensuring that values can be written at arbitrary positions in
the bitmap, including those spanning into the adjacent unsigned longs.
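
The cross-word cases boil down to checks of roughly this shape (a sketch with
made-up values, using the expect_eq_ulong() helper added below):

	DECLARE_BITMAP(map, 128);

	bitmap_zero(map, 128);
	/* Bits BITS_PER_LONG-4 .. BITS_PER_LONG+3 span two unsigned longs. */
	bitmap_write(map, 0xabUL, BITS_PER_LONG - 4, 8);
	expect_eq_ulong(0xabUL, bitmap_read(map, BITS_PER_LONG - 4, 8));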

Two new performance tests, test_bitmap_read_perf() and
test_bitmap_write_perf(), can be used to assess future performance
improvements of bitmap_read() and bitmap_write():

[ 0.431119][ T1] test_bitmap: Time spent in test_bitmap_read_perf: 615253
[ 0.433197][ T1] test_bitmap: Time spent in test_bitmap_write_perf: 916313

(numbers from an Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz machine running
QEMU).

Signed-off-by: Alexander Potapenko <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>

---
This patch was previously part of the "Implement MTE tag compression for
swapped pages" series
(https://lore.kernel.org/linux-arm-kernel/[email protected]/T/)

This patch was previously called
"lib/test_bitmap: add tests for bitmap_{set,get}_value()"
(https://lore.kernel.org/lkml/[email protected]/)
and
"lib/test_bitmap: add tests for bitmap_{set,get}_value_unaligned"
(https://lore.kernel.org/lkml/[email protected]/)

v9:
- use WRITE_ONCE() to prevent optimizations in test_bitmap_read_perf()
- update patch description

v8:
- as requested by Andy Shevchenko, add tests for reading/writing
sizes > BITS_PER_LONG

v7:
- as requested by Yury Norov, add performance tests for bitmap_read()
and bitmap_write()

v6:
- use bitmap API to initialize test bitmaps
- as requested by Yury Norov, do not check the return value of
bitmap_read(..., 0)
- fix a compiler warning on 32-bit systems

v5:
- update patch title
- address Yury Norov's comments:
- rename the test cases
- factor out test_bitmap_write_helper() to test writing over
different background patterns;
- add a test case copying a nontrivial value bit-by-bit;
- drop volatile

v4:
- Address comments by Andy Shevchenko: added Reviewed-by: and a link to
the previous discussion
- Address comments by Yury Norov:
- expand the bitmap to catch more corner cases
- add code testing that bitmap_set_value() does not touch adjacent
bits
- add code testing the nbits==0 case
- rename bitmap_{get,set}_value() to bitmap_{read,write}()

v3:
- switch to using bitmap_{set,get}_value()
- change the expected bit pattern in test_set_get_value(),
as the test was incorrectly assuming 0 is the LSB.
---
lib/test_bitmap.c | 177 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 177 insertions(+)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index f2ea9f30c7c5d..a4195c7376840 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -71,6 +71,17 @@ __check_eq_uint(const char *srcfile, unsigned int line,
return true;
}

+static bool __init
+__check_eq_ulong(const char *srcfile, unsigned int line,
+ const unsigned long exp_ulong, unsigned long x)
+{
+ if (exp_ulong != x) {
+ pr_err("[%s:%u] expected %lu, got %lu\n",
+ srcfile, line, exp_ulong, x);
+ return false;
+ }
+ return true;
+}

static bool __init
__check_eq_bitmap(const char *srcfile, unsigned int line,
@@ -186,6 +197,7 @@ __check_eq_str(const char *srcfile, unsigned int line,
})

#define expect_eq_uint(...) __expect_eq(uint, ##__VA_ARGS__)
+#define expect_eq_ulong(...) __expect_eq(ulong, ##__VA_ARGS__)
#define expect_eq_bitmap(...) __expect_eq(bitmap, ##__VA_ARGS__)
#define expect_eq_pbl(...) __expect_eq(pbl, ##__VA_ARGS__)
#define expect_eq_u32_array(...) __expect_eq(u32_array, ##__VA_ARGS__)
@@ -1222,6 +1234,168 @@ static void __init test_bitmap_const_eval(void)
BUILD_BUG_ON(~var != ~BIT(25));
}

+/*
+ * Test bitmap should be big enough to include the cases when start is not in
+ * the first word, and start+nbits lands in the following word.
+ */
+#define TEST_BIT_LEN (1000)
+
+/*
+ * Helper function to test bitmap_write() overwriting the chosen byte pattern.
+ */
+static void __init test_bitmap_write_helper(const char *pattern)
+{
+ DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+ DECLARE_BITMAP(exp_bitmap, TEST_BIT_LEN);
+ DECLARE_BITMAP(pat_bitmap, TEST_BIT_LEN);
+ unsigned long w, r, bit;
+ int i, n, nbits;
+
+ /*
+ * Only parse the pattern once and store the result in the intermediate
+ * bitmap.
+ */
+ bitmap_parselist(pattern, pat_bitmap, TEST_BIT_LEN);
+
+ /*
+ * Check that writing a single bit does not accidentally touch the
+ * adjacent bits.
+ */
+ for (i = 0; i < TEST_BIT_LEN; i++) {
+ bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+ bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+ for (bit = 0; bit <= 1; bit++) {
+ bitmap_write(bitmap, bit, i, 1);
+ __assign_bit(i, exp_bitmap, bit);
+ expect_eq_bitmap(exp_bitmap, bitmap,
+ TEST_BIT_LEN);
+ }
+ }
+
+ /* Ensure writing 0 bits does not change anything. */
+ bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+ bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+ for (i = 0; i < TEST_BIT_LEN; i++) {
+ bitmap_write(bitmap, ~0UL, i, 0);
+ expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+ }
+
+ for (nbits = BITS_PER_LONG; nbits >= 1; nbits--) {
+ w = IS_ENABLED(CONFIG_64BIT) ? 0xdeadbeefdeadbeefUL
+ : 0xdeadbeefUL;
+ w >>= (BITS_PER_LONG - nbits);
+ for (i = 0; i <= TEST_BIT_LEN - nbits; i++) {
+ bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN);
+ bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN);
+ for (n = 0; n < nbits; n++)
+ __assign_bit(i + n, exp_bitmap, w & BIT(n));
+ bitmap_write(bitmap, w, i, nbits);
+ expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN);
+ r = bitmap_read(bitmap, i, nbits);
+ expect_eq_ulong(r, w);
+ }
+ }
+}
+
+static void __init test_bitmap_read_write(void)
+{
+ unsigned char *pattern[3] = {"", "all:1/2", "all"};
+ DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+ unsigned long zero_bits = 0, bits_per_long = BITS_PER_LONG;
+ unsigned long val;
+ int i, pi;
+
+ /*
+ * Reading/writing zero bits should not crash the kernel.
+ * READ_ONCE() prevents constant folding.
+ */
+ bitmap_write(NULL, 0, 0, READ_ONCE(zero_bits));
+ /* Return value of bitmap_read() is undefined here. */
+ bitmap_read(NULL, 0, READ_ONCE(zero_bits));
+
+ /*
+ * Reading/writing more than BITS_PER_LONG bits should not crash the
+ * kernel. READ_ONCE() prevents constant folding.
+ */
+ bitmap_write(NULL, 0, 0, READ_ONCE(bits_per_long) + 1);
+ /* Return value of bitmap_read() is undefined here. */
+ bitmap_read(NULL, 0, READ_ONCE(bits_per_long) + 1);
+
+ /*
+ * Ensure that bitmap_read() reads the same value that was previously
+ * written, and two consecutive values are correctly merged.
+ * The resulting bit pattern is asymmetric to rule out possible issues
+ * with bit numeration order.
+ */
+ for (i = 0; i < TEST_BIT_LEN - 7; i++) {
+ bitmap_zero(bitmap, TEST_BIT_LEN);
+
+ bitmap_write(bitmap, 0b10101UL, i, 5);
+ val = bitmap_read(bitmap, i, 5);
+ expect_eq_ulong(0b10101UL, val);
+
+ bitmap_write(bitmap, 0b101UL, i + 5, 3);
+ val = bitmap_read(bitmap, i + 5, 3);
+ expect_eq_ulong(0b101UL, val);
+
+ val = bitmap_read(bitmap, i, 8);
+ expect_eq_ulong(0b10110101UL, val);
+ }
+
+ for (pi = 0; pi < ARRAY_SIZE(pattern); pi++)
+ test_bitmap_write_helper(pattern[pi]);
+}
+
+static void __init test_bitmap_read_perf(void)
+{
+ DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+ unsigned int cnt, nbits, i;
+ unsigned long val;
+ ktime_t time;
+
+ bitmap_fill(bitmap, TEST_BIT_LEN);
+ time = ktime_get();
+ for (cnt = 0; cnt < 5; cnt++) {
+ for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+ for (i = 0; i < TEST_BIT_LEN; i++) {
+ if (i + nbits > TEST_BIT_LEN)
+ break;
+ /*
+ * Prevent the compiler from optimizing away the
+ * bitmap_read() by using its value.
+ */
+ WRITE_ONCE(val, bitmap_read(bitmap, i, nbits));
+ }
+ }
+ }
+ time = ktime_get() - time;
+ pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+static void __init test_bitmap_write_perf(void)
+{
+ DECLARE_BITMAP(bitmap, TEST_BIT_LEN);
+ unsigned int cnt, nbits, i;
+ unsigned long val = 0xfeedface;
+ ktime_t time;
+
+ bitmap_zero(bitmap, TEST_BIT_LEN);
+ time = ktime_get();
+ for (cnt = 0; cnt < 5; cnt++) {
+ for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) {
+ for (i = 0; i < TEST_BIT_LEN; i++) {
+ if (i + nbits > TEST_BIT_LEN)
+ break;
+ bitmap_write(bitmap, val, i, nbits);
+ }
+ }
+ }
+ time = ktime_get() - time;
+ pr_err("Time spent in %s:\t%llu\n", __func__, time);
+}
+
+#undef TEST_BIT_LEN
+
static void __init selftest(void)
{
test_zero_clear();
@@ -1237,6 +1411,9 @@ static void __init selftest(void)
test_bitmap_cut();
test_bitmap_print_buf();
test_bitmap_const_eval();
+ test_bitmap_read_write();
+ test_bitmap_read_perf();
+ test_bitmap_write_perf();

test_find_nth_bit();
test_for_each_set_bit();
--
2.42.0.758.gaed0368e0e-goog

2023-10-26 12:16:22

by Andy Shevchenko

Subject: Re: [PATCH v9 1/2] lib/bitmap: add bitmap_{read,write}()

On Wed, Oct 25, 2023 at 10:38:11AM +0200, Alexander Potapenko wrote:
> From: Syed Nayyar Waris <[email protected]>
>
> The two new functions allow reading/writing values of up to BITS_PER_LONG
> bits at an arbitrary position in the bitmap.
>
> The code was taken from "bitops: Introduce the for_each_set_clump macro"
> by Syed Nayyar Waris with a number of changes and simplifications:
> - instead of using roundup(), which adds an unnecessary dependency
> on <linux/math.h>, we calculate space as BITS_PER_LONG-offset;
> - indentation is reduced by not using else-clauses (suggested by
> checkpatch for bitmap_get_value());
> - bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read()
> and bitmap_write();
> - some redundant computations are omitted.

...

> * bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst
> * bitmap_get_value8(map, start) Get 8bit value from map at start
> * bitmap_set_value8(map, value, start) Set 8bit value to map at start
> + * bitmap_read(map, start, nbits) Read an nbits-sized value from
> + * map at start
> + * bitmap_write(map, value, start, nbits) Write an nbits-sized value to
> + * map at start

I still didn't get the grouping you implied with this...

> * Note, bitmap_zero() and bitmap_fill() operate over the region of
> * unsigned longs, that is, bits behind bitmap till the unsigned long

...

> +/**
> + * bitmap_read - read a value of n-bits from the memory region
> + * @map: address to the bitmap memory region
> + * @start: bit offset of the n-bit value
> + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
> + *
> + * Returns: value of nbits located at the @start bit offset within the @map
> + * memory region.
> + *
> + * Note: callers on 32-bit systems must be careful to not attempt reading more
> + * than sizeof(unsigned long).

sizeof() here is misleading. We talk about bits, i.e. BITS_PER_LONG (which is
32 there); here it's better to be explicit that reading more than 32 bits at a
time on a 32-bit platform will return 0. Actually what you need is to describe...

> + */
> +static inline unsigned long bitmap_read(const unsigned long *map,
> + unsigned long start,
> + unsigned long nbits)
> +{
> + size_t index = BIT_WORD(start);
> + unsigned long offset = start % BITS_PER_LONG;
> + unsigned long space = BITS_PER_LONG - offset;
> + unsigned long value_low, value_high;
> +
> + if (unlikely(!nbits || nbits > BITS_PER_LONG))
> + return 0;

...this return in the Return section.

> +
> + if (space >= nbits)
> + return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits);
> +
> + value_low = map[index] & BITMAP_FIRST_WORD_MASK(start);
> + value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits);
> + return (value_low >> offset) | (value_high << space);
> +}

...

> +/**
> + * bitmap_write - write n-bit value within a memory region
> + * @map: address to the bitmap memory region
> + * @value: value to write, clamped to nbits
> + * @start: bit offset of the n-bit value
> + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG.
> + *
> + * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(),
> + * i.e. bits beyond @nbits are ignored:
> + *
> + * for (bit = 0; bit < nbits; bit++)
> + * __assign_bit(start + bit, bitmap, val & BIT(bit));

> + * Note: callers on 32-bit systems must be careful to not attempt writing more
> + * than sizeof(unsigned long).

Ditto.

> + */

--
With Best Regards,
Andy Shevchenko


2023-10-26 12:17:11

by Andy Shevchenko

Subject: Re: [PATCH v9 1/2] lib/bitmap: add bitmap_{read,write}()

On Thu, Oct 26, 2023 at 03:15:40PM +0300, Andy Shevchenko wrote:
> On Wed, Oct 25, 2023 at 10:38:11AM +0200, Alexander Potapenko wrote:

...

> > * bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst
> > * bitmap_get_value8(map, start) Get 8bit value from map at start
> > * bitmap_set_value8(map, value, start) Set 8bit value to map at start
> > + * bitmap_read(map, start, nbits) Read an nbits-sized value from
> > + * map at start
> > + * bitmap_write(map, value, start, nbits) Write an nbits-sized value to
> > + * map at start
>
> I still didn't get the grouping you implied with this...

Ah, sorry, I hadn't noticed the descriptions are two-liners.

--
With Best Regards,
Andy Shevchenko


2023-10-26 13:29:17

by Alexander Potapenko

Subject: Re: [PATCH v9 1/2] lib/bitmap: add bitmap_{read,write}()

> > * bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst
> > * bitmap_get_value8(map, start) Get 8bit value from map at start
> > * bitmap_set_value8(map, value, start) Set 8bit value to map at start
> > + * bitmap_read(map, start, nbits) Read an nbits-sized value from
> > + * map at start
> > + * bitmap_write(map, value, start, nbits) Write an nbits-sized value to
> > + * map at start
>
> I still didn't get the grouping you implied with this...

Given your second reply, I can disregard this one, right?

>
> > * Note, bitmap_zero() and bitmap_fill() operate over the region of
> > * unsigned longs, that is, bits behind bitmap till the unsigned long
>
> ...
>
> > +/**
> > + * bitmap_read - read a value of n-bits from the memory region
> > + * @map: address to the bitmap memory region
> > + * @start: bit offset of the n-bit value
> > + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG
> > + *
> > + * Returns: value of nbits located at the @start bit offset within the @map
> > + * memory region.
> > + *
> > + * Note: callers on 32-bit systems must be careful to not attempt reading more
> > + * than sizeof(unsigned long).
>
> sizeof() here is misleading. We talk about bits, i.e. BITS_PER_LONG (which is
> 32 there); here it's better to be explicit that reading more than 32 bits at a
> time on a 32-bit platform will return 0. Actually what you need is to describe...
>
> > + */
> > +static inline unsigned long bitmap_read(const unsigned long *map,
> > + unsigned long start,
> > + unsigned long nbits)
> > +{
> > + size_t index = BIT_WORD(start);
> > + unsigned long offset = start % BITS_PER_LONG;
> > + unsigned long space = BITS_PER_LONG - offset;
> > + unsigned long value_low, value_high;
> > +
> > + if (unlikely(!nbits || nbits > BITS_PER_LONG))
> > + return 0;
>
> ...this return in the Return section.
>

The parameter description for @nbits already mentions BITS_PER_LONG, so would
it be OK to drop the Note: and extend the Returns: section as follows:

+ * Returns: value of nbits located at the @start bit offset within the @map
+ * memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return value
+ * is undefined.

?

(Yury didn't want the zero return value to be part of the contract here).

2023-10-26 13:34:47

by Andy Shevchenko

Subject: Re: [PATCH v9 1/2] lib/bitmap: add bitmap_{read,write}()

On Thu, Oct 26, 2023 at 03:28:25PM +0200, Alexander Potapenko wrote:

...

> > I still didn't get the grouping you implied with this...
>
> Given your second reply, I can disregard this one, right?

Yes.

...

> > > + if (unlikely(!nbits || nbits > BITS_PER_LONG))
> > > + return 0;
> >
> > ...this return in the Return section.
> >
>
> Parameter description for @nbits actually mentions BITS_PER_LONG
> already, so would it be ok to drop the Note: and extend the Returns:
> section as follows:
>
> + * Returns: value of nbits located at the @start bit offset within the @map
> + * memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return value
> + * is undefined.
>
> ?

Fine with me.

> (Yury didn't want the zero return value to be part of the contract here).


--
With Best Regards,
Andy Shevchenko