Commit 2d6261583be0 ("lib: rework bitmap_parse()") does not
take into account the order of halfwords on 64-bit big-endian
architectures. As a result, (at least) Receive Packet Steering,
IRQ affinity masks and the runtime kernel test "test_bitmap"
are broken on s390.
Fixes: 2d6261583be0 ("lib: rework bitmap_parse()")
Cc: [email protected]
Cc: Andrew Morton <[email protected]>
Cc: Yury Norov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Amritha Nambiar <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Chris Wilson <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Miklos Szeredi <[email protected]>
Cc: Rasmus Villemoes <[email protected]>
Cc: Steffen Klassert <[email protected]>
Cc: "Tobin C . Harding" <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Willem de Bruijn <[email protected]>
Signed-off-by: Alexander Gordeev <[email protected]>
---
lib/bitmap.c | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/lib/bitmap.c b/lib/bitmap.c
index 89260aa342d6..a725e4612984 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -717,6 +717,19 @@ static const char *bitmap_get_x32_reverse(const char *start,
return end;
}
+#if defined(__BIG_ENDIAN) && defined(CONFIG_64BIT)
+static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
+{
+ maskp += (chunk_idx / 2);
+ ((u32 *)maskp)[(chunk_idx & 1) ^ 1] = chunk;
+}
+#else
+static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
+{
+ ((u32 *)maskp)[chunk_idx] = chunk;
+}
+#endif
+
/**
* bitmap_parse - convert an ASCII hex string into a bitmap.
* @start: pointer to buffer containing string.
@@ -738,7 +751,8 @@ int bitmap_parse(const char *start, unsigned int buflen,
{
const char *end = strnchrnul(start, buflen, '\n') - 1;
int chunks = BITS_TO_U32(nmaskbits);
- u32 *bitmap = (u32 *)maskp;
+ int chunk_idx = 0;
+ u32 chunk;
int unset_bit;
while (1) {
@@ -749,9 +763,11 @@ int bitmap_parse(const char *start, unsigned int buflen,
if (!chunks--)
return -EOVERFLOW;
- end = bitmap_get_x32_reverse(start, end, bitmap++);
+ end = bitmap_get_x32_reverse(start, end, &chunk);
if (IS_ERR(end))
return PTR_ERR(end);
+
+ save_x32_chunk(maskp, chunk, chunk_idx++);
}
unset_bit = (BITS_TO_U32(nmaskbits) - chunks) * 32;
--
2.23.0
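For illustration, a minimal userspace sketch (hypothetical, not part of the
patch; it only assumes a 64-bit big-endian host such as s390x) of why storing
32-bit chunks through a u32 pointer lands them in the wrong half of an
unsigned long:

/*
 * Hypothetical userspace sketch, not kernel code: writing a 32-bit chunk
 * through a u32 pointer on a 64-bit big-endian host puts it into the
 * *upper* half of the unsigned long, so bit 0 of the parsed string does
 * not end up as bit 0 of the bitmap.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned long mask = 0;
	uint32_t *p = (uint32_t *)&mask;

	p[0] = 0xdeadbeef;	/* chunk for bits 0..31, stored as the old code did */

	/*
	 * Little endian:      mask == 0x00000000deadbeef (expected)
	 * 64-bit big endian:  mask == 0xdeadbeef00000000 (bits shifted up by 32)
	 */
	printf("mask = 0x%016lx\n", mask);
	return 0;
}

The save_x32_chunk() helper in the patch compensates for this by xor-ing the
halfword index with 1 on 64-bit big-endian builds.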
On Mon, Jun 8, 2020 at 1:26 PM Alexander Gordeev <[email protected]> wrote:
>
> Commit 2d6261583be0 ("lib: rework bitmap_parse()") does not
> take into account order of halfwords on 64-bit big endian
> architectures. As result (at least) Receive Packet Steering,
> IRQ affinity masks and runtime kernel test "test_bitmap" get
> broken on s390.
...
> +#if defined(__BIG_ENDIAN) && defined(CONFIG_64BIT)
I think it's better to re-use existing patterns.
ipc/sem.c:1682:#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> +{
> + maskp += (chunk_idx / 2);
> + ((u32 *)maskp)[(chunk_idx & 1) ^ 1] = chunk;
> +}
> +#else
> +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> +{
> + ((u32 *)maskp)[chunk_idx] = chunk;
> +}
> +#endif
See below.
...
> - end = bitmap_get_x32_reverse(start, end, bitmap++);
> + end = bitmap_get_x32_reverse(start, end, &chunk);
> if (IS_ERR(end))
> return PTR_ERR(end);
> +
> + save_x32_chunk(maskp, chunk, chunk_idx++);
Can't we simply do
int chunk_index = 0;
...
do {
#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
end = bitmap_get_x32_reverse(start, end,
bitmap[chunk_index ^ 1]);
#else
end = bitmap_get_x32_reverse(start, end, bitmap[chunk_index]);
#endif
...
} while (++chunk_index);
?
--
With Best Regards,
Andy Shevchenko
On Mon, Jun 8, 2020 at 3:03 PM Andy Shevchenko
<[email protected]> wrote:
> On Mon, Jun 8, 2020 at 1:26 PM Alexander Gordeev <[email protected]> wrote:
...
> Can't we simple do
>
> int chunk_index = 0;
> ...
> do {
> #if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> end = bitmap_get_x32_reverse(start, end,
> bitmap[chunk_index ^ 1]);
> #else
> end = bitmap_get_x32_reverse(start, end, bitmap[chunk_index]);
> #endif
> ...
> } while (++chunk_index);
>
> ?
And moreover, we can simply replace bitmap with maskp here and drop it
from the definition block.
--
With Best Regards,
Andy Shevchenko
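A rough userspace model (hypothetical and untested, not the applied kernel
change) of the index trick both variants rely on: xor-ing the 32-bit chunk
index with 1 on 64-bit big endian stores each chunk into the halfword that
actually holds the corresponding low/high 32 bits of the bitmap word.
store_chunk() and the host-endianness macros here are illustration only:

/*
 * Hypothetical userspace model of the endian-aware chunk index.
 * Assumes an 8-byte unsigned long; on 64-bit big endian the two u32
 * halves of each word are swapped, so idx ^ 1 selects the halfword
 * that corresponds to the low/high 32 bits of the bitmap word.
 */
#include <stdint.h>
#include <stdio.h>

static void store_chunk(unsigned long *maskp, uint32_t chunk, int idx)
{
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
	((uint32_t *)maskp)[idx ^ 1] = chunk;	/* swap halfwords */
#else
	((uint32_t *)maskp)[idx] = chunk;
#endif
}

int main(void)
{
	unsigned long mask[2] = { 0, 0 };

	/* Chunks as bitmap_parse() would produce for "1234abcd,deadbeef". */
	store_chunk(mask, 0xdeadbeef, 0);	/* bits  0..31 */
	store_chunk(mask, 0x1234abcd, 1);	/* bits 32..63 */

	/* Expect 0x1234abcddeadbeef on both little and big endian (64-bit). */
	printf("mask[0] = 0x%016lx\n", mask[0]);
	return 0;
}

gcc and clang predefine __BYTE_ORDER__/__ORDER_BIG_ENDIAN__; in-kernel code
would test CONFIG_64BIT and __BIG_ENDIAN instead, as the patch does.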
On Mon, Jun 08, 2020 at 03:03:05PM +0300, Andy Shevchenko wrote:
> On Mon, Jun 8, 2020 at 1:26 PM Alexander Gordeev <[email protected]> wrote:
> >
> > Commit 2d6261583be0 ("lib: rework bitmap_parse()") does not
> > take into account order of halfwords on 64-bit big endian
> > architectures. As result (at least) Receive Packet Steering,
> > IRQ affinity masks and runtime kernel test "test_bitmap" get
> > broken on s390.
>
> ...
>
> > +#if defined(__BIG_ENDIAN) && defined(CONFIG_64BIT)
>
> I think it's better to re-use existing patterns.
>
> ipc/sem.c:1682:#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
>
> > +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> > +{
> > + maskp += (chunk_idx / 2);
> > + ((u32 *)maskp)[(chunk_idx & 1) ^ 1] = chunk;
> > +}
> > +#else
> > +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> > +{
> > + ((u32 *)maskp)[chunk_idx] = chunk;
> > +}
> > +#endif
>
> See below.
>
> ...
>
> > - end = bitmap_get_x32_reverse(start, end, bitmap++);
> > + end = bitmap_get_x32_reverse(start, end, &chunk);
> > if (IS_ERR(end))
> > return PTR_ERR(end);
> > +
> > + save_x32_chunk(maskp, chunk, chunk_idx++);
>
> Can't we simple do
>
> int chunk_index = 0;
> ...
> do {
> #if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> end = bitmap_get_x32_reverse(start, end,
> bitmap[chunk_index ^ 1]);
> #else
> end = bitmap_get_x32_reverse(start, end, bitmap[chunk_index]);
> #endif
> ...
> } while (++chunk_index);
>
> ?
Well, we could, unless we ignore coding-style section "21) Conditional
Compilation". Do you still insist it would be better?
Thanks for the review!
> --
> With Best Regards,
> Andy Shevchenko
On Mon, Jun 08, 2020 at 02:44:34PM +0200, Alexander Gordeev wrote:
> On Mon, Jun 08, 2020 at 03:03:05PM +0300, Andy Shevchenko wrote:
> > On Mon, Jun 8, 2020 at 1:26 PM Alexander Gordeev <[email protected]> wrote:
> > >
> > > Commit 2d6261583be0 ("lib: rework bitmap_parse()") does not
> > > take into account order of halfwords on 64-bit big endian
> > > architectures. As result (at least) Receive Packet Steering,
> > > IRQ affinity masks and runtime kernel test "test_bitmap" get
> > > broken on s390.
> >
> > ...
> >
> > > +#if defined(__BIG_ENDIAN) && defined(CONFIG_64BIT)
> >
> > I think it's better to re-use existing patterns.
> >
> > ipc/sem.c:1682:#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> >
> > > +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> > > +{
> > > + maskp += (chunk_idx / 2);
> > > + ((u32 *)maskp)[(chunk_idx & 1) ^ 1] = chunk;
> > > +}
> > > +#else
> > > +static void save_x32_chunk(unsigned long *maskp, u32 chunk, int chunk_idx)
> > > +{
> > > + ((u32 *)maskp)[chunk_idx] = chunk;
> > > +}
> > > +#endif
> >
> > See below.
> >
> > ...
> >
> > > - end = bitmap_get_x32_reverse(start, end, bitmap++);
> > > + end = bitmap_get_x32_reverse(start, end, &chunk);
> > > if (IS_ERR(end))
> > > return PTR_ERR(end);
> > > +
> > > + save_x32_chunk(maskp, chunk, chunk_idx++);
> >
> > Can't we simple do
> >
> > int chunk_index = 0;
> > ...
> > do {
> > #if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
> > end = bitmap_get_x32_reverse(start, end,
> > bitmap[chunk_index ^ 1]);
> > #else
> > end = bitmap_get_x32_reverse(start, end, bitmap[chunk_index]);
> > #endif
> > ...
> > } while (++chunk_index);
> >
> > ?
>
> Well, unless we ignore coding style 21) Conditional Compilation
> we could. Do you still insist it would be better?
I think it's okay to do it here:
- it's not a big function
- it has no stub versions (you always do something)
- the result is pretty much readable (5 lines, which any editor can keep on screen)
- and it's not ignoring the rule, see "Wherever possible..."; compare the
  readability of the two versions: with yours, the reader needs to go somewhere
  else to read, calculate and return, by which point everything has already
  been forgotten
- last but not least, I bet it makes the code shorter (at least in C)
--
With Best Regards,
Andy Shevchenko