__int128 is not supported on some 32-bit platforms (arm and i386).
__int128 was used to carry out computations on bitfields which
aid display, but the same calculations can be done with __u64,
at the small cost of not supporting 128-bit bitfields.

With these changes, a big-endian issue with casting 128-bit integers
to 64-bit for enum bitfields is also solved, as we now use 64-bit
integers for bitfield calculations.

Reported-by: Naresh Kamboju <[email protected]>
Reported-by: Linux Kernel Functional Testing <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
---
tools/lib/bpf/btf_dump.c | 62 +++++++++++++++++++++++++++---------------------
1 file changed, 35 insertions(+), 27 deletions(-)
diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
index accf6fe..4a25512 100644
--- a/tools/lib/bpf/btf_dump.c
+++ b/tools/lib/bpf/btf_dump.c
@@ -1552,28 +1552,15 @@ static int btf_dump_unsupported_data(struct btf_dump *d,
return -ENOTSUP;
}
-static void btf_dump_int128(struct btf_dump *d,
- const struct btf_type *t,
- const void *data)
-{
- __int128 num = *(__int128 *)data;
-
- if ((num >> 64) == 0)
- btf_dump_type_values(d, "0x%llx", (long long)num);
- else
- btf_dump_type_values(d, "0x%llx%016llx", (long long)num >> 32,
- (long long)num);
-}
-
-static unsigned __int128 btf_dump_bitfield_get_data(struct btf_dump *d,
- const struct btf_type *t,
- const void *data,
- __u8 bits_offset,
- __u8 bit_sz)
+static __u64 btf_dump_bitfield_get_data(struct btf_dump *d,
+ const struct btf_type *t,
+ const void *data,
+ __u8 bits_offset,
+ __u8 bit_sz)
{
__u16 left_shift_bits, right_shift_bits;
__u8 nr_copy_bits, nr_copy_bytes;
- unsigned __int128 num = 0, ret;
+ __u64 num = 0, ret;
const __u8 *bytes = data;
int i;
@@ -1591,8 +1578,8 @@ static unsigned __int128 btf_dump_bitfield_get_data(struct btf_dump *d,
#else
# error "Unrecognized __BYTE_ORDER__"
#endif
- left_shift_bits = 128 - nr_copy_bits;
- right_shift_bits = 128 - bit_sz;
+ left_shift_bits = 64 - nr_copy_bits;
+ right_shift_bits = 64 - bit_sz;
ret = (num << left_shift_bits) >> right_shift_bits;
@@ -1605,7 +1592,7 @@ static int btf_dump_bitfield_check_zero(struct btf_dump *d,
__u8 bits_offset,
__u8 bit_sz)
{
- __int128 check_num;
+ __u64 check_num;
check_num = btf_dump_bitfield_get_data(d, t, data, bits_offset, bit_sz);
if (check_num == 0)
@@ -1619,10 +1606,11 @@ static int btf_dump_bitfield_data(struct btf_dump *d,
__u8 bits_offset,
__u8 bit_sz)
{
- unsigned __int128 print_num;
+ __u64 print_num;
print_num = btf_dump_bitfield_get_data(d, t, data, bits_offset, bit_sz);
- btf_dump_int128(d, t, &print_num);
+
+ btf_dump_type_values(d, "0x%llx", (unsigned long long)print_num);
return 0;
}
@@ -1681,9 +1669,29 @@ static int btf_dump_int_data(struct btf_dump *d,
return btf_dump_bitfield_data(d, t, data, 0, 0);
switch (sz) {
- case 16:
- btf_dump_int128(d, t, data);
+ case 16: {
+ const __u64 *ints = data;
+ __u64 lsi, msi;
+
+ /* avoid use of __int128 as some 32-bit platforms do not
+ * support it.
+ */
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+ lsi = ints[0];
+ msi = ints[1];
+#elif __BYTE_ORDER == __BIG_ENDIAN
+ lsi = ints[1];
+ msi = ints[0];
+#else
+# error "Unrecognized __BYTE_ORDER__"
+#endif
+ if (msi == 0)
+ btf_dump_type_values(d, "0x%llx", (unsigned long long)lsi);
+ else
+ btf_dump_type_values(d, "0x%llx%016llx", (unsigned long long)msi,
+ (unsigned long long)lsi);
break;
+ }
case 8:
if (sign)
btf_dump_type_values(d, "%lld", *(long long *)data);
@@ -2209,7 +2217,7 @@ static int btf_dump_dump_type_data(struct btf_dump *d,
case BTF_KIND_ENUM:
/* handle bitfield and int enum values */
if (bit_sz) {
- unsigned __int128 print_num;
+ __u64 print_num;
__s64 enum_val;
print_num = btf_dump_bitfield_get_data(d, t, data, bits_offset, bit_sz);
--
1.8.3.1
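For reference, a minimal standalone sketch of the two-__u64 split that the
"case 16" branch above uses to print a 128-bit integer without relying on
__int128 (illustration only; print_u128_hex and the variable names are made
up and not part of the patch):

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

/* print a 128-bit integer stored as two native-endian 64-bit words */
static void print_u128_hex(const uint64_t *p)
{
        uint64_t lsi, msi;

#if __BYTE_ORDER == __LITTLE_ENDIAN
        lsi = p[0];     /* least significant 64 bits are stored first */
        msi = p[1];
#else
        lsi = p[1];     /* big endian: most significant 64 bits first */
        msi = p[0];
#endif
        if (msi == 0)
                printf("0x%llx\n", (unsigned long long)lsi);
        else
                printf("0x%llx%016llx\n", (unsigned long long)msi,
                       (unsigned long long)lsi);
}

int main(void)
{
        uint64_t v[2];

#if __BYTE_ORDER == __LITTLE_ENDIAN
        v[0] = 0x2; v[1] = 0x1;         /* the value 0x10000000000000002 */
#else
        v[0] = 0x1; v[1] = 0x2;
#endif
        print_u128_hex(v);              /* prints 0x10000000000000002 */
        return 0;
}

Printing the low half with %016llx keeps its leading zeros, so the two words
concatenate into the correct 128-bit hex value.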
On Mon, Jul 19, 2021 at 2:41 PM Alan Maguire <[email protected]> wrote:
>
> __int128 is not supported on some 32-bit platforms (arm and i386).
> __int128 was used to carry out computations on bitfields which
> aid display, but the same calculations can be done with __u64,
> at the small cost of not supporting 128-bit bitfields.
>
> With these changes, a big-endian issue with casting 128-bit integers
> to 64-bit for enum bitfields is also solved, as we now use 64-bit
> integers for bitfield calculations.
>
> Reported-by: Naresh Kamboju <[email protected]>
> Reported-by: Linux Kernel Functional Testing <[email protected]>
> Signed-off-by: Alan Maguire <[email protected]>
> ---
Changes look good to me, thanks. But they didn't appear in patchwork
yet, so I can't easily test and apply them. It might be because of a
patchwork delay or due to a very long CC list. Try trimming the cc
list down and re-submitting?
Also, while I agree that supporting 128-bit bitfields isn't important,
I wonder if we should warn/error on that (instead of shifting by a
negative amount and reporting some garbage value). What do you think?
Is there one place in the code where we can error out early if the
type actually has a bitfield with > 64 bits? I'd prefer to keep
btf_dump_bitfield_get_data() itself non-failing though.
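To make that concrete, I mean a single check roughly like the one below,
done early in the dump path (just a sketch; the placement, message and
error code are placeholders, not a concrete proposal):

        if (bit_sz > 64) {
                /* with __u64 the shift amounts (64 - nr_copy_bits and
                 * 64 - bit_sz) would go negative here, so warn and bail
                 * out instead of reporting a garbage value
                 */
                pr_warn("unexpected bitfield size %d\n", bit_sz);
                return -EINVAL;
        }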
> tools/lib/bpf/btf_dump.c | 62 +++++++++++++++++++++++++++---------------------
> 1 file changed, 35 insertions(+), 27 deletions(-)
>
[...]
On Mon, 19 Jul 2021, Andrii Nakryiko wrote:
> On Mon, Jul 19, 2021 at 2:41 PM Alan Maguire <[email protected]> wrote:
> >
> > __int128 is not supported on some 32-bit platforms (arm and i386).
> > __int128 was used to carry out computations on bitfields which
> > aid display, but the same calculations can be done with __u64,
> > at the small cost of not supporting 128-bit bitfields.
> >
> > With these changes, a big-endian issue with casting 128-bit integers
> > to 64-bit for enum bitfields is also solved, as we now use 64-bit
> > integers for bitfield calculations.
> >
> > Reported-by: Naresh Kamboju <[email protected]>
> > Reported-by: Linux Kernel Functional Testing <[email protected]>
> > Signed-off-by: Alan Maguire <[email protected]>
> > ---
>
> Changes look good to me, thanks. But they didn't appear in patchwork
> yet, so I can't easily test and apply them. It might be because of a
> patchwork delay or due to a very long CC list. Try trimming the cc
> list down and re-submitting?
>
Done, looks like the v2 with the trimmed cc list made it into patchwork
this time.
> Also, while I agree that supporting 128-bit bitfields isn't important,
> I wonder if we should warn/error on that (instead of shifting by a
> negative amount and reporting some garbage value). What do you think?
> Is there one place in the code where we can error out early if the
> type actually has a bitfield with > 64 bits? I'd prefer to keep
> btf_dump_bitfield_get_data() itself non-failing though.
>
Sorry, I missed the last part and made that function fail, since
it's probably the easiest place to capture too-large bitfields.
I renamed it to btf_dump_get_bitfield_value() to match
btf_dump_get_enum_value(), which has a similar function signature
(return int, pass in a pointer to the value we want to retrieve).

We can't localize bitfield size checking to
btf_dump_type_data_check_zero() because - depending on flags -
the associated checks might not be carried out. So duplication
of bitfield size checks between the zero checking and bitfield/enum
bitfield display seems inevitable, and that being the case, the
extra error handling around btf_dump_get_bitfield_value()
seems necessary.
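For illustration, the overall shape is roughly the standalone sketch below
(simplified and little-endian only; apart from the names mentioned above,
the identifiers are invented and this is not the actual libbpf code):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* return 0 and fill *value on success, or a negative error for
 * bitfields we cannot represent in a 64-bit result
 */
static int get_bitfield_value(const void *data, size_t byte_sz,
                              uint8_t bits_offset, uint8_t bit_sz,
                              uint64_t *value)
{
        uint64_t num = 0;

        /* bitfields wider than 64 bits are rejected up front */
        if (bit_sz > 64 || byte_sz > sizeof(num))
                return -E2BIG;

        memcpy(&num, data, byte_sz);            /* little-endian load */
        num >>= bits_offset;                    /* drop preceding bits */
        if (bit_sz < 64)
                num &= (1ULL << bit_sz) - 1;    /* mask to bit_sz bits */
        *value = num;
        return 0;
}

int main(void)
{
        /* like struct { unsigned a:3, b:5; } x = { .a = 5, .b = 17 }; */
        uint32_t storage = 5u | (17u << 3);
        uint64_t val;

        if (!get_bitfield_value(&storage, sizeof(storage), 3, 5, &val))
                printf("0x%llx\n", (unsigned long long)val);    /* 0x11 */
        return 0;
}

Both the display path and the zero-check path then have to test the return
value before using *value, which is the duplication mentioned above.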
I might be missing a better approach here of course; let me know what you
think. Thanks again!
Alan
On Tue, Jul 20, 2021 at 2:14 AM Alan Maguire <[email protected]> wrote:
>
> On Mon, 19 Jul 2021, Andrii Nakryiko wrote:
>
> > On Mon, Jul 19, 2021 at 2:41 PM Alan Maguire <[email protected]> wrote:
> > >
> > > __int128 is not supported on some 32-bit platforms (arm and i386).
> > > __int128 was used to carry out computations on bitfields which
> > > aid display, but the same calculations can be done with __u64,
> > > at the small cost of not supporting 128-bit bitfields.
> > >
> > > With these changes, a big-endian issue with casting 128-bit integers
> > > to 64-bit for enum bitfields is also solved, as we now use 64-bit
> > > integers for bitfield calculations.
> > >
> > > Reported-by: Naresh Kamboju <[email protected]>
> > > Reported-by: Linux Kernel Functional Testing <[email protected]>
> > > Signed-off-by: Alan Maguire <[email protected]>
> > > ---
> >
> > Changes look good to me, thanks. But they didn't appear in patchwork
> > yet, so I can't easily test and apply them. It might be because of a
> > patchwork delay or due to a very long CC list. Try trimming the cc
> > list down and re-submitting?
> >
>
> Done, looks like the v2 with the trimmed cc list made it into patchwork
> this time.
v1 also made it to the list right after I wrote the email :)
>
> > Also, while I agree that supporting 128-bit bitfields isn't important,
> > I wonder if we should warn/error on that (instead of shifting by a
> > negative amount and reporting some garbage value). What do you think?
> > Is there one place in the code where we can error out early if the
> > type actually has a bitfield with > 64 bits? I'd prefer to keep
> > btf_dump_bitfield_get_data() itself non-failing though.
> >
>
> Sorry, I missed the last part and made that function fail, since
> it's probably the easiest place to capture too-large bitfields.
> I renamed it to btf_dump_get_bitfield_value() to match
> btf_dump_get_enum_value(), which has a similar function signature
> (return int, pass in a pointer to the value we want to retrieve).
>
> We can't localize bitfield size checking to
> btf_dump_type_data_check_zero() because - depending on flags -
> the associated checks might not be carried out. So duplication
> of bitfield size checks between the zero checking and bitfield/enum
> bitfield display seems inevitable, and that being the case, the
> extra error handling around btf_dump_get_bitfield_value()
> seems necessary.
>
> I might be missing a better approach here of course; let me know what you
> think. Thanks again!
Nah, that's fine. Looks good. Testing and pushing in a few minutes. Thanks.
>
> Alan