2022-04-16 02:28:33

by Alexander Lobakin

Subject: [PATCH bpf-next 08/11] samples: bpf: fix shifting unsigned long by 32 positions

On 32-bit systems, shifting an unsigned long by 32 positions
yields the following warning:

samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
	unsigned int hi = v >> 32;
	                    ^  ~~

The usual way to avoid this is to shift by 16 twice (see the
upper_32_bits() macro in the kernel). Use the same approach across
the BPF sample code as well.
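
For reference, a minimal sketch of the idea (the macro below is
modeled on upper_32_bits() from include/linux/kernel.h and is
reproduced here only for illustration; log2() is the helper already
defined in these samples):

#define upper_32_bits(n) ((unsigned int)(((n) >> 16) >> 16))

static unsigned int log2l(unsigned long v)
{
	/* Each shift is 16 < 32, so this is well-defined even when
	 * sizeof(long) == 4; hi is then simply 0.
	 */
	unsigned int hi = upper_32_bits(v);

	if (hi)
		return log2(hi) + 32;
	else
		return log2(v);
}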

Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
Signed-off-by: Alexander Lobakin <[email protected]>
---
samples/bpf/lathist_kern.c | 2 +-
samples/bpf/lwt_len_hist_kern.c | 2 +-
samples/bpf/tracex2_kern.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/samples/bpf/lathist_kern.c b/samples/bpf/lathist_kern.c
index 4adfcbbe6ef4..9744ed547abe 100644
--- a/samples/bpf/lathist_kern.c
+++ b/samples/bpf/lathist_kern.c
@@ -53,7 +53,7 @@ static unsigned int log2(unsigned int v)
 
 static unsigned int log2l(unsigned long v)
 {
-	unsigned int hi = v >> 32;
+	unsigned int hi = (v >> 16) >> 16;
 
 	if (hi)
 		return log2(hi) + 32;
diff --git a/samples/bpf/lwt_len_hist_kern.c b/samples/bpf/lwt_len_hist_kern.c
index 1fa14c54963a..bf32fa04c91f 100644
--- a/samples/bpf/lwt_len_hist_kern.c
+++ b/samples/bpf/lwt_len_hist_kern.c
@@ -49,7 +49,7 @@ static unsigned int log2(unsigned int v)
 
 static unsigned int log2l(unsigned long v)
 {
-	unsigned int hi = v >> 32;
+	unsigned int hi = (v >> 16) >> 16;
 	if (hi)
 		return log2(hi) + 32;
 	else
diff --git a/samples/bpf/tracex2_kern.c b/samples/bpf/tracex2_kern.c
index 5bc696bac27d..6bf22056ff95 100644
--- a/samples/bpf/tracex2_kern.c
+++ b/samples/bpf/tracex2_kern.c
@@ -57,7 +57,7 @@ static unsigned int log2(unsigned int v)
 
 static unsigned int log2l(unsigned long v)
 {
-	unsigned int hi = v >> 32;
+	unsigned int hi = (v >> 16) >> 16;
 	if (hi)
 		return log2(hi) + 32;
 	else
--
2.35.2



2022-04-22 20:08:13

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next 08/11] samples: bpf: fix shifting unsigned long by 32 positions

On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <[email protected]> wrote:
>
> On 32-bit systems, shifting an unsigned long by 32 positions
> yields the following warning:
>
> samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
> 	unsigned int hi = v >> 32;
> 	                    ^  ~~
>

long is always 64-bit in BPF, but I suspect this is due to
samples/bpf/Makefile still using this clang + llc combo, where clang
is invoked for the native target and llc then emits BPF code. Not
sure if we are ready to ditch that complicated combination. Yonghong,
do we still need that, or can we just use -target bpf in samples/bpf?
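
Roughly, the two-step pipeline looks like

	clang -O2 -emit-llvm -c prog.c -o - | llc -march=bpf -filetype=obj -o prog.o

while a single-step build would be

	clang -O2 -target bpf -c prog.c -o prog.o

(the exact flags in samples/bpf/Makefile differ; these commands are
only illustrative).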


> The usual way to avoid this is to shift by 16 twice (see the
> upper_32_bits() macro in the kernel). Use the same approach across
> the BPF sample code as well.
>
> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
> Signed-off-by: Alexander Lobakin <[email protected]>
> ---
> samples/bpf/lathist_kern.c | 2 +-
> samples/bpf/lwt_len_hist_kern.c | 2 +-
> samples/bpf/tracex2_kern.c | 2 +-
> 3 files changed, 3 insertions(+), 3 deletions(-)
>

[...]

2022-04-27 16:29:12

by Yonghong Song

Subject: Re: [PATCH bpf-next 08/11] samples: bpf: fix shifting unsigned long by 32 positions



On 4/20/22 10:18 AM, Andrii Nakryiko wrote:
> On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <[email protected]> wrote:
>>
>> On 32-bit systems, shifting an unsigned long by 32 positions
>> yields the following warning:
>>
>> samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
>> 	unsigned int hi = v >> 32;
>> 	                    ^  ~~
>>
>
> long is always 64-bit in BPF, but I suspect this is due to
> samples/bpf/Makefile still using this clang + llc combo, where clang
> is invoked for the native target and llc then emits BPF code. Not
> sure if we are ready to ditch that complicated combination. Yonghong,
> do we still need that, or can we just use -target bpf in samples/bpf?

Currently, most bpf programs in samples/bpf do not use vmlinux.h and
CO-RE. They use kernel header files directly. That is why the clang
C -> IR compilation still needs to be native.

We could just use -target bpf for the whole compilation, but that
requires changing the code to use vmlinux.h and CO-RE. A couple of
sample bpf programs have already done this.
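
For illustration, a converted program would look roughly like this
(a minimal sketch, not actual samples/bpf code; the traced function
and the field access are placeholders):

#include "vmlinux.h"		/* generated with bpftool btf dump */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/kfree_skb")
int BPF_KPROBE(trace_kfree_skb, struct sk_buff *skb)
{
	/* CO-RE-relocated field read: no native kernel headers are
	 * needed at build time, so clang can run with -target bpf.
	 */
	unsigned int len = BPF_CORE_READ(skb, len);

	bpf_printk("kfree_skb len=%u", len);
	return 0;
}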

>
>
>> The usual way to avoid this is to shift by 16 twice (see the
>> upper_32_bits() macro in the kernel). Use the same approach across
>> the BPF sample code as well.
>>
>> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
>> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
>> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
>> Signed-off-by: Alexander Lobakin <[email protected]>
>> ---
>> samples/bpf/lathist_kern.c | 2 +-
>> samples/bpf/lwt_len_hist_kern.c | 2 +-
>> samples/bpf/tracex2_kern.c | 2 +-
>> 3 files changed, 3 insertions(+), 3 deletions(-)
>>
>
> [...]

2022-04-27 19:25:20

by Andrii Nakryiko

Subject: Re: [PATCH bpf-next 08/11] samples: bpf: fix shifting unsigned long by 32 positions

On Wed, Apr 27, 2022 at 8:55 AM Yonghong Song <[email protected]> wrote:
>
>
>
> On 4/20/22 10:18 AM, Andrii Nakryiko wrote:
> > On Thu, Apr 14, 2022 at 3:46 PM Alexander Lobakin <[email protected]> wrote:
> >>
> >> On 32-bit systems, shifting an unsigned long by 32 positions
> >> yields the following warning:
> >>
> >> samples/bpf/tracex2_kern.c:60:23: warning: shift count >= width of type [-Wshift-count-overflow]
> >> 	unsigned int hi = v >> 32;
> >> 	                    ^  ~~
> >>
> >
> > long is always 64-bit in BPF, but I suspect this is due to
> > samples/bpf/Makefile still using this clang + llc combo, where clang
> > is invoked for the native target and llc then emits BPF code. Not
> > sure if we are ready to ditch that complicated combination. Yonghong,
> > do we still need that, or can we just use -target bpf in samples/bpf?
>
> Currently, most bpf programs in samples/bpf do not use vmlinux.h and
> CO-RE. They use kernel header files directly. That is why the clang
> C -> IR compilation still needs to be native.
>
> We could just use -target bpf for the whole compilation, but that
> requires changing the code to use vmlinux.h and CO-RE. A couple of
> sample bpf programs have already done this.

Right, I guess I'm proposing to switch samples/bpf to vmlinux.h. Only
purely networking BPF apps can get away with not using vmlinux.h
because they might avoid dependency on kernel types. But even then a
lot of modern networking apps seem to be gaining elements of more
generic tracing and would rely on CO-RE for staying "portable" between
kernels. So it might be totally fine to just use CO-RE universally in
samples/bpf?
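
Converting the samples would also mean moving the legacy map
definitions over to the BTF-defined form that modern libbpf expects.
Roughly, as a sketch with placeholder names:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Old samples/bpf style:
 *
 *	struct bpf_map_def SEC("maps") my_hist = {
 *		.type        = BPF_MAP_TYPE_ARRAY,
 *		.key_size    = sizeof(u32),
 *		.value_size  = sizeof(long),
 *		.max_entries = 64,
 *	};
 *
 * BTF-defined equivalent:
 */
struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, long);
} my_hist SEC(".maps");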

>
> >
> >
> >> The usual way to avoid this is to shift by 16 twice (see the
> >> upper_32_bits() macro in the kernel). Use the same approach across
> >> the BPF sample code as well.
> >>
> >> Fixes: d822a1926849 ("samples/bpf: Add counting example for kfree_skb() function calls and the write() syscall")
> >> Fixes: 0fb1170ee68a ("bpf: BPF based latency tracing")
> >> Fixes: f74599f7c530 ("bpf: Add tests and samples for LWT-BPF")
> >> Signed-off-by: Alexander Lobakin <[email protected]>
> >> ---
> >> samples/bpf/lathist_kern.c | 2 +-
> >> samples/bpf/lwt_len_hist_kern.c | 2 +-
> >> samples/bpf/tracex2_kern.c | 2 +-
> >> 3 files changed, 3 insertions(+), 3 deletions(-)
> >>
> >
> > [...]