2020-03-30 10:24:30

by Mao Wenan

Subject: [PATCH net] veth: xdp: use head instead of hard_start

xdp.data_hard_start is meant to map to the first address of the
xdp_frame, but the pointer hard_start is offset from it by
sizeof(struct xdp_frame). head should be used instead of
hard_start to set xdp.data_hard_start; otherwise, when a BPF
program calls a helper function such as bpf_xdp_adjust_head(),
the helper computes a wrong xdp_frame_end.

Signed-off-by: Mao Wenan <[email protected]>
---
drivers/net/veth.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index d4cbb9e8c63f..5ea550884bf8 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -506,7 +506,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
struct xdp_buff xdp;
u32 act;

- xdp.data_hard_start = hard_start;
+ xdp.data_hard_start = head;
xdp.data = frame->data;
xdp.data_end = frame->data + frame->len;
xdp.data_meta = frame->data - frame->metasize;
--
2.20.1


2020-03-30 11:36:48

by Jesper Dangaard Brouer

Subject: Re: [PATCH net] veth: xdp: use head instead of hard_start

On Mon, 30 Mar 2020 18:26:31 +0800
Mao Wenan <[email protected]> wrote:

> xdp.data_hard_start is meant to map to the first address of the
> xdp_frame, but the pointer hard_start is offset from it by
> sizeof(struct xdp_frame). head should be used instead of
> hard_start to set xdp.data_hard_start; otherwise, when a BPF
> program calls a helper function such as bpf_xdp_adjust_head(),
> the helper computes a wrong xdp_frame_end.

I have noticed this[1] and have a patch in my current patchset
fixing it. IMHO it is not such an important fix right now, as the
only effect is that you currently lose 32 bytes of headroom.

[1] https://lore.kernel.org/netdev/158446621887.702578.17234304084556809684.stgit@firesoul/

Fixing this now is going to be annoying and cause merge conflicts for
my patchset. If you insist on fixing it now, you need to improve the
commit message and also fix the patch, see below.

> Signed-off-by: Mao Wenan <[email protected]>
> ---
> drivers/net/veth.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index d4cbb9e8c63f..5ea550884bf8 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -506,7 +506,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
> struct xdp_buff xdp;
> u32 act;
>
> - xdp.data_hard_start = hard_start;
> + xdp.data_hard_start = head;

You also need to update/remove the other lines doing this.

> xdp.data = frame->data;
> xdp.data_end = frame->data + frame->len;
> xdp.data_meta = frame->data - frame->metasize;



--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2020-03-30 23:37:44

by Toshiaki Makita

Subject: Re: [PATCH net] veth: xdp: use head instead of hard_start

Hi Mao & Jesper
(Resending with plain text...)

On 2020/03/30 20:34, Jesper Dangaard Brouer wrote:
> On Mon, 30 Mar 2020 18:26:31 +0800
> Mao Wenan <[email protected]> wrote:
>
>> ...
>
> I have noticed this[1] and have a patch in my current patchset
> fixing it. IMHO it is not such an important fix right now, as the
> only effect is that you currently lose 32 bytes of headroom.
>
> [1] https://lore.kernel.org/netdev/158446621887.702578.17234304084556809684.stgit@firesoul/

You are right, the subtraction is not necessary here.
Thank you for working on this.

Toshiaki Makita

2020-03-31 03:57:32

by Mao Wenan

Subject: Re: [PATCH net] veth: xdp: use head instead of hard_start

On 2020/3/31 7:35, Toshiaki Makita wrote:
> Hi Mao & Jesper
> (Resending with plain text...)
>
> On 2020/03/30 20:34, Jesper Dangaard Brouer wrote:
>> On Mon, 30 Mar 2020 18:26:31 +0800
>> Mao Wenan <[email protected]> wrote:
>>
>>> ...
>>
>> I have noticed this[1] and have a patch in my current patchset
>> fixing it. IMHO it is not such an important fix right now, as the
>> only effect is that you currently lose 32 bytes of headroom.
>>
I consider the fix needed because bpf_xdp_adjust_head() is a common helper function,
and veth, as one of its callers, should keep the same 32 bytes of headroom as other drivers.
Also, convert_to_xdp_frame() stores its info at the top of the packet, and sets:
xdp_frame = xdp->data_hard_start;

>> [1] https://lore.kernel.org/netdev/158446621887.702578.17234304084556809684.stgit@firesoul/
>
> You are right, the subtraction is not necessary here.
I guess you mean the previous subtraction is not necessary, i.e. this line: void *head = hard_start - sizeof(struct xdp_frame); ?
But in veth_xdp_rcv_one() the lines below still use the head pointer:
case XDP_TX:
	orig_frame = *frame;
	xdp.data_hard_start = head;


> Thank you for working on this.
>
> Toshiaki Makita
>


2020-03-31 05:46:06

by Toshiaki Makita

Subject: Re: [PATCH net] veth: xdp: use head instead of hard_start

On 2020/03/31 12:56, maowenan wrote:
> On 2020/3/31 7:35, Toshiaki Makita wrote:
>> Hi Mao & Jesper
>> (Resending with plain text...)
>>
>> On 2020/03/30 20:34, Jesper Dangaard Brouer wrote:
>>> On Mon, 30 Mar 2020 18:26:31 +0800
>>> Mao Wenan <[email protected]> wrote:
>>>
>>>> ...
>>>
>>> I have noticed this[1] and have a patch in my current patchset
>>> fixing it.  IMHO it is not such an important fix right now, as the
>>> only effect is that you currently lose 32 bytes of headroom.
>>>
> I consider the fix needed because bpf_xdp_adjust_head() is a common helper function,
> and veth, as one of its callers, should keep the same 32 bytes of headroom as other drivers.
> Also, convert_to_xdp_frame() stores its info at the top of the packet, and sets:
> xdp_frame = xdp->data_hard_start;
>
>>> [1] https://lore.kernel.org/netdev/158446621887.702578.17234304084556809684.stgit@firesoul/
>>
>> You are right, the subtraction is not necessary here.
> I guess you mean the previous subtraction is not necessary, i.e. this line: void *head = hard_start - sizeof(struct xdp_frame); ?

No, I just meant that the subtraction of headroom is not necessary, and
I noticed my description was confusing. Sorry about that.
You can use "head" for data_hard_start.

Toshiaki Makita

2020-03-31 06:04:48

by Mao Wenan

Subject: [PATCH net v2] veth: xdp: use head instead of hard_start

xdp.data_hard_start should be equal to the first address of the
struct xdp_frame, as set up in convert_to_xdp_frame(). But the
pointer hard_start in veth_xdp_rcv_one() is offset 32 bytes
(sizeof(struct xdp_frame)) into the frame, so head should be
used instead of hard_start to set xdp.data_hard_start.
Otherwise, when a BPF program calls a helper function such as
bpf_xdp_adjust_head(), the helper computes a wrong xdp_frame_end.

Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
Signed-off-by: Mao Wenan <[email protected]>
---
v2: add Fixes tag and improve commit log.
drivers/net/veth.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index d4cbb9e8c63f..5ea550884bf8 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -506,7 +506,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
struct xdp_buff xdp;
u32 act;

- xdp.data_hard_start = hard_start;
+ xdp.data_hard_start = head;
xdp.data = frame->data;
xdp.data_end = frame->data + frame->len;
xdp.data_meta = frame->data - frame->metasize;
--
2.20.1

2020-03-31 06:16:59

by Toshiaki Makita

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On 2020/03/31 15:06, Mao Wenan wrote:
> xdp.data_hard_start should be equal to the first address of the
> struct xdp_frame, as set up in convert_to_xdp_frame(). But the
> pointer hard_start in veth_xdp_rcv_one() is offset 32 bytes
> (sizeof(struct xdp_frame)) into the frame, so head should be
> used instead of hard_start to set xdp.data_hard_start.
> Otherwise, when a BPF program calls a helper function such as
> bpf_xdp_adjust_head(), the helper computes a wrong xdp_frame_end.

I think you should discuss this more with Jesper before
submitting v2.
He does not like this to be included now due to merge conflict risk.
Basically I agree with him that we don't need to hurry with this fix.

Toshiaki Makita

>
> Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> Signed-off-by: Mao Wenan <[email protected]>
> ---
> v2: add fixes tag, as well as commit log.
> drivers/net/veth.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index d4cbb9e8c63f..5ea550884bf8 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
> @@ -506,7 +506,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
> struct xdp_buff xdp;
> u32 act;
>
> - xdp.data_hard_start = hard_start;
> + xdp.data_hard_start = head;
> xdp.data = frame->data;
> xdp.data_end = frame->data + frame->len;
> xdp.data_meta = frame->data - frame->metasize;
>

2020-04-01 16:16:16

by Jesper Dangaard Brouer

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On Tue, 31 Mar 2020 15:16:22 +0900
Toshiaki Makita <[email protected]> wrote:

> On 2020/03/31 15:06, Mao Wenan wrote:
> > ...
>
> I think you should discuss this more with Jesper before
> submitting v2.
> He does not like this to be included now due to merge conflict risk.
> Basically I agree with him that we don't need to hurry with this fix.
>
> Toshiaki Makita
>
> >
> > ...

Below is the patch that I have in my queue. I've added a Reported-by
tag to give you some credit, even though I already had plans to fix
this as part of my XDP frame_sz work.


[PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames

When native XDP redirects into a veth device, the frame arrives in the
xdp_frame structure. It is then processed in veth_xdp_rcv_one(),
which can run a new XDP bpf_prog on the packet. Doing so requires
converting the xdp_frame to an xdp_buff, but the tricky part is that
the xdp_frame memory area is located at the top of the
(data_hard_start) memory area that the xdp_buff will point into.

The current code tried to protect the xdp_frame area by assigning
xdp_buff.data_hard_start past this memory. This results in 32 bytes
less headroom to expand into via the BPF helper bpf_xdp_adjust_head().

This protect step is actually not needed, because the BPF helper
bpf_xdp_adjust_head() already reserves this area and does not allow
the BPF program to expand into it. Thus, it is safe to point
data_hard_start directly at the xdp_frame memory area.

Cc: Toshiaki Makita <[email protected]>
Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
Reported-by: Mao Wenan <[email protected]>
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
---
drivers/net/veth.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 8cdc4415fa70..2edc04a8ab8e 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -493,13 +493,15 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
struct veth_xdp_tx_bq *bq)
{
void *hard_start = frame->data - frame->headroom;
- void *head = hard_start - sizeof(struct xdp_frame);
int len = frame->len, delta = 0;
struct xdp_frame orig_frame;
struct bpf_prog *xdp_prog;
unsigned int headroom;
struct sk_buff *skb;

+ /* bpf_xdp_adjust_head() assures BPF cannot access xdp_frame area */
+ hard_start -= sizeof(struct xdp_frame);
+
rcu_read_lock();
xdp_prog = rcu_dereference(rq->xdp_prog);
if (likely(xdp_prog)) {
@@ -521,7 +523,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
break;
case XDP_TX:
orig_frame = *frame;
- xdp.data_hard_start = head;
xdp.rxq->mem = frame->mem;
if (unlikely(veth_xdp_tx(rq->dev, &xdp, bq) < 0)) {
trace_xdp_exception(rq->dev, xdp_prog, act);
@@ -533,7 +534,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
goto xdp_xmit;
case XDP_REDIRECT:
orig_frame = *frame;
- xdp.data_hard_start = head;
xdp.rxq->mem = frame->mem;
if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
frame = &orig_frame;
@@ -555,7 +555,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
rcu_read_unlock();

headroom = sizeof(struct xdp_frame) + frame->headroom - delta;
- skb = veth_build_skb(head, headroom, len, 0);
+ skb = veth_build_skb(hard_start, headroom, len, 0);
if (!skb) {
xdp_return_frame(frame);
goto err;


--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2020-04-02 00:48:14

by Toshiaki Makita

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On 2020/04/02 1:15, Jesper Dangaard Brouer wrote:
...
> [PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames
>
> ...
>
> Cc: Toshiaki Makita <[email protected]>

FYI: This mail address is deprecated.

> Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> Reported-by: Mao Wenan <[email protected]>
> Signed-off-by: Jesper Dangaard Brouer <[email protected]>

FWIW,

Acked-by: Toshiaki Makita <[email protected]>

2020-04-02 01:25:09

by Mao Wenan

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On 2020/4/2 0:15, Jesper Dangaard Brouer wrote:
> On Tue, 31 Mar 2020 15:16:22 +0900
> Toshiaki Makita <[email protected]> wrote:
>
>> ...
>
> Below is the patch that I have in my queue. I've added a Reported-by
> tag to give you some credit, even though I already had plans to fix
> this as part of my XDP frame_sz work.
Thanks for the Reported-by.
Actually, I found the fault by reviewing the veth code two weeks ago,
while debugging another bpf_warn_invalid_xdp_action warning associated
with the veth module, and had no chance to send the fix patch sooner.

>
>
> [PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames
>
> ...


2020-04-02 09:07:13

by Jesper Dangaard Brouer

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On Thu, 2 Apr 2020 09:47:03 +0900
Toshiaki Makita <[email protected]> wrote:

> On 2020/04/02 1:15, Jesper Dangaard Brouer wrote:
> ...
> > [PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames
> >
> > ...
> >
> > Cc: Toshiaki Makita <[email protected]>
>
> FYI: This mail address is deprecated.
>
> > Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> > Reported-by: Mao Wenan <[email protected]>
> > Signed-off-by: Jesper Dangaard Brouer <[email protected]>
>
> FWIW,
>
> Acked-by: Toshiaki Makita <[email protected]>

Thanks.

I have updated your email address and added your ack to my patchset. I
will submit this officially once net-next opens up again[1], as part of
my larger patchset introducing XDP frame_sz.

[1] http://vger.kernel.org/~davem/net-next.html
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2020-04-02 15:43:16

by Alexei Starovoitov

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On Thu, Apr 2, 2020 at 2:06 AM Jesper Dangaard Brouer <[email protected]> wrote:
>
> On Thu, 2 Apr 2020 09:47:03 +0900
> Toshiaki Makita <[email protected]> wrote:
>
> > On 2020/04/02 1:15, Jesper Dangaard Brouer wrote:
> > ...
> > > [PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames
> > >
> > > ...
> > >
> > > Cc: Toshiaki Makita <[email protected]>
> >
> > FYI: This mail address is deprecated.
> >
> > > Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> > > Reported-by: Mao Wenan <[email protected]>
> > > Signed-off-by: Jesper Dangaard Brouer <[email protected]>
> >
> > FWIW,
> >
> > Acked-by: Toshiaki Makita <[email protected]>
>
> Thanks.
>
> I have updated your email address and added your ack to my patchset. I
> will submit this officially once net-next opens up again[1], as part of
> my larger patchset introducing XDP frame_sz.

It looks like a bug fix to me.
The way I read it, the behavior of bpf_xdp_adjust_head() is a bit
buggy with a veth netdev,
so why wait?

2020-04-03 08:01:43

by Jesper Dangaard Brouer

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On Thu, 2 Apr 2020 08:40:23 -0700
Alexei Starovoitov <[email protected]> wrote:

> On Thu, Apr 2, 2020 at 2:06 AM Jesper Dangaard Brouer <[email protected]> wrote:
> >
> > On Thu, 2 Apr 2020 09:47:03 +0900
> > Toshiaki Makita <[email protected]> wrote:
> >
> > > On 2020/04/02 1:15, Jesper Dangaard Brouer wrote:
> > > ...
> > > > [PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames
> > > >
> > > > ...
> > > >
> > > > Cc: Toshiaki Makita <[email protected]>
> > >
> > > FYI: This mail address is deprecated.
> > >
> > > > Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> > > > Reported-by: Mao Wenan <[email protected]>
> > > > Signed-off-by: Jesper Dangaard Brouer <[email protected]>
> > >
> > > FWIW,
> > >
> > > Acked-by: Toshiaki Makita <[email protected]>
> >
> > Thanks.
> >
> > I have updated your email address and added your ack to my patchset. I
> > will submit this officially once net-next opens up again[1], as part of
> > my larger patchset introducing XDP frame_sz.
>
> It looks like a bug fix to me.
> The way I read it, the behavior of bpf_xdp_adjust_head() is a bit
> buggy with a veth netdev,
> so why wait?

I want to wait to ease your life as maintainer. This is part of a
larger patchset (for XDP frame_sz), and the next patch touches the same
code path and thus depends on these code adjustments. If we apply them
in bpf vs. bpf-next, then you/we will have to handle merge conflicts.
The severity of the "fix" is really low; it only means 32 bytes less
headroom (which I doubt anyone is using).

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer

2020-04-03 21:15:05

by Alexei Starovoitov

Subject: Re: [PATCH net v2] veth: xdp: use head instead of hard_start

On Fri, Apr 3, 2020 at 12:59 AM Jesper Dangaard Brouer
<[email protected]> wrote:
>
> I want to wait to ease your life as maintainer. This is part of a
> larger patchset (for XDP frame_sz), and the next patch touches the same
> code path and thus depends on these code adjustments. If we apply them
> in bpf vs. bpf-next, then you/we will have to handle merge conflicts.
> The severity of the "fix" is really low; it only means 32 bytes less
> headroom (which I doubt anyone is using).

Ahh, makes sense. That type of fix can wait.
Thanks!