2014-12-01 08:55:49

by Stefan Bader

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize

On 11.08.2014 19:32, Zoltan Kiss wrote:
> There is a long known problem with the netfront/netback interface: if the guest
> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
> it gets dropped. The reason is that netback maps these slots to a frag in the
> frags array, which is limited in size. Having so many slots can occur since
> compound pages were introduced, as the ring protocol slices them up into
> individual (non-compound) page-aligned slots. The theoretical worst case
> scenario looks like this (note, skbs are limited to 64 KB here):
> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
> using 2 slots
> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
> Although I don't think this 51-slot skb can really happen, we need a solution
> which can deal with every scenario. In real life there are only a few slots
> over the limit, but usually it causes the TCP stream to be blocked, as the retry will
> most likely have the same buffer layout.
> This patch solves this problem by linearizing the packet. This is not the
> fastest way, and it can fail much more easily as it tries to allocate a big linear
> area for the whole packet, but it is probably an order of magnitude simpler than
> anything else. Probably this code path is not touched very frequently anyway.
>
> Signed-off-by: Zoltan Kiss <[email protected]>
> Cc: Wei Liu <[email protected]>
> Cc: Ian Campbell <[email protected]>
> Cc: Paul Durrant <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]

This does not seem to be marked explicitly as stable. Has someone already asked
David Miller to put it on his stable queue? IMO it qualifies quite well and the
actual change should be simple to pick/backport.

-Stefan

>
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 055222b..23359ae 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -628,9 +628,10 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
>         slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
>                 xennet_count_skb_frag_slots(skb);
>         if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
> -               net_alert_ratelimited(
> -                       "xennet: skb rides the rocket: %d slots\n", slots);
> -               goto drop;
> +               net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
> +                                   slots, skb->len);
> +               if (skb_linearize(skb))
> +                       goto drop;
>         }
>
>         spin_lock_irqsave(&queue->tx_lock, flags);
>
> _______________________________________________
> Xen-devel mailing list
> [email protected]
> http://lists.xen.org/xen-devel
>
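
As an aside on the slot arithmetic above, here is a small, self-contained
sketch (not part of the patch; the 4 KiB PAGE_SIZE and the helper name
slots_for() are illustrative assumptions) that reproduces the 51-slot worst
case from the commit message using the same DIV_ROUND_UP calculation as the
hunk:

#include <stdio.h>

#define PAGE_SIZE 4096UL    /* assumed 4 KiB pages */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Ring slots needed by a buffer that starts 'offset' bytes into a page and
 * is 'len' bytes long -- the same arithmetic applied to the linear area and
 * to each frag. */
static unsigned long slots_for(unsigned long offset, unsigned long len)
{
        return DIV_ROUND_UP(offset + len, PAGE_SIZE);
}

int main(void)
{
        /* The worst case described in the commit message (64 KB skb): */
        unsigned long head  = slots_for(64, PAGE_SIZE - 17 * 2);            /* linear area straddling a page boundary:  2 */
        unsigned long big   = 15 * slots_for(PAGE_SIZE - 1, PAGE_SIZE + 2); /* 15 frags of 1 + PAGE_SIZE + 1 bytes:    45 */
        unsigned long small = 2 * slots_for(PAGE_SIZE - 1, 2);              /* 2 frags of 1 + 1 bytes:                  4 */

        printf("worst-case skb needs %lu slots\n", head + big + small);
        return 0;
}

Built with any C compiler this prints 51, which is what the check against
MAX_SKB_FRAGS + 1 in the hunk is guarding against.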




2014-12-01 13:37:01

by David Vrabel

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize

On 01/12/14 08:55, Stefan Bader wrote:
> On 11.08.2014 19:32, Zoltan Kiss wrote:
>> There is a long known problem with the netfront/netback interface: if the guest
>> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
>> it gets dropped. The reason is that netback maps these slots to a frag in the
>> frags array, which is limited in size. Having so many slots can occur since
>> compound pages were introduced, as the ring protocol slices them up into
>> individual (non-compound) page-aligned slots. The theoretical worst case
>> scenario looks like this (note, skbs are limited to 64 KB here):
>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
>> using 2 slots
>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
>> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
>> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
>> Although I don't think this 51-slot skb can really happen, we need a solution
>> which can deal with every scenario. In real life there are only a few slots
>> over the limit, but usually it causes the TCP stream to be blocked, as the retry will
>> most likely have the same buffer layout.
>> This patch solves this problem by linearizing the packet. This is not the
>> fastest way, and it can fail much more easily as it tries to allocate a big linear
>> area for the whole packet, but it is probably an order of magnitude simpler than
>> anything else. Probably this code path is not touched very frequently anyway.
>>
>> Signed-off-by: Zoltan Kiss <[email protected]>
>> Cc: Wei Liu <[email protected]>
>> Cc: Ian Campbell <[email protected]>
>> Cc: Paul Durrant <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Cc: [email protected]
>
> This does not seem to be marked explicitly as stable. Has someone already asked
> David Miller to put it on his stable queue? IMO it qualifies quite well and the
> actual change should be simple to pick/backport.

I think it's a candidate, yes.

Can you expand on the user-visible impact of the bug this patch fixes?
I think it results in certain types of traffic not working (because the
domU always generates skbs with the problematic frag layout), but I
can't remember the details.

David

2014-12-01 13:59:18

by Zoltan Kiss

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize



On 01/12/14 13:36, David Vrabel wrote:
> On 01/12/14 08:55, Stefan Bader wrote:
>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>> There is a long known problem with the netfront/netback interface: if the guest
>>> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
>>> it gets dropped. The reason is that netback maps these slots to a frag in the
>>> frags array, which is limited in size. Having so many slots can occur since
>>> compound pages were introduced, as the ring protocol slices them up into
>>> individual (non-compound) page-aligned slots. The theoretical worst case
>>> scenario looks like this (note, skbs are limited to 64 KB here):
>>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
>>> using 2 slots
>>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
>>> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
>>> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
>>> Although I don't think this 51-slot skb can really happen, we need a solution
>>> which can deal with every scenario. In real life there are only a few slots
>>> over the limit, but usually it causes the TCP stream to be blocked, as the retry will
>>> most likely have the same buffer layout.
>>> This patch solves this problem by linearizing the packet. This is not the
>>> fastest way, and it can fail much more easily as it tries to allocate a big linear
>>> area for the whole packet, but it is probably an order of magnitude simpler than
>>> anything else. Probably this code path is not touched very frequently anyway.
>>>
>>> Signed-off-by: Zoltan Kiss <[email protected]>
>>> Cc: Wei Liu <[email protected]>
>>> Cc: Ian Campbell <[email protected]>
>>> Cc: Paul Durrant <[email protected]>
>>> Cc: [email protected]
>>> Cc: [email protected]
>>> Cc: [email protected]
>>
>> This does not seem to be marked explicitly as stable. Has someone already asked
>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>> actual change should be simple to pick/backport.
>
> I think it's a candidate, yes.
>
> Can you expand on the user-visible impact of the bug this patch fixes?
> I think it results in certain types of traffic not working (because the
> domU always generates skbs with the problematic frag layout), but I
> can't remember the details.

Yes, this line in the commit message talks about it: "In real life there
are only a few slots over the limit, but usually it causes the TCP stream
to be blocked, as the retry will most likely have the same buffer layout."
Maybe we can add what kind of traffic triggered this so far; AFAIK NFS
was one of them, and Stefan had another use case. But my memories of this
are blurry.

Zoli

2014-12-01 14:13:45

by Stefan Bader

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize

On 01.12.2014 14:59, Zoltan Kiss wrote:
>
>
> On 01/12/14 13:36, David Vrabel wrote:
>> On 01/12/14 08:55, Stefan Bader wrote:
>>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>>> There is a long known problem with the netfront/netback interface: if the guest
>>>> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
>>>> it gets dropped. The reason is that netback maps these slots to a frag in the
>>>> frags array, which is limited in size. Having so many slots can occur since
>>>> compound pages were introduced, as the ring protocol slices them up into
>>>> individual (non-compound) page-aligned slots. The theoretical worst case
>>>> scenario looks like this (note, skbs are limited to 64 KB here):
>>>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
>>>> using 2 slots
>>>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
>>>> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
>>>> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
>>>> Although I don't think this 51-slot skb can really happen, we need a solution
>>>> which can deal with every scenario. In real life there are only a few slots
>>>> over the limit, but usually it causes the TCP stream to be blocked, as the retry will
>>>> most likely have the same buffer layout.
>>>> This patch solves this problem by linearizing the packet. This is not the
>>>> fastest way, and it can fail much more easily as it tries to allocate a big linear
>>>> area for the whole packet, but it is probably an order of magnitude simpler than
>>>> anything else. Probably this code path is not touched very frequently anyway.
>>>>
>>>> Signed-off-by: Zoltan Kiss <[email protected]>
>>>> Cc: Wei Liu <[email protected]>
>>>> Cc: Ian Campbell <[email protected]>
>>>> Cc: Paul Durrant <[email protected]>
>>>> Cc: [email protected]
>>>> Cc: [email protected]
>>>> Cc: [email protected]
>>>
>>> This does not seem to be marked explicitly as stable. Has someone already asked
>>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>>> actual change should be simple to pick/backport.
>>
>> I think it's a candidate, yes.
>>
>> Can you expand on the user-visible impact of the bug this patch fixes?
>> I think it results in certain types of traffic not working (because the
>> domU always generates skbs with the problematic frag layout), but I
>> can't remember the details.
>
> Yes, this line in the commit message talks about it: "In real life there are only
> a few slots over the limit, but usually it causes the TCP stream to be blocked, as
> the retry will most likely have the same buffer layout."
> Maybe we can add what kind of traffic triggered this so far; AFAIK NFS was one
> of them, and Stefan had another use case. But my memories of this are blurry.

We had a report about a web app hitting packet losses. I suspect that one
was also streaming something. For an easy trigger we found that
redis-benchmark (part of the Redis key-value store) with a larger (iirc 1 kB)
payload would trigger the fragmentation/excess-slots problem. Though I think
it did not fail outright but showed a performance drop instead (from memory,
which also suffers from losing detail).

-Stefan
>
> Zoli




2014-12-08 10:19:44

by Luis Henriques

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize

On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
> On 11.08.2014 19:32, Zoltan Kiss wrote:
> > There is a long known problem with the netfront/netback interface: if the guest
> > tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
> > it gets dropped. The reason is that netback maps these slots to a frag in the
> > frags array, which is limited in size. Having so many slots can occur since
> > compound pages were introduced, as the ring protocol slices them up into
> > individual (non-compound) page-aligned slots. The theoretical worst case
> > scenario looks like this (note, skbs are limited to 64 KB here):
> > linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
> > using 2 slots
> > first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
> > end and the beginning of a page, therefore they use 3 * 15 = 45 slots
> > last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
> > Although I don't think this 51-slot skb can really happen, we need a solution
> > which can deal with every scenario. In real life there are only a few slots
> > over the limit, but usually it causes the TCP stream to be blocked, as the retry will
> > most likely have the same buffer layout.
> > This patch solves this problem by linearizing the packet. This is not the
> > fastest way, and it can fail much more easily as it tries to allocate a big linear
> > area for the whole packet, but it is probably an order of magnitude simpler than
> > anything else. Probably this code path is not touched very frequently anyway.
> >
> > Signed-off-by: Zoltan Kiss <[email protected]>
> > Cc: Wei Liu <[email protected]>
> > Cc: Ian Campbell <[email protected]>
> > Cc: Paul Durrant <[email protected]>
> > Cc: [email protected]
> > Cc: [email protected]
> > Cc: [email protected]
>
> This does not seem to be marked explicitly as stable. Has someone already asked
> David Miller to put it on his stable queue? IMO it qualifies quite well and the
> actual change should be simple to pick/backport.
>

Thank you Stefan, I'm queuing this for the next 3.16 kernel release.

Cheers,
--
Luís

> -Stefan
>
> >
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index 055222b..23359ae 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -628,9 +628,10 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >         slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
> >                 xennet_count_skb_frag_slots(skb);
> >         if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
> > -               net_alert_ratelimited(
> > -                       "xennet: skb rides the rocket: %d slots\n", slots);
> > -               goto drop;
> > +               net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
> > +                                   slots, skb->len);
> > +               if (skb_linearize(skb))
> > +                       goto drop;
> >         }
> >
> >         spin_lock_irqsave(&queue->tx_lock, flags);
> >
> > _______________________________________________
> > Xen-devel mailing list
> > [email protected]
> > http://lists.xen.org/xen-devel
> >
>
>

2014-12-08 11:11:20

by David Vrabel

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize

On 08/12/14 10:19, Luis Henriques wrote:
> On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>> There is a long known problem with the netfront/netback interface: if the guest
>>> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
>>> it gets dropped. The reason is that netback maps these slots to a frag in the
>>> frags array, which is limited in size. Having so many slots can occur since
>>> compound pages were introduced, as the ring protocol slices them up into
>>> individual (non-compound) page-aligned slots. The theoretical worst case
>>> scenario looks like this (note, skbs are limited to 64 KB here):
>>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
>>> using 2 slots
>>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
>>> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
>>> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
>>> Although I don't think this 51-slot skb can really happen, we need a solution
>>> which can deal with every scenario. In real life there are only a few slots
>>> over the limit, but usually it causes the TCP stream to be blocked, as the retry will
>>> most likely have the same buffer layout.
>>> This patch solves this problem by linearizing the packet. This is not the
>>> fastest way, and it can fail much more easily as it tries to allocate a big linear
>>> area for the whole packet, but it is probably an order of magnitude simpler than
>>> anything else. Probably this code path is not touched very frequently anyway.
>>>
>>> Signed-off-by: Zoltan Kiss <[email protected]>
>>> Cc: Wei Liu <[email protected]>
>>> Cc: Ian Campbell <[email protected]>
>>> Cc: Paul Durrant <[email protected]>
>>> Cc: [email protected]
>>> Cc: [email protected]
>>> Cc: [email protected]
>>
>> This does not seem to be marked explicitly as stable. Has someone already asked
>> David Miller to put it on his stable queue? IMO it qualifies quite well and the
>> actual change should be simple to pick/backport.
>>
>
> Thank you Stefan, I'm queuing this for the next 3.16 kernel release.

Don't backport this yet. It's broken. It produces malformed requests
and netback will report a fatal error and stop all traffic on the VIF.

David
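
A plausible reading of why the linearized skb still trips netback (this is an
inference from the hunk quoted earlier in the thread, not something confirmed
here): xennet_start_xmit derives offset and len from skb->data before the slot
check, and skb_linearize() replaces the skb's head, so those stale values can
describe memory the skb no longer owns by the time the grant entries are set
up; the slot count is also never re-checked. A rough, non-compilable sketch of
the kind of fix-up the linearize branch would need (the local variable names
are assumed to match the surrounding function):

        if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
                net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
                                    slots, skb->len);
                if (skb_linearize(skb))
                        goto drop;
                /* skb_linearize() may have replaced skb->head, so anything
                 * derived from the old skb->data is stale and must be
                 * recomputed before grant references are set up ... */
                data   = skb->data;
                offset = offset_in_page(data);
                len    = skb_headlen(skb);
                /* ... and defensively re-check the slot count in case the
                 * linearized head still spans too many pages. */
                slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
                        xennet_count_skb_frag_slots(skb);
                if (unlikely(slots > MAX_SKB_FRAGS + 1))
                        goto drop;
        }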

2014-12-09 09:55:01

by Luis Henriques

Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize

On Mon, Dec 08, 2014 at 11:11:15AM +0000, David Vrabel wrote:
> On 08/12/14 10:19, Luis Henriques wrote:
> > On Mon, Dec 01, 2014 at 09:55:24AM +0100, Stefan Bader wrote:
> >> On 11.08.2014 19:32, Zoltan Kiss wrote:
> >>> There is a long known problem with the netfront/netback interface: if the guest
> >>> tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1 ring slots,
> >>> it gets dropped. The reason is that netback maps these slots to a frag in the
> >>> frags array, which is limited in size. Having so many slots can occur since
> >>> compound pages were introduced, as the ring protocol slices them up into
> >>> individual (non-compound) page-aligned slots. The theoretical worst case
> >>> scenario looks like this (note, skbs are limited to 64 KB here):
> >>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping page boundary,
> >>> using 2 slots
> >>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes are at the
> >>> end and the beginning of a page, therefore they use 3 * 15 = 45 slots
> >>> last 2 frags: 1 + 1 bytes, overlapping page boundary, 2 * 2 = 4 slots
> >>> Although I don't think this 51-slot skb can really happen, we need a solution
> >>> which can deal with every scenario. In real life there are only a few slots
> >>> over the limit, but usually it causes the TCP stream to be blocked, as the retry will
> >>> most likely have the same buffer layout.
> >>> This patch solves this problem by linearizing the packet. This is not the
> >>> fastest way, and it can fail much more easily as it tries to allocate a big linear
> >>> area for the whole packet, but it is probably an order of magnitude simpler than
> >>> anything else. Probably this code path is not touched very frequently anyway.
> >>>
> >>> Signed-off-by: Zoltan Kiss <[email protected]>
> >>> Cc: Wei Liu <[email protected]>
> >>> Cc: Ian Campbell <[email protected]>
> >>> Cc: Paul Durrant <[email protected]>
> >>> Cc: [email protected]
> >>> Cc: [email protected]
> >>> Cc: [email protected]
> >>
> >> This does not seem to be marked explicitly as stable. Has someone already asked
> >> David Miller to put it on his stable queue? IMO it qualifies quite well and the
> >> actual change should be simple to pick/backport.
> >>
> >
> > Thank you Stefan, I'm queuing this for the next 3.16 kernel release.
>
> Don't backport this yet. It's broken. It produces malformed requests
> and netback will report a fatal error and stop all traffic on the VIF.
>
> David

Ok, thank you. I've dropped it already.

Cheers,
--
Luís