2010-06-02 14:55:56

by Ben McKeegan

Subject: Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link

Ben McKeegan wrote:
> Paul Mackerras wrote:
>
>>> I needed to do something similar a while back and I took a very
>>> different approach, which I think is more flexible. Rather than
>>> implement a new round-robin scheduler I simply introduced a target
>>> minimum fragment size into the fragment size calculation, as a per
>>> bundle parameter that can be configured via a new ioctl. This
>>> modifies the algorithm so that it tries to limit the number of
>>> fragments such that each fragment is at least the minimum size. If
>>> the minimum size is greater than the packet size it will not be
>>> fragmented at all but will instead just get sent down the next
>>> available channel.
>>>
>>> A pppd plugin generates the ioctl call allowing this to be tweaked
>>> per connection. It is more flexible in that you can still have the
>>> larger packets fragmented if you wish.
>>
>> I like this a lot better than the other proposed patch. It adds less
>> code because it uses the fact that ppp_mp_explode() already has a
>> round-robin capability using the ppp->nxchan field, plus it provides a
>> way to control it per bundle via pppd.
>>
>> If you fix up the indentation issues (2-space indent in some of the
>> added code -- if you're using emacs, set c-basic-offset to 8), I'll
>> ack it and hopefully DaveM will pick it up.
>
> Okay, I'll fix it up when I'm back at work Tuesday and submit (today is
> a UK public holiday) and also dig out and fix up the userspace code to
> go with it.
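
As a rough standalone sketch of the approach described in the quoted text
above (limit the number of fragments so that each is at least a target
minimum size, and don't fragment at all when the minimum exceeds the
packet): the helper and the min_frag_size parameter below are hypothetical
illustrations, not the actual patch, pppd plugin or ioctl interface.

#include <stdio.h>

/*
 * Illustration only: given the payload length, the number of free channels
 * and a target minimum fragment size, work out how many fragments to create.
 * Hypothetical helper, not taken from the real patch.
 */
static unsigned int frag_count(unsigned int len, unsigned int nfree,
			       unsigned int min_frag_size)
{
	unsigned int nfrags = nfree;

	/* Reduce the fragment count until each piece is at least the minimum. */
	while (nfrags > 1 && min_frag_size > 0 && len / nfrags < min_frag_size)
		nfrags--;

	/* If the minimum exceeds the packet size this ends up as 1, i.e. the
	 * packet is not fragmented and just goes down one channel. */
	return nfrags;
}

int main(void)
{
	printf("%u\n", frag_count(1400, 4, 512));	/* 2 fragments of ~700 bytes */
	printf("%u\n", frag_count(300, 4, 512));	/* 1: send the whole packet */
	return 0;
}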

Still working on this; updating the patch wasn't as trivial as I thought,
as it clashed with Gabriele Paoloni's ppp_mp_explode redesign. However,
while looking at this code I believe I have found a bug which might have
been contributing to the poor performance the OP was experiencing. For
the case where channel speeds are unknown and there are more than 2
channels it would miscalculate the fragment sizes so they are not
balanced on the channels.
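
To illustrate the arithmetic with a rough standalone simulation (not the
driver code itself; header overhead, MTU limits and channel state are
ignored, and the 1500-byte/3-channel numbers are just an example):
dividing the unchanging total length by the shrinking free-channel count
gives an uneven split, while dividing the remaining length gives an even one.

#include <stdio.h>

/*
 * Standalone simulation of the equal-split arithmetic only.  'use_totlen'
 * selects between dividing the unchanging total length and dividing the
 * remaining length on each iteration.
 */
static void split(unsigned int totlen, unsigned int nchan, int use_totlen)
{
	unsigned int len = totlen, nfree = nchan, nbigger = totlen % nchan;

	while (nfree > 0) {
		unsigned int flen = (use_totlen ? totlen : len) / nfree;

		if (nbigger > 0) {
			flen++;
			nbigger--;
		}
		nfree--;
		if (nfree == 0 || flen > len)
			flen = len;	/* last channel takes whatever is left */
		printf("%u ", flen);
		len -= flen;
	}
	printf("\n");
}

int main(void)
{
	split(1500, 3, 1);	/* totlen/nfree: 500 750 250 (unbalanced) */
	split(1500, 3, 0);	/* len/nfree:    500 500 500 (balanced)   */
	return 0;
}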

Patch for the bug to follow immediately.

Regards,
Ben.


2010-06-02 15:05:14

by Ben McKeegan

Subject: [PATCH] ppp_generic: fix multilink fragment sizes

Fix bug in multilink fragment size calculation introduced by
commit 9c705260feea6ae329bc6b6d5f6d2ef0227eda0a
"ppp: ppp_mp_explode() redesign"

Signed-off-by: Ben McKeegan <[email protected]>
---
drivers/net/ppp_generic.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
index 0db3894..8b36bfe 100644
--- a/drivers/net/ppp_generic.c
+++ b/drivers/net/ppp_generic.c
@@ -1416,7 +1416,8 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 		flen = len;
 		if (nfree > 0) {
 			if (pch->speed == 0) {
-				flen = totlen/nfree;
+				if (nfree > 1)
+					flen = DIV_ROUND_UP(len, nfree);
 				if (nbigger > 0) {
 					flen++;
 					nbigger--;
--
1.5.6.5

2010-06-02 15:18:47

by Gabriele Paoloni

Subject: RE: [PATCH] ppp_generic: fix multilink fragment sizes

The proposed patch looks wrong to me.

nbigger is already doing the job; I didn't use DIV_ROUND_UP because in general we don't always have to round up, otherwise we would exceed the total bandwidth.

Regards

Gabriele Paoloni

-----Original Message-----
From: Ben McKeegan [mailto:[email protected]]
Sent: 02 June 2010 16:05
To: [email protected]
Cc: [email protected]; [email protected]; [email protected]; Paoloni, Gabriele; [email protected]; [email protected]; [email protected]
Subject: [PATCH] ppp_generic: fix multilink fragment sizes

Fix bug in multilink fragment size calculation introduced by
commit 9c705260feea6ae329bc6b6d5f6d2ef0227eda0a
"ppp: ppp_mp_explode() redesign"

Signed-off-by: Ben McKeegan <[email protected]>
---
drivers/net/ppp_generic.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
index 0db3894..8b36bfe 100644
--- a/drivers/net/ppp_generic.c
+++ b/drivers/net/ppp_generic.c
@@ -1416,7 +1416,8 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 		flen = len;
 		if (nfree > 0) {
 			if (pch->speed == 0) {
-				flen = totlen/nfree;
+				if (nfree > 1)
+					flen = DIV_ROUND_UP(len, nfree);
 				if (nbigger > 0) {
 					flen++;
 					nbigger--;
--
1.5.6.5


2010-06-02 15:30:56

by David Miller

Subject: Re: [PATCH] ppp_generic: fix multilink fragment sizes

From: "Paoloni, Gabriele" <[email protected]>
Date: Wed, 2 Jun 2010 16:17:59 +0100

> The proposed patch looks wrong to me.

Can you please reply to postings properly? Use the reply function
of your mail client and do not "top post."

Because of the way you simply inlined the entirety of Ben's
patch posting, it ends up looking like a new patch being posted
on patchwork, and this makes more work and analysis for me.

Thanks.

2010-06-02 15:55:53

by Ben McKeegan

Subject: Re: [PATCH] ppp_generic: fix multilink fragment sizes

Paoloni, Gabriele wrote:
> The proposed patch looks wrong to me.
>
> nbigger is already doing the job; I didn't use DIV_ROUND_UP because in general we don't always have to round up, otherwise we would exceed the total bandwidth.

I was basing this on the original code prior to your patch, which used
DIV_ROUND_UP to get the fragment size. Looking more closely I see your
point: the original code was starting with the larger fragment size and
decrementing, rather than starting with the smaller size and incrementing
as your code does, so that makes sense.


>
>  		flen = len;
>  		if (nfree > 0) {
>  			if (pch->speed == 0) {
> -				flen = totlen/nfree;
> +				if (nfree > 1)
> +					flen = DIV_ROUND_UP(len, nfree);
>  				if (nbigger > 0) {
>  					flen++;
>  					nbigger--;

The important change here is the use of 'len' instead of 'totlen'.
'nfree' and 'len' should decrease roughly proportionally with each
iteration of the loop whereas 'totlen' remains unchanged. Thus
(totlen/nfree) gets bigger on each iteration, whereas len/nfree should
stay roughly the same. However, without rounding up here I'm not sure
the logic is right either, since the side effect of nbigger is to make
len decrease faster, so it is not quite proportional to the decrease in
nfree. Is there a risk of ending up on the nfree == 1 iteration with
flen == len - 1 and thus generating a superfluous 1-byte fragment?
This would be a far worse situation than a slight imbalance in the
size of the fragments.

Perhaps the solution is to go back to a precalculated fragment size for
the pch->speed == 0 case as per original code?

Regards,
Ben.

2010-06-03 08:42:28

by Gabriele Paoloni

Subject: RE: [PATCH] ppp_generic: fix multilink fragment sizes

Hi

I agree with you about replacing totlen with len (actually the previous one was quite bad).
I don't think we need to round up anyway; nbigger is doing its job. Basically we are giving just one more byte to the first nbigger free channels, and for the rest the integer division will round down automatically.

For example, say that before transmitting you have 5 free channels and len is 83 bytes.

nbigger will be (ln 1284) 83%5=3

the frame will be split as follows: 17 - 17 - 17 - 16 - 16

This is because for the first three iterations the channel to tx on will get an extra byte (ln1427) and nbigger will be decreased by one (ln1428).

The only change I would make to the code is to replace totlen with len @ln1425
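
For what it's worth, a tiny standalone program (simulation only, not the
driver code; MTU and header overhead ignored) reproduces that split:

#include <stdio.h>

/* Simulate the len/nfree + nbigger split for 83 bytes over 5 free channels. */
int main(void)
{
	unsigned int len = 83, nfree = 5, nbigger = len % nfree;	/* 3 */

	while (nfree > 0) {
		unsigned int flen = len / nfree;

		if (nbigger > 0) {	/* first nbigger channels get one extra byte */
			flen++;
			nbigger--;
		}
		printf("%u ", flen);	/* prints 17 17 17 16 16 */
		len -= flen;
		nfree--;
	}
	printf("\n");
	return 0;
}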

Now, if you agree, either you or I can submit a new patch.

Regards

Gabriele Paoloni

>-----Original Message-----
>From: Ben McKeegan [mailto:[email protected]]
>Sent: 02 June 2010 16:56
>To: Paoloni, Gabriele
>Cc: [email protected]; [email protected]; linux-
>[email protected]; [email protected]; linux-
>[email protected]; [email protected]
>Subject: Re: [PATCH] ppp_generic: fix multilink fragment sizes
>
>Paoloni, Gabriele wrote:
>> The proposed patch looks wrong to me.
>>
>> nbigger is already doing the job; I didn't use DIV_ROUND_UP because in
>> general we don't always have to round up, otherwise we would exceed the
>> total bandwidth.
>
>I was basing this on the original code prior to your patch, which used
>DIV_ROUND_UP to get the fragment size. Looking more closely I see your
>point, the original code was starting with the larger fragment size and
>decrementing rather than starting with the smaller size and incrementing
>as your code does, so that makes sense.
>
>
>>
>>  		flen = len;
>>  		if (nfree > 0) {
>>  			if (pch->speed == 0) {
>> -				flen = totlen/nfree;
>> +				if (nfree > 1)
>> +					flen = DIV_ROUND_UP(len, nfree);
>>  				if (nbigger > 0) {
>>  					flen++;
>>  					nbigger--;
>
>The important change here is the use of 'len' instead of 'totlen'.
>'nfree' and 'len' should decrease roughly proportionally with each
>iteration of the loop whereas 'totlen' remains unchanged. Thus
>(totlen/nfree) gets bigger on each iteration whereas len/nfree should
>give roughly the same. However, without rounding up here I'm not sure
>the logic is right either, since the side effect of nbigger is to make
>len decrease faster so it is not quite proportional to the decrease in
>nfree. Is there a risk of ending up on the nfree == 1 iteration with
>flen == len - 1 and thus generating a superfluous extra 1 byte long
>fragment? This would be a far worse situation than a slight imbalance
>in the size of the fragments.
>
>Perhaps the solution is to go back to a precalculated fragment size for
>the pch->speed == 0 case as per original code?
>
>Regards,
>Ben.

2010-06-03 09:14:54

by Ben McKeegan

Subject: [PATCH] ppp_generic: fix multilink fragment sizes

Fix bug in multilink fragment size calculation introduced by
commit 9c705260feea6ae329bc6b6d5f6d2ef0227eda0a
"ppp: ppp_mp_explode() redesign"

Signed-off-by: Ben McKeegan <[email protected]>
---
drivers/net/ppp_generic.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c
index 0db3894..c980f74 100644
--- a/drivers/net/ppp_generic.c
+++ b/drivers/net/ppp_generic.c
@@ -1416,7 +1416,7 @@ static int ppp_mp_explode(struct ppp *ppp, struct sk_buff *skb)
 		flen = len;
 		if (nfree > 0) {
 			if (pch->speed == 0) {
-				flen = totlen/nfree;
+				flen = len/nfree;
 				if (nbigger > 0) {
 					flen++;
 					nbigger--;
--
1.5.6.5

2010-11-08 14:05:54

by Richard Hartmann

Subject: Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link

On Wed, Jun 2, 2010 at 16:55, Ben McKeegan <[email protected]> wrote:

> Still working on this, updating the patch wasn't as trivial as I thought as
> it clashed with Gabriele Paoloni's ppp_mp_explode redesign.  However, while
> looking at this code I believe I have found a bug which might have been
> contributing to the poor performance the OP was experiencing.   For the case
> where channel speeds are unknown and there are more than 2 channels it would
> miscalculate the fragment sizes so they are not balanced on the channels.
>
> Patch for the bug to follow immediately.

Is there any update on this? It's been quite some time since you last updated
on this issue.


Thanks a lot,
Richard

2010-11-15 12:12:45

by Richard Hartmann

Subject: Re: [Patch] fix packet loss and massive ping spikes with PPP multi-link

On Mon, Nov 8, 2010 at 15:05, Richard Hartmann
<[email protected]> wrote:

> Is there any update on this? It's been quite some time since you last updated
> on this issue.

As it's been a week without any reply and as I know how stuff can
drown in more important work & projects, I am tentatively poking again
:)


Richard