2002-03-04 22:42:21

by Jean Tourrilhes

Subject: PPP feature request (Tx queue len + close)

Hi,

While working with IrNET, I came across one problem that
would require some minor changes to the ppp_generic kernel code. I
will describe this feature and then we can start to flame each other
or discuss how to implement it.

IrNET is PPP over an IrDA socket. A good analogy would be PPP
over TCP/IP. If you think in those terms, you will get the proper
context. IrNET is a PPP driver hooking directly into ppp_generic.

Tx queue length
---------------
Problem : IrDA does its buffering (IrTTP is a sliding window
protocol). PPP does its buffering (1 packet in ppp_generic +
dev->tx_queue_len = 3). End result : a large number of packets queued
for transmission, which results in some network performance issues.

Solution : could we allow the PPP channel to overwrite
dev->tx_queue_len ?
This is similar to the channel being able to set the MTUs and
other parameters...

Have fun...

Jean


2002-03-05 00:58:56

by James Stevenson

Subject: Re: PPP feature request (Tx queue len + close)

> Tx queue length
> ---------------
> Problem : IrDA does its buffering (IrTTP is a sliding window
> protocol). PPP does its buffering (1 packet in ppp_generic +
> dev->tx_queue_len = 3). End result : a large number of packets queued
> for transmission, which results in some network performance issues.
>
> Solution : could we allow the PPP channel to overwrite
> dev->tx_queue_len ?
> This is similar to the channel being able to set the MTUs and
> other parameters...

Somebody please correct me if I am wrong, but isn't the
txqueuelen set from userspace with
ifconfig?


2002-03-05 01:01:24

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 12:55:51AM -0000, James Stevenson wrote:
> > Tx queue length
> > ---------------
> > Problem : IrDA does its buffering (IrTTP is a sliding window
> > protocol). PPP does its buffering (1 packet in ppp_generic +
> > dev->tx_queue_len = 3). End result : a large number of packets queued
> > for transmission, which results in some network performance issues.
> >
> > Solution : could we allow the PPP channel to overwrite
> > dev->tx_queue_len ?
> > This is similar to the channel being able to set the MTUs and
> > other parameters...
>
> Somebody please correct me if I am wrong, but isn't the
> txqueuelen set from userspace with
> ifconfig?

linux/drivers/net/ppp_generic.c, line 888, ppp_net_init()
------------------------
dev->tx_queue_len = 3;
------------------------

Jean

2002-03-05 03:07:21

by Paul Mackerras

Subject: Re: PPP feature request (Tx queue len + close)

Jean Tourrilhes writes:

> Problem : IrDA does its buffering (IrTTP is a sliding window
> protocol). PPP does its buffering (1 packet in ppp_generic +
> dev->tx_queue_len = 3). End result : a large number of packets queued
> for transmission, which results in some network performance issues.

How much buffering does IrTTP do? How large is its window? It is
much more critical IMO to reduce the buffering below ppp_generic than
it is to reduce the buffering above it. The ppp_generic layer itself
does as little buffering as possible.

> Solution : could we allow the PPP channel to overwrite
> dev->tx_queue_len ?
> This is similar to the channel being able to set the MTUs and
> other parameters...

Not really, the channel can't set the bundle MTU, only its own MTU.
It can set the header length (the desired amount of headroom) but that
is really only an optimization.

What would happen in the case where two channels connected to the
same ppp unit want to set the queue length to two different values?

In general I think it would be better to get pppd to set the transmit
queue length than to have the channel magically influencing stuff two
levels above it.

Could you produce some numbers showing better throughput, fewer
retransmissions, or whatever, with a smaller transmit queue length?

Paul.

2002-03-05 03:20:54

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 02:05:00PM +1100, Paul Mackerras wrote:
> Jean Tourrilhes writes:
>
> > Problem : IrDA does its buffering (IrTTP is a sliding window
> > protocol). PPP does its buffering (1 packet in ppp_generic +
> > dev->tx_queue_len = 3). End result : a large number of packets queued
> > for transmission, which results in some network performance issues.
>
> How much buffering does IrTTP do? How large is its window? It is
> much more critical IMO to reduce the buffering below ppp_generic than
> it is to reduce the buffering above it. The ppp_generic layer itself
> does as little buffering as possible.

IrTTP is another problem. If I were to use TCP instead of
IrTTP, would you still ask me to reduce the window size of TCP ? Let's
try to be fair...
I'm taking the approach that every little thing helps. There
is a trivial win in PPP, and I would be stupid to not exploit it.
On the other hand, you are right about IrTTP. I spent
the day investigating this issue. As usual with Linux-IrDA,
it's very messy. I think I will need some architecture change to
implement proper flow control between IrLAP and IrTTP. And then
qualify that with all IrTTP users :-(

> > Solution : could we allow the PPP channel to overwrite
> > dev->tx_queue_len ?
> > This is similar to the channel being able to set the MTUs and
> > other parameters...
>
> Not really, the channel can't set the bundle MTU, only its own MTU.
> It can set the header length (the desired amount of headroom) but that
> is really only an optimization.
>
> What would happen in the case where two channels connected to the
> same ppp unit want to set the queue length to two different values?

No idea, never had this case ;-) This is exactly the reason
why I'm asking you.

> In general I think it would be better to get pppd to set the transmit
> queue length than to have the channel magically influencing stuff two
> levels above it.

I must have missed this option. I'll look again in the pppd
man page. That may be good enough...
As for stuff influencing levels above, just think of
ap->chan.hdrlen. In my case, it goes from IrLAP to TCP via IrLMP,
IrTTP, IrNET, PPP and IP.

> Could you produce some numbers showing better throughput, fewer
> retransmissions, or whatever, with a smaller transmit queue length?

I don't have numbers, but I don't need numbers to know that.

> Paul.

Jean

2002-03-05 05:46:07

by Paul Mackerras

Subject: Re: PPP feature request (Tx queue len + close)

Jean Tourrilhes writes:

> IrTTP is another problem. If I were to use TCP instead of
> IrTTP, would you still ask me to reduce the window size of TCP ? Let's

Yes, absolutely. :) It just takes an ioctl to do that for TCP.

> try to be fair...
> I'm taking the approach that every little thing helps. There
> is a trivial win in PPP, and I would be stupid to not exploit it.

Given that the default queue length is only 3 packets for PPP, it
seems to me to be a very minor win. I don't think we could reduce it
below 1 packet, and I'm not sure whether that would have other
negative consequences. This is one reason why I asked if you had
tried it.

> I must have missed this option. I'll look again in the pppd
> man page. That may be good enough...

It doesn't exist at the moment, but it would be easy enough to add
it. In the short term, you could even add an ifconfig to your
/etc/ppp/ip-up script to set the transmit queue length there.
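For example, something like this added to /etc/ppp/ip-up would do it ($1 is
the interface name that pppd passes to the script; the value of 1 is just a
guess to experiment with, not a recommendation):
------------------------
# set a short transmit queue on the new ppp interface
ifconfig "$1" txqueuelen 1
------------------------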

> > Could you produce some numbers showing better throughput, fewer
> > retransmissions, or whatever, with a smaller transmit queue length?
>
> I don't have numbers, but I don't need numbers to know that.

Your case for wanting something done will be so much stronger if you
show that there is a measurable benefit as opposed to just a gut
feeling. :)

My gut feeling is that the transmit queue length is already about as
short as we want it, and that if we make it any shorter then we will
start dropping a lot of packets at the transmit queue, and lose
performance because of that. But I could be wrong - any networking
gurus care to comment?

Paul.

2002-03-05 17:45:58

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 04:20:49PM +1100, Paul Mackerras wrote:
> Jean Tourrilhes writes:
>
> > IrTTP is another problem. If I were to use TCP instead of
> > IrTTP, would you still ask me to reduce the window size of TCP ? Let's
>
> Yes, absolutely. :) It just takes an ioctl to do that for TCP.

Up to a certain point. If you reduce TCP to only one buffer, I
don't think it will work properly.
I told you, the IrDA queues are totally under my control, so I
can fix them when I need to, as opposed to PPP... What bugs me is that
each layer has a reasonably sized queue in itself, and that the
problem only appears when we add those layers together...

> > I'm taking the approach that every little thing helps. There
> > is a trivial win in PPP, and I would be stupid to not exploit it.
>
> Given that the default queue length is only 3 packets for PPP, it
> seems to me to be a very minor win. I don't think we could reduce it
> below 1 packet, and I'm not sure whether that would have other
> negative consequences. This is one reason why I asked if you had
> tried it.

No, I didn't try it because it was not obvious how to do it.

> It doesn't exist at the moment, but it would be easy enough to add
> it. In the short term, you could even add an ifconfig to your
> /etc/ppp/ip-up script to set the transmit queue length there.

Will try that.
Actually, this is why I ask you in advance, so that we have
the time to think about it without rushing...

> > > Could you produce some numbers showing better throughput, fewer
> > > retransmissions, or whatever, with a smaller transmit queue length?
> >
> > I don't have numbers, but I don't need numbers to know that.
>
> Your case for wanting something done will be so much stronger if you
> show that there is a measurable benefit as opposed to just a gut
> feeling. :)

Well, it's pretty obvious when watching tcpdump. You see all
the Tx clustered together and then nothing gets transmitted until the TCP window
opens again. In other words, you have a full TCP window queued in PPP
and IrDA.
This is pretty bad for latency. Actually, you can verify that
by doing ping while a TCP connection is active; you will see huge
round trips.
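For example (host name and count are only examples), start a big file
transfer over the link and, in another shell, run:
------------------------
ping -c 20 other-end
------------------------
Compare the round trip times with and without the transfer running and
you will see the buffering.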

> My gut feeling is that the transmit queue length is already about as
> short as we want it, and that if we make it any shorter then we will
> start dropping a lot of packets at the transmit queue, and lose
> performance because of that. But I could be wrong - any networking
> gurus care to comment?

I believe that you won't drop packets, but just flow control
TCP (which in turn will flow control the application). At least, this
is the way it's happening within the IrDA stack.

> Paul.

Have fun...

Jean

2002-03-05 18:14:07

by Max Krasnyansky

Subject: Re: PPP feature request (Tx queue len + close)

Hi folks,

> > It doesn't exist at the moment, but it would be easy enough to add
> > it. In the short term, you could even add an ifconfig to your
> > /etc/ppp/ip-up script to set the transmit queue length there.
>
> Will try that.
> Actually, this is why I ask you in advance, so that we have
>the time to think about it without rushing...
That's exactly what I do to increase the tx queue length:
ifconfig $1 txqueuelen 20

> > > > Could you produce some numbers showing better throughput, fewer
> > > > retransmissions, or whatever, with a smaller transmit queue length?
> > >
> > > I don't have numbers, but I don't need numbers to know that.
> >
> > Your case for wanting something done will be so much stronger if you
> > show that there is a measurable benefit as opposed to just a gut
> > feeling. :)
>
> Well, it's pretty obvious when watching tcpdump. You see all
>the Tx clustered together and then nothing gets transmitted until the TCP window
>opens again. In other words, you have a full TCP window queued in PPP
>and IrDA.
> This is pretty bad for latency. Actually, you can verify that
>by doing ping while a TCP connection is active; you will see huge
>round trips.
Setting txqueuelen to 1 will pretty much kill TCP performance as soon as
the window grows to more than 1 segment, because the net layer just drops
packets if the tx queue is full and TCP will have to re-transmit. Don't
trust your gut feeling ;-).
I did some experiments with PPP over HDR links here at Qualcomm, and I had
to increase the queue just because stuff was dropped even before it reached
the serial driver. I can even claim that for today's fast links like PPPoE,
PPPoATM, HDR, etc., txqueuelen == 3 is way too small.

BTW, you might want to use tcptrace. Watching tcpdump on the fly is no fun.
tcptrace will give you nice graphs of what happens and when.
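Roughly like this (interface and file names are only examples):
------------------------
tcpdump -i ppp0 -w trace.dmp
tcptrace -G trace.dmp
xplot *.xpl
------------------------
-G makes tcptrace generate all the graphs, and xplot displays them.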

> > My gut feeling is that the transmit queue length is already about as
> > short as we want it, and that if we make it any shorter then we will
> > start dropping a lot of packets at the transmit queue, and lose
> > performance because of that. But I could be wrong - any networking
> > gurus care to comment ?
>
> I believe that you won't drop packets, but just flow control
>TCP (which in turn will flow control the application). At least, this
>is the way it's happening within the IrDA stack.
You _will_ drop it if the tx queue is full. TCP will back off and re-transmit,
but this will not allow the TCP window to grow and your TCP performance will
be pretty bad.

I totally agree with Paul. Just decrease buffering below PPP.

Max

2002-03-05 18:29:06

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 10:13:28AM -0800, Maksim Krasnyanskiy wrote:
>
> You _will_ drop it if the tx queue is full. TCP will back off and re-transmit,
> but this will not allow the TCP window to grow and your TCP performance will
> be pretty bad.

Ok, I didn't look at the network code, so I have to take your
word for it. I would have assumed that the logical thing would be to
flow control within the network stack (like it's done in IrDA), but it
seems that I was wrong.

> I totally agree with Paul. Just decrease buffering below PPP.

If what you say is true, I should *increase* the buffering
below PPP to make sure that packets don't get dropped above PPP.
Think about it : for TCP, it doesn't matter whether buffers are
above or below PPP, what matters is only how many there are. TCP can't
tell the difference between buffers at the PPP and at the IrDA level.
Actually, it's probably better to keep the buffers as low as
possible in the stack, because less processing remains to be done on
them before being transmitted.

> Max

Jean

2002-03-05 19:16:47

by James Carlson

Subject: Re: PPP feature request (Tx queue len + close)

Jean Tourrilhes writes:
> If what you say is true, I should *increase* the buffering
> below PPP to make sure that packets don't get dropped above PPP.

No. Decreasing the buffering below PPP is the right path. In
general, if you have link-layer ARQ, you need to have the time
constant be *much* shorter than any RTT estimate that TCP is likely to
see, or you get oscillatory behavior out of TCP.

Running one retransmit-based reliable protocol atop another is usually
a recipe for disaster (as you've found; as others have found by trying
to run PPP over TELNET over the general Internet).

The transport layer (most often TCP) assumes that the network layer
(IP) has minimal (and slowly varying) latency, but is lossy, and thus
that it has minimal buffering and little error control. Anything that
you do that breaks these assumptions is probably the wrong thing to
do. Think "packets" not "streams" below PPP.

http://www.ietf.org/internet-drafts/draft-ietf-pilc-link-arq-issues-03.txt
http://www.ietf.org/rfc/rfc3150.txt
http://www.ietf.org/rfc/rfc3155.txt

--
James Carlson <[email protected]>

2002-03-05 19:27:48

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 02:15:59PM -0500, James Carlson wrote:
> Jean Tourrilhes writes:
> > If what you say is true, I should *increase* the buffering
> > below PPP to make sure that packets don't get dropped above PPP.
>
> No. Decreasing the buffering below PPP is the right path.

Yes, that's what I want to do. But with regard to TCP,
there is no difference whether packets are buffered within PPP or below
PPP. So, reducing buffering in PPP is also a win.

> In
> general, if you have link-layer ARQ, you need to have the time
> constant be *much* shorter than any RTT estimate that TCP is likely to
> see, or you get oscillatory behavior out of TCP.

Yep.

> Running one retransmit-based reliable protocol atop another is usually
> a recipe for disaster (as you've found; as others have found by trying
> to run PPP over TELNET over the general Internet).

Not true. It all depends on the timeframe of those
retransmissions, and how they are triggered. That's why TCP works
properly on 802.11b. Of course, this assumes that the link
retransmissions are designed properly.

> The transport layer (most often TCP) assumes that the network layer
> (IP) has minimal (and slowly varying) latency, but is lossy, and thus
> that it has minimal buffering and little error control.

Not true. Try running TCP on links with 20% packet loss.
Also, any ethernet driver flow controls the stack through
netif_stop/start_queue() to avoid local overruns.

> Anything that
> you do that breaks these assumptions is probably the wrong thing to
> do. Think "packets" not "streams" below PPP.
>
> http://www.ietf.org/internet-drafts/draft-ietf-pilc-link-arq-issues-03.txt
> http://www.ietf.org/rfc/rfc3150.txt
> http://www.ietf.org/rfc/rfc3155.txt

Already read those. Guess what, my name is even in the
acknowledgments! How bizarre ;-)

> James Carlson

Jean

2002-03-05 19:40:27

by Max Krasnyansky

Subject: Re: PPP feature request (Tx queue len + close)


> > You _will_ drop it if the tx queue is full. TCP will back off and re-transmit,
> > but this will not allow the TCP window to grow and your TCP performance will
> > be pretty bad.
>
> Ok, I didn't look at the network code, so I have to take your
>word for it. I would have assumed that the logical thing would be to
>flow control within the network stack (like it's done in IrDA), but it
>seems that I was wrong.
I looked at the code again and tried to trace the TCP xmit path. Seems like
it should not back off, because it does check the return status of
dev_queue_xmit (which sends the packet to the driver).
But it does not seem to retry either. Looks like it just waits for an ack
from the other side, which effectively makes your window equal to 1 segment.
In any case a small PPP queue won't do you any good.

> > I totally agree with Paul. Just decrease buffering below PPP.
>
> If what you say is true, I should *increase* the buffering
>below PPP to make sure that packets don't get dropped above PPP.
I was under the assumption that you know for sure that buffering is bad for you :)

> Think about it : for TCP, it doesn't matter whether buffers are
>above or below PPP, what matters is only how many there are. TCP can't
>tell the difference between buffers at the PPP and at the IrDA level.
> Actually, it's probably better to keep the buffers as low as
>possible in the stack, because less processing remains to be done on
>them before being transmitted.
All this depends on what you want to achieve. If you're looking for max TCP
performance, I'd recommend using tcptrace to see what is actually going on.
Maybe your RTT is too high and you need bigger windows, or maybe there is
something else.

Max

2002-03-05 20:24:10

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 11:39:59AM -0800, Maksim Krasnyanskiy wrote:
>
> I looked at the code again and tried to trace the TCP xmit path. Seems like
> it should not back off, because it does check the return status of
> dev_queue_xmit (which sends the packet to the driver).

Ok. That sounds much more logical than the first explanation.

> But it does not seem to retry either. Looks like it just waits for an ack
> from the other side, which

Ok. The mysteries of TCP/IP implementation.

> effectively makes your window equal to 1 segment. In any case a small PPP
> queue won't do you any good.

Nope. Remember that I have buffers below PPP. The transmit
path within PPP and IrNET is minimal (no framing), so buffers in PPP
and below PPP are logically equivalent.

> > If what you say is true, I should *increase* the buffering
> >below PPP to make sure that packets don't get dropped above PPP.
> I was under the assumption that you know for sure that buffering is bad for you :)

We are running in circles. I want to reduce the amount of buffers
below TCP. This includes PPP buffers and buffers below PPP (both are
logically equivalent).
Both of you are saying "increase buffers at PPP level and
reduce below TCP", but this doesn't make sense, and that's what I was
pointing out. You have to think about the whole stack, not each
individual component.

> > Think about it : for TCP, it doesn't matter whether buffers are
> >above or below PPP, what matters is only how many there are. TCP can't
> >tell the difference between buffers at the PPP and at the IrDA level.
> > Actually, it's probably better to keep the buffers as low as
> >possible in the stack, because less processing remains to be done on
> >them before being transmitted.
>
> All this depends on what you want to achieve. If you're looking for max TCP
> performance, I'd recommend using tcptrace to see what is actually going on.
> Maybe your RTT is too high and you need bigger windows, or maybe there is
> something else.

I get 3.2 Mb/s TCP throughput over a 4 Mb/s IrDA link layer, so
I'm not concerned with max performance. My question is more "how many
buffers can I trim without impacting performance". The goal is to
improve latency and decrease resource consumption.

> Max

Regards,

Jean

2002-03-05 21:18:34

by Max Krasnyansky

Subject: Re: PPP feature request (Tx queue len + close)


> > effectively makes your window equal to 1 segment. In any case a small PPP
> > queue won't do you any good.
>
> Nope. Remember that I have buffers below PPP. The transmit
>path within PPP and IrNET is minimal (no framing), so buffers in PPP
>and below PPP are logically equivalent.
True. PPP is a sort of "pass through" thing in your case.

> > I was under the assumption that you know for sure that buffering is bad for
> > you :)
>
> We are running in circles. I want to reduce the amount of buffers
>below TCP. This includes PPP buffers and buffers below PPP (both are
>logically equivalent).
> Both of you are saying "increase buffers at PPP level and
>reduce below TCP", but this doesn't make sense, and that's what I was
>pointing out. You have to think about the whole stack, not each
>individual component.
Yes. I see your point. It doesn't really make any difference which layer
buffers stuff (unless that layer introduces delays). So I guess in your case
you can just set txqueuelen to 1 if you're sure that underlying layer has long
enough queues.

> > All this depends on what you want to achieve. If you're looking for max TCP
> > performance, I'd recommend using tcptrace to see what is actually
> > going on.
> > Maybe your RTT is too high and you need bigger windows, or maybe there is
> > something else.
>
> I get 3.2 Mb/s TCP throughput over a 4 Mb/s IrDA link layer, so
>I'm not concerned with max performance. My question is more "how many
>buffers can I trim without impacting performance". The goal is to
>improve latency and decrease resource consumption.
I see.
Did you try ifconfig txqueuelen 1 ?

Max

2002-03-05 21:28:01

by Jean Tourrilhes

Subject: Re: PPP feature request (Tx queue len + close)

On Tue, Mar 05, 2002 at 01:17:42PM -0800, Maksim Krasnyanskiy wrote:
>
> True. PPP is a sort of "pass through" thing in your case.

Not fully. In some cases, PPP does compression (and
potentially encryption), and at 4 Mb/s those operations can become
close to bottlenecks (at least, on my slow boxes). But at least, it's
constant latency, as opposed to IrDA latencies.

> Yes. I see your point. It doesn't really make any difference which layer
> buffers stuff (unless that layer introduces delays). So I guess in your case
> you can just set txqueuelen to 1 if you're sure that underlying layer has long
> enough queues.

By the way, the same logic applies to PAN. With PAN it's easier:
because PAN is a pseudo-Ethernet driver, you can fudge tx_queue_len
directly.

> I see.
> Did you try ifconfig txqueuelen 1 ?

Not yet. I'm finishing the current batch of IrDA fixes, and
the other backlog of Wireless patches. I would also need to squeeze
buffers out of the IrDA queues.

> Max

Thanks, have fun...

Jean

2002-03-05 21:49:53

by James Carlson

Subject: Re: PPP feature request (Tx queue len + close)

Jean Tourrilhes writes:
> > No. Decreasing the buffering below PPP is the right path.
>
> Yes, that's what I want to do. But with regard to TCP,
> there is no difference whether packets are buffered within PPP or below
> PPP. So, reducing buffering in PPP is also a win.

True. Actually, except for MP reassembly, there should be *no*
buffering in PPP at all. If there is on your platform (I'm much more
familiar with Solaris than with Linux), then that's certainly an odd
design problem.

I've seen the deep buffering problem before in other contexts: older
BSD stacks had all output ifq_len values set to 50, even if the links
were really slow (<9600bps), and this often triggered retransmits.
TCP congestion detection *depends* on packet loss. Loss is a good
thing.

> > Running one retransmit-based reliable protocol atop another is usually
> > a recipe for disaster (as you've found; as others have found by trying
> > to run PPP over TELNET over the general Internet).
>
> Not true. It all depends on the timeframe of those
> retransmissions, and how they are triggered. That's why TCP works
> properly on 802.11b. Of course, this assumes that the link
> retransmissions are designed properly.

That's still exactly what I said in that message, just restated a
different way:

In general, if you have link-layer ARQ, you need to have the
time constant be *much* shorter than any RTT estimate that TCP
is likely to see, or you get oscillatory behavior out of TCP.

In other words, link layer ARQ should be minimally persistent and done
only if the retransmit interval is much shorter than TCP's RTT
estimate. If it's not, then you have a controlled disaster. This has
been demonstrated before with PPP-over-TCP hacks.

> > The transport layer (most often TCP) assumes that the network layer
> > (IP) has minimal (and slowly varying) latency, but is lossy, and thus
> > that it has minimal buffering and little error control.
>
> Not true. Try running TCP on links with 20% packet loss.
> Also, any ethernet driver flow controls the stack through
> netif_stop/start_queue() to avoid local overruns.

I said "lossy," not "high error rate." There's quite a difference
between the two. TCP finds the one (by definition) bottleneck in the
path by finding the point where packets drop and optimizing around
that. It just won't do that if there aren't losses, and the window
will open until the link-layer queue becomes a serious stability
problem.

(If the link can push back in Linux with local flow control, then the
question becomes: why doesn't that work with this application? Is
something missing from the IrDA interface or the PPP kernel bits that
prevent this from working right? And if it's the latter, why don't
regular serial users see the problem?)

(You can do better by dropping packets *earlier* -- see RED.)

> Already read those. Guess what, my name is even in the
> acknowledgments! How bizarre ;-)

*Blush* I somehow forgot about your postings among the flood of draft
updates from Phil and odd flame-wars. :-/

--
James Carlson <[email protected]>

2002-03-05 22:50:38

by Bill Davidsen

Subject: Re: PPP feature request (Tx queue len + close)

On Mon, 4 Mar 2002, Jean Tourrilhes wrote:

> Tx queue length
> ---------------
> Problem : IrDA does its buffering (IrTTP is a sliding window
> protocol). PPP does its buffering (1 packet in ppp_generic +
> dev->tx_queue_len = 3). End result : a large number of packets queued
> for transmission, which results in some network performance issues.
>
> Solution : could we allow the PPP channel to overwrite
> dev->tx_queue_len ?
> This is similar to the channel being able to set the MTUs and
> other parameters...

Random thoughts on this:
- ifconfig sets txqueuelen, and could channels get into contention?
- if you reduce buffers too far performance sucks.
- did you look at just reducing the packet size (MTU)?

You should really use the above methods to diddle parameters and
benchmark. If nothing else, you can point to numbers as a reason to make
any change.
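For example (the values are only starting points, not recommendations):
------------------------
ifconfig ppp0 txqueuelen 1
ifconfig ppp0 mtu 1000
------------------------
Then rerun your throughput and latency tests and compare against the
defaults before asking for kernel changes.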

--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.