2001-07-11 22:09:35

by Davide Libenzi

Subject: Improving (network) IO performance ...


For those interested, I set up a page that describes the test, contains links to
the patches and software used for the test, and shows some fancy graphs too:

http://www.xmailserver.org/linux-patches/nio-improve.html




- Davide


2001-07-12 00:35:54

by Dan Kegel

Subject: Re: Improving (network) IO performance ...

Very cool. Thanks for doing a no-scan implementation of /dev/poll!
Two questions:

1) have you compared its performance against Vitaly Luban's
signal-per-fd patch? Even though it's realtime-signal based,
there's some hope for it being quite efficient. See
http://www.luban.org/GPL/gpl.html and
http://boudicca.tux.org/hypermail/linux-kernel/2001week20/1353.html

2) A little birdie told me that someone had gotten a freebsd
box to handle something like half a million connections.
I would like to see you extend the horizontal axis of your graph
by a couple orders of magnitude :-)

Thanks,
Dan

p.s. I have updated http://www.kegel.com/c10k.html#nb./dev/poll
with a link to your report.

2001-07-12 05:05:05

by Davide Libenzi

Subject: Re: Improving (network) IO performance ...


On 12-Jul-2001 Dan Kegel wrote:
> Very cool. Thanks for doing a no-scan implementation of /dev/poll!
> Two questions:
>
> 1) have you compared its performance against Vitaly Luban's
> signal-per-fd patch? Even though it's realtime-signal based,
> there's some hope for it being quite efficient. See
> http://www.luban.org/GPL/gpl.html and
> http://boudicca.tux.org/hypermail/linux-kernel/2001week20/1353.html

There's more than event collapsing inside the patch.
I saw Luban's work, but I decided to use the Lever-Provos /dev/poll as a
performance reference ( and the old poll(), obviously ).
That's because I've read papers where RT-signal implementations turned out to
perform worse than /dev/poll.
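
Just to make clear what the poll() curve measures, the baseline is essentially a
loop like the one below ( a sketch of the idea only, not the actual test program;
handle_event() is a made-up handler ). Every call hands the whole fd array to the
kernel and then walks it again in user space, so the cost grows with the number of
registered fds even when only a few of them are ready.

#include <poll.h>

void poll_loop(struct pollfd *fds, int nfds)
{
	int i, n;

	for (;;) {
		n = poll(fds, nfds, -1);	/* kernel scans all nfds entries */
		if (n <= 0)
			continue;
		/* user space scans them all again to find the ready ones */
		for (i = 0; i < nfds && n > 0; i++) {
			if (fds[i].revents) {
				/* handle_event(&fds[i]); -- made-up handler */
				fds[i].revents = 0;
				n--;
			}
		}
	}
}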


> 2) A little birdie told me that someone had gotten a freebsd
> box to handle something like half a million connections.
> I would like to see you extend the horizontal axis of your graph
> by a couple orders of magnitude :-)

Here you can find the new statistics with 16000 connections:

http://www.xmailserver.org/linux-patches/nio-improve.html

I cannot reach that number of connections on the test machine because the socket
buffer space would eat all my memory ( 128 MB ).
Anyway, the graph speaks quite clearly about the trend toward greater numbers of
connections.
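
To put a rough number on it ( only an estimate, since the real per-connection cost
depends on the kernel's default buffer sizes and on how much data each socket is
holding ): with send plus receive buffers in the tens of KB, a few thousand
connections holding queued data already account for the whole 128 MB. The defaults
can be checked with getsockopt(); print_buf_sizes() below is just a made-up helper
name.

#include <stdio.h>
#include <sys/socket.h>

static void print_buf_sizes(int sock)
{
	int rcv = 0, snd = 0;
	socklen_t len = sizeof(rcv);

	getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcv, &len);
	len = sizeof(snd);
	getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &snd, &len);
	printf("rcvbuf=%d sndbuf=%d -> up to %d bytes per connection\n",
	       rcv, snd, rcv + snd);
}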


> p.s. I have updated http://www.kegel.com/c10k.html#nb./dev/poll
> with a link to your report.

Thanks, the page is a work in progress anyway.




- Davide

2001-07-12 15:14:51

by Dan Kegel

Subject: Re: Improving (network) IO performance ...

Davide Libenzi wrote:
>
> On 12-Jul-2001 Dan Kegel wrote:
> > Very cool. Thanks for doing a no-scan implementation of /dev/poll!
> > Two questions:
> >
> > 1) have you compared its performance against Vitaly Luban's
> > signal-per-fd patch? Even though it's realtime-signal based,
> > there's some hope for it being quite efficient. See
> > http://www.luban.org/GPL/gpl.html and
> > http://boudicca.tux.org/hypermail/linux-kernel/2001week20/1353.html
>
> There's more than event collapsing inside the patch.
> I saw Luban's work, but I decided to use the Lever-Provos /dev/poll as a
> performance reference ( and the old poll(), obviously ).
> That's because I've read papers where RT-signal implementations turned out to
> perform worse than /dev/poll.

It was absolutely great that you benchmarked it relative to the Lever-Provos
/dev/poll, since that made it quite clear yours scaled much better than theirs
to large numbers of idle sockets.

Perhaps somebody who uses realtime-signal-based readiness notifications
can enhance Davide's benchmark to also cover them (with and without Luban's patch)?
That would help those of us who are trying to decide whether to go with
/dev/poll or realtime signals.
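
For anyone who wants to try that: the setup is just the stock Linux
F_SETSIG / F_SETOWN / O_ASYNC machinery, roughly as sketched below (this shows the
generic mechanism only, not anything from Luban's patch; arm_fd() and event_loop()
are made-up names). Each ready fd is reported by a queued realtime signal carrying
the fd in si_fd, with plain SIGIO as the fallback when the signal queue overflows.

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

#define RTSIG_IO (SIGRTMIN + 1)

static void arm_fd(int fd)
{
	fcntl(fd, F_SETOWN, getpid());		/* deliver the signals to us */
	fcntl(fd, F_SETSIG, RTSIG_IO);		/* queued RT signal instead of plain SIGIO */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);
}

static void event_loop(void)
{
	sigset_t set;
	siginfo_t info;

	sigemptyset(&set);
	sigaddset(&set, RTSIG_IO);
	sigaddset(&set, SIGIO);			/* delivered if the RT queue overflows */
	sigprocmask(SIG_BLOCK, &set, NULL);

	for (;;) {
		if (sigwaitinfo(&set, &info) < 0)
			continue;
		if (info.si_signo == SIGIO) {
			/* queue overflow: fall back to one full poll() scan here */
			continue;
		}
		/* info.si_fd is the ready fd, info.si_band the poll events */
	}
}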

> > 2) A little birdie told me that someone had gotten a freebsd
> > box to handle something like half a million connections.
> > I would like to see you extend the horizontal axis of your graph
> > by a couple orders of magnitude :-)
>
> Here you can find the new statistics with 16000 connections:
>
> http://www.xmailserver.org/linux-patches/nio-improve.html
>
> I cannot reach that number of connections on the test machine because the socket
> buffer space would eat all my memory ( 128 MB ).
> Anyway, the graph speaks quite clearly about the trend toward greater numbers of
> connections.

Yes, in fact, it shows that adding more dead connections actually
improves performance using your patch :-)

Here's what the little birdie told me exactly:
> You'll be happy to know I've achieved over 500,000 connections
> on Pentium hardware with 4G of RAM and 2 1Gbit cards on FreeBSD 4.3.

I think the application was similar to your benchmark configuration;
no idea what the ratio of live to dead connections was.
Let's see: if you have 1/32nd as much RAM as she had, you ought to
be able to handle 1/32nd as many connections, right? Since you
hit 16000 connections, I guess you just about did.
- Dan

2001-07-12 15:32:54

by Davide Libenzi

Subject: Re: Improving (network) IO performance ...


On 12-Jul-2001 Dan Kegel wrote:
> Yes, in fact, it shows that adding more dead connections actually
> improves performance using your patch :-)
>
> Here's what the little birdie told me exactly:
>> You'll be happy to know I've achieved over 500,000 connections
>> on Pentium hardware with 4G of RAM and 2 1Gbit cards on FreeBSD 4.3.
>
> I think the application was similar to your benchmark configuration;
> no idea what the ratio of live to dead connections was.
> Let's see: if you have 1/32nd as much RAM as she had, you ought to
> be able to handle 1/32nd as many connections, right? Since you
> hit 16000 connections, I guess you just about did.

I've made a number of changes to the patch:

1) moved the file callback list handling from fs/file.c to fs/fcblist.c

2) moved the functions definitions from include/linux/file.h to
include/linux/fcblist.h

3) added a new kernel config param CONFIG_FCBLIST

4) renamed the patch from /dev/poll to /dev/epoll ( event poll )

5) renamed the devpoll.c(.h) files into eventpoll.c(.h)

6) made CONFIG_EPOLL dependent on CONFIG_FCBLIST

7) fixed a locking issue on SMP

8) kmalloc/vmalloc switch for big chunks of memory ( see the sketch after this list )

9) increased the maximum number of fds to 128000 ( maybe I'll change this to be
unbounded )
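
The switch in point 8 is just the usual kernel idiom sketched below ( a sketch only;
ep_alloc_big() / ep_free_big() are made-up names, not the actual functions in the
patch ): small allocations go through kmalloc(), and big ones, where kmalloc() would
need large physically contiguous chunks, fall back to vmalloc().

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

/* small: physically contiguous kmalloc(); big: vmalloc() */
static void *ep_alloc_big(unsigned long size)
{
	if (size <= PAGE_SIZE)
		return kmalloc(size, GFP_KERNEL);
	return vmalloc(size);
}

/* free with the routine matching the one used for the allocation */
static void ep_free_big(void *ptr, unsigned long size)
{
	if (size <= PAGE_SIZE)
		kfree(ptr);
	else
		vfree(ptr);
}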


The new stuff will be published today at the same link:

http://www.xmailserver.org/linux-patches/nio-improve.html

As for the old /dev/poll patch, it seems to have problems when the number of
connections goes over 8000-9000.
I don't know what it could be, even though I've looked deeply inside the patch.
Maybe Niels or Charles can be more precise about this issue.




- Davide