I got a bug report from a user who got this message in his logs:
lockd: too many open TCP sockets, consider increasing number of nfsd
threads.
...lockd also started refusing connections at this point. He was
apparently doing some testing with a highly contended lock. lockd
started refusing connections after the first 80 and started printing
this warning. He tried increasing the number of nfsd threads, which of
course didn't do any good. This patch removes the "nfsd" from the
message to make this a little less confusing.
There is still an artificial limit of 80 concurrent clients with lockd.
svc_check_conn_limits has this hardcoded check:
if (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20) {
...my feeling is that we need to either raise the number or eliminate
this check for single-threaded services like lockd. I'd first like to
understand the rationale for setting the limit here, however. Can anyone
clarify?
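To spell out that arithmetic, here's a quick userspace sketch. serv_stub
and over_limit are made-up stand-ins for the svc_serv fields and the
in-kernel check, not real interfaces:

	#include <stdio.h>

	/* stand-ins for the two svc_serv fields the check reads */
	struct serv_stub {
		unsigned int sv_nrthreads;	/* worker threads for the service */
		unsigned int sv_tmpcnt;		/* temporary (i.e. TCP) sockets */
	};

	static int over_limit(const struct serv_stub *serv)
	{
		/* the same hardcoded formula as svc_check_conn_limits */
		return serv->sv_tmpcnt > (serv->sv_nrthreads + 3) * 20;
	}

	int main(void)
	{
		/* lockd runs a single thread: (1 + 3) * 20 = 80, so the
		 * 81st concurrent connection trips the check */
		struct serv_stub lockd = { .sv_nrthreads = 1, .sv_tmpcnt = 81 };

		printf("limit=%u, over=%d\n",
		       (lockd.sv_nrthreads + 3) * 20, over_limit(&lockd));
		return 0;
	}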
Signed-off-by: Jeff Layton <[email protected]>
---
net/sunrpc/svc_xprt.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index bf5b5cd..340f549 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -536,7 +536,7 @@ static void svc_check_conn_limits(struct svc_serv *serv)
/* Try to help the admin */
printk(KERN_NOTICE "%s: too many open "
"connections, consider increasing the "
- "number of nfsd threads\n",
+ "number of threads\n",
serv->sv_name);
}
/*
--
1.5.5.1
On Wed, 24 Sep 2008 17:57:42 -0400
"J. Bruce Fields" <[email protected]> wrote:
> On Fri, Sep 12, 2008 at 09:12:07AM -0400, Jeff Layton wrote:
> > I got a bug report from a user who got this message in his logs:
> >
> > lockd: too many open TCP sockets, consider increasing number of nfsd
> > threads.
> >
> > ...lockd also started refusing connections at this point. He was
> > apparently doing some testing with a highly contended lock. lockd
> > started refusing connections after the first 80 and started printing
> > this warning. He tried increasing the number of nfsd threads, which of
> > course didn't do any good. This patch removes the "nfsd" from the
> > message to make this a little less confusing.
> >
> > There is still an artificial limit of 80 concurrent clients with lockd.
> > svc_check_conn_limits has this hardcoded check:
> >
> > if (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20) {
> >
> > ...my feeling is that we need to either raise the number or eliminate
> > this check for single-threaded services like lockd. I'd first like to
> > understand the rationale for setting the limit here, however. Can anyone
> > clarify?
>
> No idea, but yes, this is a problem.
>
> Brainstorming other options:
>
> - add a new sv_maxconnections field, give it a better default,
> and maybe make it tunable some day? (Oh goody, another knob
> to twiddle).
Ugh. Another tunable... :)
It would be nice to avoid this if at all possible.
> - implement the suggestion in the comment above this function
> and limit connections per ip address. I guess the idea would
> be to prevent a single buggy client from bringing everyone
> down. Is that really likely? Results in the presence of NAT
> could be hard to debug.
Yes, this seems somewhat unreliable. Still might be reasonable in
the absence of better ideas.
> - Base the limit on available memory instead of number of
> threads?
Possibly, but this would probably need to be tunable. If we do that
we might as well implement sv_maxconnections and just base the default
value on the amount of memory...
> - Kill the check entirely? It'd help to know whether it was
> originally prompted by some specific situation....
>
Another possibility is to just eliminate this check on single threaded
services. Basically change this:
if (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20)
...to...
if (serv->sv_nrthreads > 1 &&
(serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20))
...that does mean that a single-threaded nfsd wouldn't get this
warning, but not many people run just 1 nfsd thread. I think all the
other in-tree RPC services are single threaded so they'd avoid this
limitation.
I agree though -- it would be nice to know what this check was
actually intended to do before we go changing it around. It looks
like this has been in place for a long time...
Some historical commits on this that might help clarify:
http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commitdiff;h=b114cbb6ee9ad47434ebbfadff9ec5c9c68d4cff
http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commitdiff;h=03c1bdb317baa0bde755a4587bfa6715d8ee6ea7
http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commitdiff;h=66f4554afe47eabe4993cd308c02b16ff9b6b06b
At a cursory glance, it looks like this check might be keeping us
from hitting other resource constraints. So if we remove it, we
may need to increase buffer sizes etc...
cc'ing Neil since his name is on some of the earlier patches. Neil,
any thoughts on how best to deal with this?
--
Jeff Layton <[email protected]>
On Fri, Sep 12, 2008 at 09:12:07AM -0400, Jeff Layton wrote:
> I got a bug report from a user who got this message in his logs:
>
> lockd: too many open TCP sockets, consider increasing number of nfsd
> threads.
>
> ...lockd also started refusing connections at this point. He was
> apparently doing some testing with a highly contended lock. lockd
> started refusing connections after the first 80 and started printing
> this warning. He tried increasing the number of nfsd threads, which of
> course didn't do any good. This patch removes the "nfsd" from the
> message to make this a little less confusing.
>
> There is still an artificial limit of 80 concurrent clients with lockd.
> svc_check_conn_limits has this hardcoded check:
>
> if (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20) {
>
> ...my feeling is that we need to either raise the number or eliminate
> this check for single-threaded services like lockd. I'd first like to
> understand the rationale for setting the limit here, however. Can anyone
> clarify?
No idea, but yes, this is a problem.
Brainstorming other options:
- add a new sv_maxconnections field, give it a better default,
and maybe make it tunable some day? (Oh goody, another knob
to twiddle).
- implement the suggestion in the comment above this function
and limit connections per ip address. I guess the idea would
be to prevent a single buggy client from bringing everyone
down. Is that really likely? Results in the presence of NAT
could be hard to debug.
- Base the limit on available memory instead of number of
threads?
- Kill the check entirely? It'd help to know whether it was
originally prompted by some specific situation....
--b.
>
> Signed-off-by: Jeff Layton <[email protected]>
> ---
> net/sunrpc/svc_xprt.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index bf5b5cd..340f549 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -536,7 +536,7 @@ static void svc_check_conn_limits(struct svc_serv *serv)
> /* Try to help the admin */
> printk(KERN_NOTICE "%s: too many open "
> "connections, consider increasing the "
> - "number of nfsd threads\n",
> + "number of threads\n",
> serv->sv_name);
> }
> /*
> --
> 1.5.5.1
>
On Thu, 25 Sep 2008 16:23:15 -0400
Jeff Layton <[email protected]> wrote:
> On Wed, 24 Sep 2008 17:57:42 -0400
> "J. Bruce Fields" <[email protected]> wrote:
>
> > On Fri, Sep 12, 2008 at 09:12:07AM -0400, Jeff Layton wrote:
> > > I got a bug report from a user who got this message in his logs:
> > >
> > > lockd: too many open TCP sockets, consider increasing number of nfsd
> > > threads.
> > >
> > > ...lockd also started refusing connections at this point. He was
> > > apparently doing some testing with a highly contended lock. lockd
> > > started refusing connections after the first 80 and started printing
> > > this warning. He tried increasing the number of nfsd threads, which of
> > > course didn't do any good. This patch removes the "nfsd" from the
> > > message to make this a little less confusing.
> > >
> > > There is still an artificial limit of 80 concurrent clients with lockd.
> > > svc_check_conn_limits has this hardcoded check:
> > >
> > > if (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20) {
> > >
> > > ...my feeling is that we need to either raise the number or eliminate
> > > this check for single-threaded services like lockd. I'd first like to
> > > understand the rationale for setting the limit here, however. Can anyone
> > > clarify?
> >
> > No idea, but yes, this is a problem.
> >
> > Brainstorming other options:
> >
> > - add a new sv_maxconnections field, give it a better default,
> > and maybe make it tunable some day? (Oh goody, another knob
> > to twiddle).
>
> Ugh. Another tunable... :)
>
> It would be nice to avoid this if at all possible.
>
> > - implement the suggestion in the comment above this function
> > and limit connections per ip address. I guess the idea would
> > be to prevent a single buggy client from bringing everyone
> > down. Is that really likely? Results in the presence of NAT
> > could be hard to debug.
>
> Yes, this seems somewhat unreliable. Still might be reasonable in
> the absence of better ideas.
>
Now that I think about it, I don't think this would have helped the
person who reported this case. He said that he had ~200 clients
trying to lock the same file. Even if each client just has a single
TCP connection, he's still well over the limit of 80 here. I think
we can strike this one from the list of potential fixes.
> > - Base the limit on available memory instead of number of
> > threads?
>
> Possibly, but this would probably need to be tunable. If we do that
> we might as well implement sv_maxconnections and just base the default
> value on the amount of memory...
>
Any thoughts on what a reasonable heuristic would be here?
> > - Kill the check entirely? It'd help to know whether it was
> > originally prompted by some specific situation....
> >
>
> Another possibility is to just eliminate this check on single threaded
> services. Basically change this:
>
> if (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20)
>
> ...to...
>
> if (serv->sv_nrthreads > 1 &&
> (serv->sv_tmpcnt > (serv->sv_nrthreads+3)*20))
>
> ...that does mean that a single-threaded nfsd wouldn't get this
> warning, but not many people run just 1 nfsd thread. I think all the
> other in-tree RPC services are single threaded so they'd avoid this
> limitation.
>
> I agree though -- it would be nice to know what this check was
> actually intended to do before we go changing it around. It looks
> like this has been in place for a long time...
>
> Some historical commits on this that might help clarify:
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commitdiff;h=b114cbb6ee9ad47434ebbfadff9ec5c9c68d4cff
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commitdiff;h=03c1bdb317baa0bde755a4587bfa6715d8ee6ea7
>
> http://git.kernel.org/?p=linux/kernel/git/torvalds/old-2.6-bkcvs.git;a=commitdiff;h=66f4554afe47eabe4993cd308c02b16ff9b6b06b
>
> At a cursory glance, it looks like this check might be keeping us
> from hitting other resource constraints. So if we remove it, we
> may need to increase buffer sizes etc...
>
> cc'ing Neil since his name is on some of the earlier patches. Neil,
> any thoughts on how best to deal with this?
>
From the descriptions in those commits, it looks like the check
in svc_check_conn_limits was intended to prevent messages from too
many sockets overloading the receive buffers.
We size the UDP receive buffers by the number of threads:
(serv->sv_nrthreads+3) * serv->sv_max_mesg
TCP receive buffers are statically sized fairly small and don't ever
seem to change since sv_max_mesg is pretty much set at server
creation time:
3 * serv->sv_max_mesg
...the comments in svc_tcp_recvfrom explain the reason for it.
Given all of this, it seems reasonable to eliminate the check
entirely for TCP. For UDP, it looks like we expect that 1 buffer
can handle up to 20 connections. I'm not sure where this ratio
comes from though...
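To put numbers on those two rules, here's a trivial userspace sketch.
The sv_max_mesg value is just an assumption for illustration; the real
value depends on the block size chosen when the server is created:

	#include <stdio.h>

	int main(void)
	{
		unsigned int sv_nrthreads = 1;		/* lockd */
		unsigned int sv_max_mesg = 32 * 1024;	/* assumed, for illustration */

		/* UDP receive buffer: scales with the thread count */
		unsigned int udp_rcvbuf = (sv_nrthreads + 3) * sv_max_mesg;

		/* TCP receive buffer: fixed small multiple of the message size */
		unsigned int tcp_rcvbuf = 3 * sv_max_mesg;

		printf("udp rcvbuf=%u, tcp rcvbuf=%u\n", udp_rcvbuf, tcp_rcvbuf);
		return 0;
	}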
Anyone else have thoughts on this?
--
Jeff Layton <[email protected]>
On Wed, Oct 15, 2008 at 08:14:57AM -0400, Jeff Layton wrote:
> From the descriptions in those commits, it looks like the check
> in svc_check_conn_limits was intended to prevent messages from too
> many sockets overloading the receive buffers.
Aren't the receive buffers per-connection?
--b.
> We size the UDP receive buffers by the number of threads:
>
> (serv->sv_nrthreads+3) * serv->sv_max_mesg
>
> TCP receive buffers are statically sized fairly small and don't ever
> seem to change since sv_max_mesg is pretty much set at server
> creation time:
>
> 3 * serv->sv_max_mesg
>
> ...the comments in svc_tcp_recvfrom explain the reason for it.
>
> Given all of this, it seems reasonable to eliminate the check
> entirely for TCP. For UDP, it looks like we expect that 1 buffer
> can handle up to 20 connections. I'm not sure where this ratio
> comes from though...
>
> Anyone else have thoughts on this?
>
> --
> Jeff Layton <[email protected]>
On Wed, 15 Oct 2008 16:21:02 -0400
"J. Bruce Fields" <[email protected]> wrote:
> On Wed, Oct 15, 2008 at 08:14:57AM -0400, Jeff Layton wrote:
> > From the descriptions in those commits, it looks like the check
> > in svc_check_conn_limits was intended to prevent messages from too
> > many sockets overloading the receive buffers.
>
> Aren't the receive buffers per-connection?
>
> --b.
>
Oof...you're quite right. I misunderstood the code earlier. Thinking
this through a bit more...
I suppose the main worry is that we have a service with too few
threads and a ton of sockets. We end up in a situation where the
receive buffers aren't getting processed quickly enough and connections
start stalling out.
I suppose the only real remedy for that is to increase the number of
threads, but that's not an option for services like lockd. So maybe
ignoring this check on single-threaded services is the way to go after
all? I don't see a way to make this auto-tuning based on memory since
that doesn't seem to be what the check is intended to help...
--
Jeff Layton <[email protected]>
On Thu, October 16, 2008 11:51 am, Jeff Layton wrote:
> On Wed, 15 Oct 2008 16:21:02 -0400
> "J. Bruce Fields" <[email protected]> wrote:
>
>> On Wed, Oct 15, 2008 at 08:14:57AM -0400, Jeff Layton wrote:
>> > From the descriptions in those commits, it looks like the check
>> > in svc_check_conn_limits was intended to prevent messages from too
>> > many sockets overloading the receive buffers.
>>
>> Aren't the receive buffers per-connection?
>>
>> --b.
>>
>
> Oof...you're quite right. I misunderstood the code earlier. Thinking
> this through a bit more...
>
> I suppose the main worry is that we have a service with too few
> threads and a ton of sockets. We end up in a situation where the
> receive buffers aren't getting processed quickly enough and connections
> start stalling out.
>
> I suppose the only real remedy for that is to increase the number of
> threads, but that's not an option for services like lockd. So maybe
> ignoring this check on single-threaded services is the way to go after
> all? I don't see a way to make this auto-tuning based on memory since
> that doesn't seem to be what the check is intended to help...
Don't expect too much of the "intent" of this limit.
I think it just "seemed like a good idea at the time" with some idea that
a denial of service attack could swamp the NFS server with connections
if we didn't actively close some when there was a large number of them.
The patch which added the test is
http://git.kernel.org/?p=linux/kernel/git/tglx/history.git;a=commitdiff;h=afdb4fa2b04a7e90e0746bc3d031a552656c7709
which says nothing about why.
I found an old Email with a list of things I needed to do to be happy
with TCP support and it included:
- Guard against denial of service - impose some limit on the number of
incoming connections, and start randomly dropping connections when
this limit is exceeded.
Which again isn't very helpful.
I certainly wasn't thinking about the possibility of lockd getting
lots of connections despite being a single-threaded service.
Maybe we should use
current->signal->rlim[RLIMIT_NOFILE].rlim_cur
as a minimum limit, i.e. if the number of connections is below this,
allow the connection. If the number of connections is below the
currently calculated value, also allow the connection.
Only reject if the number is greater than both of these limits.
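Something like this, perhaps. A userspace model of that policy, with
getrlimit() standing in for the current->signal->rlim lookup
(over_both_limits is a made-up name, not a proposed interface):

	#include <stdbool.h>
	#include <stdio.h>
	#include <sys/resource.h>

	/* reject a new connection only when the socket count exceeds BOTH
	 * the thread-scaled limit and the RLIMIT_NOFILE soft limit */
	static bool over_both_limits(unsigned int tmpcnt, unsigned int nrthreads)
	{
		struct rlimit rl;

		if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
			return false;	/* can't read the limit; don't reject */

		return tmpcnt > (nrthreads + 3) * 20 &&
		       (rlim_t)tmpcnt > rl.rlim_cur;
	}

	int main(void)
	{
		/* 100 sockets on one thread: over the scaled limit of 80, but
		 * under a typical NOFILE soft limit of 1024, so still allowed */
		printf("reject=%d\n", over_both_limits(100, 1));
		return 0;
	}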
One problem there is that I don't think you can adjust the RLIMIT_NOFILE
limit for nfsd. It (I think) shares this number with init and all other
kernel threads.
NeilBrown
There is a related resource issue for the NFSv4.1 server DRC where the
server guarantees a per session DRC cache size. The server needs to
determine how much memory to assign to each session, and will
therefore need to limit the number of connections based not only on
the TCP buffer size, but also on memory. I'm writing the DRC code, and
I'm looking for suggestions on how to manage the per-session memory
resource. Any thoughts welcome.
Thanks
-->Andy
On Thu, Oct 16, 2008 at 8:14 PM, Neil Brown <[email protected]> wrote:
> On Thursday October 16, [email protected] wrote:
>>
>> Thanks for the info Neil, that helps clarify this...
>>
>> Using RLIMIT_NOFILE is an interesting idea. From a cursory look at the
>> code, the default for RLIMIT_NOFILE looks like it's generally 1024.
>> We'll have to figure that this limit will effectively act as a limit on
>> the number of concurrent lockd clients. It's not too hard to imagine a
>> server with more clients than this (think of a large compute cluster).
>
> If all those clients used UDP, this would not be a problem. While I
> see the value of TCP for NFS, it doesn't seem as convincing for NLM.
>
> But I don't expect we have the luxury of insisting that clients use
> UDP for locking :-(
>
>>
> > The problem, as you mention, is that that limit won't be easily tunable.
>> I think we need some mechanism for an admin to tune this limit. It
>> doesn't have to be tunable on the fly, but shouldn't require a kernel
>> rebuild. We could eliminate this check for single-threaded services
>> entirely, but I suppose that leaves the door open for DoS attacks
>> against those services.
>>
>> Maybe the best thing is to go with Bruce's idea and add a sv_maxconn
>> field to the svc_serv struct. We could make that default to the max of
>> RLIMIT_NOFILE rlim_cur value or the currently calculated value.
>> Eventually we could add a mechanism to allow someone to tune that
>> value. A module parameter would probably be fine for lockd. We might
>> even want to set the limit lower for things like the nfsv4 callback
>> thread.
>>
>> Thoughts?
>
> A per-service setting that defaults to something reasonable like your
> suggestions and can be overridden by a module parameter sounds like a
> good idea.
> If you change the module parameter via
> /sys/module/lockd/parameters/max_connections
> then it wouldn't take effect until the service were stopped and restarted,
> but I expect that is acceptable (and could probably be 'fixed' if really
> needed).
>
> NeilBrown
On Fri, Oct 17, 2008 at 11:14:29AM +1100, Neil Brown wrote:
> On Thursday October 16, [email protected] wrote:
> >
> > Thanks for the info Neil, that helps clarify this...
> >
> > Using RLIMIT_NOFILE is an interesting idea. From a cursory look at the
> > code, the default for RLIMIT_NOFILE looks like it's generally 1024.
> > We'll have to figure that this limit will effectively act as a limit on
> > the number of concurrent lockd clients. It's not too hard to imagine a
> > server with more clients than this (think of a large compute cluster).
>
> If all those clients used UDP, this would not be a problem. While I
> see the value of TCP for NFS, it doesn't seem as convincing for NLM.
>
> But I don't expect we have the luxury of insisting that clients use
> UDP for locking :-(
>
> >
> > The problem, as you mention, is that that limit won't be easily tunable.
> > I think we need some mechanism for an admin to tune this limit. It
> > doesn't have to be tunable on the fly, but shouldn't require a kernel
> > rebuild. We could eliminate this check for single-threaded services
> > entirely, but I suppose that leaves the door open for DoS attacks
> > against those services.
> >
> > Maybe the best thing is to go with Bruce's idea and add a sv_maxconn
> > field to the svc_serv struct. We could make that default to the max of
> > RLIMIT_NOFILE rlim_cur value or the currently calculated value.
> > Eventually we could add a mechanism to allow someone to tune that
> > value. A module parameter would probably be fine for lockd. We might
> > even want to set the limit lower for things like the nfsv4 callback
> > thread.
> >
> > Thoughts?
>
> A per-service setting that defaults to something reasonable like your
> suggestions and can be overridden by a module parameter sounds like a
> good idea.
> If you change the module parameter via
> /sys/module/lockd/parameters/max_connections
> then it wouldn't take effect until the service were stopped and restarted,
> but I expect that is acceptable (and could probably be 'fixed' if really
> needed).
Sounds fine to me.
And I'd probably fix the defaults now and then hold off adding
max_connections till it'd been seen to be needed.
--b.
On Fri, 17 Oct 2008 14:20:51 -0400
"J. Bruce Fields" <[email protected]> wrote:
> > A per-service setting that defaults to something reasonable like your
> > suggestions and can be overridden by a module parameter sounds like a
> > good idea.
> > If you change the module parameter via
> > /sys/module/lockd/parameters/max_connections
> > then it wouldn't take effect until the service were stopped and restarted,
> > but I expect that is acceptable (and could probably be 'fixed' if really
> > needed).
>
> Sounds fine to me.
>
> And I'd probably fix the defaults now and then hold off adding
> max_connections till it'd been seen to be needed.
>
> --b.
I just posted a small patchset that implements this. I ended up adding
a module parameter for it since it was easy to do. I've only done basic
smoke testing with it, however. If it looks OK, I'll see if I can get
the guy who originally reported this to test it out.
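For anyone following along, the shape of the change is roughly the
sketch below; the parameter name and permissions here are illustrative
rather than lifted verbatim from the patchset:

	#include <linux/module.h>

	/* 0 means "fall back to the old (threads + 3) * 20 heuristic" */
	static unsigned int nlm_max_connections;
	module_param(nlm_max_connections, uint, 0644);
	MODULE_PARM_DESC(nlm_max_connections,
			 "Maximum number of client connections lockd will accept");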
Thanks,
--
Jeff Layton <[email protected]>
On Fri, Oct 17, 2008 at 10:55:38AM -0400, William A. (Andy) Adamson wrote:
> There is a related resource issue for the NFSv4.1 server DRC where the
> server guarantees a per session DRC cache size. The server needs to
> determine how much memory to assign to each session, and will
> therefore need to limit the number of connections based not only on
> the TCP buffer size, but also on memory. I'm writing the DRC code, and
> I'm looking for suggestions on how to manage the per-session memory
> resource. Any thoughts welcome.
As a starting point you could look at nfsd_create_serv() to see how it
decides max_blksize.
Caveat: Andrew Morton tells me we should be using nr_free_buffer_pages()
here in place of totalram.
Maybe we should define one function
nfsd_how_big_should_i_be()
that by default just returns some fraction of nr_free_buffer_pages();
and then use that return value to size various things like this.
I don't know....
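Something like this sketch, maybe. nr_free_buffer_pages() is real, but
the helper name just comes from the suggestion above and the fraction
is a placeholder:

	#include <linux/swap.h>

	/* placeholder sizing helper: base service limits on reclaimable
	 * memory rather than on thread counts */
	static unsigned long nfsd_how_big_should_i_be(void)
	{
		/* fraction chosen arbitrarily for illustration */
		return nr_free_buffer_pages() / 4;
	}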
--b.
>
> Thanks
>
> -->Andy
>
> On Thu, Oct 16, 2008 at 8:14 PM, Neil Brown <[email protected]> wrote:
> > On Thursday October 16, [email protected] wrote:
> >>
> >> Thanks for the info Neil, that helps clarify this...
> >>
> >> Using RLIMIT_NOFILE is an interesting idea. From a cursory look at the
> >> code, the default for RLIMIT_NOFILE looks like it's generally 1024.
> >> We'll have to figure that this limit will effectively act as a limit on
> >> the number of concurrent lockd clients. It's not too hard to imagine a
> >> server with more clients than this (think of a large compute cluster).
> >
> > If all those clients used UDP, this would not be a problem. While I
> > see the value of TCP for NFS, it doesn't seem as convincing for NLM.
> >
> > But I don't expect we have the luxury of insisting that clients use
> > UDP for locking :-(
> >
> >>
> >> The problem, as you mention, is that that limit won't be easily tunable.
> >> I think we need some mechanism for an admin to tune this limit. It
> >> doesn't have to be tunable on the fly, but shouldn't require a kernel
> >> rebuild. We could eliminate this check for single-threaded services
> >> entirely, but I suppose that leaves the door open for DoS attacks
> >> against those services.
> >>
> >> Maybe the best thing is to go with Bruce's idea and add a sv_maxconn
> >> field to the svc_serv struct. We could make that default to the max of
> >> RLIMIT_NOFILE rlim_cur value or the currently calculated value.
> >> Eventually we could add a mechanism to allow someone to tune that
> >> value. A module parameter would probably be fine for lockd. We might
> >> even want to set the limit lower for things like the nfsv4 callback
> >> thread.
> >>
> >> Thoughts?
> >
> > A per-service setting that defaults to something reasonable like your
> > suggestions and can be overridden by a module parameter sounds like a
> > good idea.
> > If you change the module parameter via
> > /sys/module/lockd/parameters/max_connections
> > then it wouldn't take effect until the service were stopped and restarted,
> > but I expect that is acceptable (and could probably be 'fixed' if really
> > needed).
> >
> > NeilBrown
On Fri, Oct 17, 2008 at 02:27:53PM -0400, Jeff Layton wrote:
> On Fri, 17 Oct 2008 14:20:51 -0400
> "J. Bruce Fields" <[email protected]> wrote:
>
> > > A per-service setting that defaults to something reasonable like your
> > > suggestions and can be overridden by a module parameter sounds like a
> > > good idea.
> > > If you change the module parameter via
> > > /sys/module/lockd/parameters/max_connections
> > > then it wouldn't take effect until the service were stopped and restarted,
> > > but I expect that is acceptable (and could probably be 'fixed' if really
> > > needed).
> >
> > Sounds fine to me.
> >
> > And I'd probably fix the defaults now and then hold off adding
> > max_connections till it'd been seen to be needed.
> >
> > --b.
>
> I just posted a small patchset that implements this. I ended up adding
> a module parameter for it since it was easy to do. I've only done basic
> smoke testing with it, however. If it looks OK, I'll see if I can get
> the guy who originally reported this to test it out.
Thanks! I'll take a look.
--b.