Return-Path: linux-kernel-owner@vger.kernel.org
MIME-Version: 1.0
In-Reply-To: <20141203152147.2ca6c6fd@tlielax.poochiereds.net>
References: <1417544663-13299-1-git-send-email-jlayton@primarydata.com>
 <20141203121118.21a32fe1@notabene.brown>
 <20141202202946.1e0f399b@tlielax.poochiereds.net>
 <20141203155649.GB5013@htj.dyndns.org>
 <20141203110405.5ecc85df@tlielax.poochiereds.net>
 <20141203140202.7865bedb@tlielax.poochiereds.net>
 <20141203142034.5c14529d@tlielax.poochiereds.net>
 <20141203152147.2ca6c6fd@tlielax.poochiereds.net>
Date: Wed, 3 Dec 2014 15:44:31 -0500
Message-ID:
Subject: Re: [RFC PATCH 00/14] nfsd/sunrpc: add support for a workqueue-based nfsd
From: Trond Myklebust
To: Jeff Layton
Cc: Tejun Heo, NeilBrown, Linux NFS Mailing List, Linux Kernel mailing list, Al Viro
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:

On Wed, Dec 3, 2014 at 3:21 PM, Jeff Layton wrote:
> On Wed, 3 Dec 2014 14:59:43 -0500
> Trond Myklebust wrote:
>
>> On Wed, Dec 3, 2014 at 2:20 PM, Jeff Layton wrote:
>> > On Wed, 3 Dec 2014 14:08:01 -0500
>> > Trond Myklebust wrote:
>> >> Which workqueue are you using? Since the receive code is non-blocking,
>> >> I'd expect you might be able to use rpciod, for the initial socket
>> >> reads, but you wouldn't want to use that for the actual knfsd
>> >> processing.
>> >>
>> >
>> > I'm using the same (nfsd) workqueue for everything. The workqueue
>> > isn't really the bottleneck though, it's the work_struct.
>> >
>> > Basically, the problem is that the work_struct in the svc_xprt was
>> > remaining busy for far too long. So, even though the XPT_BUSY bit had
>> > cleared, the work wouldn't get picked up again until the previous
>> > workqueue job had returned.
>> >
>> > With the change I made today, I just added a new work_struct to
>> > svc_rqst and queue that to the same workqueue to do svc_process as
>> > soon as the receive is done. That means though that each RPC ends up
>> > waiting in the queue twice (once to do the receive and once to process
>> > the RPC), and I think that's probably the reason for the performance
>> > delta.
>>
>> Why would the queuing latency still be significant now?
>>
>
> That, I'm not clear on yet and that may not be why this is slower. But,
> I was seeing slightly faster performance with reads before I made
> today's changes. If changing how these jobs get queued doesn't help the
> performance, then I'll have to look elsewhere...

Do you have a good method for measuring that latency?

If the queuing latency turns out to depend on the execution latency for
each job, then perhaps running the message receives on a separate low
latency queue could help (hence the suggestion to use rpciod).

--
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com
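
[Editor's illustration, not part of the original thread: a minimal sketch of
the two-stage queuing Jeff describes, plus one crude way to measure per-job
queuing latency with ktime_get(), and a separate high-priority queue for the
receives along the lines Trond suggests. All struct, field, and workqueue
names below are hypothetical; they are not taken from the patch series or
from the sunrpc code.]

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>
#include <linux/ktime.h>

/* Hypothetical stand-in for the per-request state; the real svc_rqst
 * lives in include/linux/sunrpc/svc.h and is not laid out like this. */
struct rqst_sketch {
	struct work_struct	rq_work;	/* per-rqst work item */
	ktime_t			rq_enqueued;	/* when it was queued */
};

static struct workqueue_struct *nfsd_wq;	/* stand-in for the nfsd workqueue */
static struct workqueue_struct *nfsd_rcv_wq;	/* separate low-latency receive queue */

/* Stage 2: process the RPC from its own work item, and report how long
 * it sat in the queue before a worker picked it up. */
static void rqst_process_work(struct work_struct *work)
{
	struct rqst_sketch *rqstp =
		container_of(work, struct rqst_sketch, rq_work);
	s64 qlat_us = ktime_us_delta(ktime_get(), rqstp->rq_enqueued);

	pr_debug("queuing latency: %lld us\n", qlat_us);
	/* ... call into svc_process() here ... */
}

/* Stage 1: once the non-blocking receive has pulled in a full request,
 * requeue it so the processing runs as a second workqueue job.  This is
 * why each RPC ends up waiting in the queue twice. */
static void rqst_requeue_for_processing(struct rqst_sketch *rqstp)
{
	INIT_WORK(&rqstp->rq_work, rqst_process_work);
	rqstp->rq_enqueued = ktime_get();
	queue_work(nfsd_wq, &rqstp->rq_work);
}

/* The alternative being suggested: keep the cheap socket receives on a
 * high-priority, unbound queue of their own (or on rpciod) so they are
 * not stuck behind long-running RPC processing jobs. */
static int __init wq_sketch_init(void)
{
	nfsd_wq = alloc_workqueue("nfsd_sketch", WQ_UNBOUND, 0);
	nfsd_rcv_wq = alloc_workqueue("nfsd_rcv_sketch",
				      WQ_UNBOUND | WQ_HIGHPRI, 0);
	return (nfsd_wq && nfsd_rcv_wq) ? 0 : -ENOMEM;
}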