2015-06-08 05:19:29

by Shirley Ma

Subject: [RFC PATCH V3 0/7] nfsd/sunrpc: prepare nfsd to add workqueue support

This patchset was originally written by Jeff Layton to add support for a workqueue-based nfsd. I am helping with stability testing and performance analysis. Some workloads benefit from the global threading mode, while others benefit from the workqueue mode. I am still investigating how to make the workqueue mode beat the global threading mode. I have split the patchset into two parts: one prepares nfsd for workqueue mode, and one adds the workqueue mode itself. Test results show that the first part does not cause much performance change; the results are within the run-to-run variation.

sunrpc: add a new svc_serv_ops struct and move sv_shutdown into it
sunrpc: move sv_function into sv_ops
sunrpc: move sv_module parm into sv_ops
sunrpc: turn enqueueing a svc_xprt into a svc_serv operation
sunrpc: abstract out svc_set_num_threads to sv_ops
sunrpc: move pool_mode definitions into svc.h
sunrpc: factor svc_rqst allocation and freeing from sv_nrthreads refcounting

fs/lockd/svc.c | 7 ++-
fs/nfs/callback.c | 6 ++-
fs/nfsd/nfssvc.c | 17 ++++--
include/linux/sunrpc/svc.h | 68 +++++++++++++++++-------
include/linux/sunrpc/svc_xprt.h | 1 +
net/sunrpc/svc.c | 113 +++++++++++++++++++---------------------
net/sunrpc/svc_xprt.c | 10 ++--
7 files changed, 135 insertions(+), 87 deletions(-)

Shirley


2015-06-08 13:15:29

by Trond Myklebust

Subject: Re: [RFC PATCH V3 0/7] nfsd/sunrpc: prepare nfsd to add workqueue support

On Mon, Jun 8, 2015 at 1:19 AM, Shirley Ma <[email protected]> wrote:
> This patchset was originally written by Jeff Layton to add support for a workqueue-based nfsd. I am helping with stability testing and performance analysis. Some workloads benefit from the global threading mode, while others benefit from the workqueue mode. I am still investigating how to make the workqueue mode beat the global threading mode. I have split the patchset into two parts: one prepares nfsd for workqueue mode, and one adds the workqueue mode itself. Test results show that the first part does not cause much performance change; the results are within the run-to-run variation.

As stated in the original emails, Primary Data's internal testing of
these patches showed that there is a significant difference. We had 48
virtual clients running on 7 ESX hypervisors with 10GigE NICs against
a hardware NFSv3 server with a 40GigE NIC. The clients were doing 4k
aio/dio reads+writes in a 70/30 mix.
At the time, we saw a roughly 50% decrease, with measured standard
deviations on the order of a few percent, when comparing the
performance as measured in IOPS between the existing code and the
workqueue code.

Testing showed that workqueue performance improved relative to the
threaded code when we upped the block size to 256k (with
correspondingly lower IOPS counts). That would indicate that the
workqueues are failing to scale correctly in the high-IOPS (i.e. high
thread count) case.

Trond

2015-06-08 13:34:41

by Jeff Layton

Subject: Re: [RFC PATCH V3 0/7] nfsd/sunrpc: prepare nfsd to add workqueue support

On Mon, 8 Jun 2015 09:15:28 -0400
Trond Myklebust <[email protected]> wrote:

> On Mon, Jun 8, 2015 at 1:19 AM, Shirley Ma <[email protected]> wrote:
> > This patchset was originally written by Jeff Layton to add support for a workqueue-based nfsd. I am helping with stability testing and performance analysis. Some workloads benefit from the global threading mode, while others benefit from the workqueue mode. I am still investigating how to make the workqueue mode beat the global threading mode. I have split the patchset into two parts: one prepares nfsd for workqueue mode, and one adds the workqueue mode itself. Test results show that the first part does not cause much performance change; the results are within the run-to-run variation.
>
> As stated in the original emails, Primary Data's internal testing of
> these patches showed that there is a significant difference. We had 48
> virtual clients running on 7 ESX hypervisors with 10GigE NICs against
> a hardware NFSv3 server with a 40GigE NIC. The clients were doing 4k
> aio/dio reads+writes in a 70/30 mix.
> At the time, we saw a roughly 50% decrease with measured standard
> deviations being of the order a few % when comparing the performance
> as measured in IOPs between the existing code and the workqueue code.
>
> Testing showed the workqueue performance was relatively improved when
> we upped the block size to 256k (with lower IOPs counts). That would
> indicate that the workqueues are failing to scale correctly for the
> high IOPs (i.e. high thread count) case.
>
> Trond

Yes.

Also, I did some work with tracepoints to see if I could figure out
where the extra latency was coming from. At the time, the data I
collected showed that the workqueue server was processing incoming
frames _faster_ than the thread based one.

So, my theory (untested) is that somehow the workqueue server is
causing extra latency in processing interrupts for the network card or
disk. The test I was using at the time was to just dd a bunch of
sequential reads and writes to a file. I figured that was as simple a
testcase as you could get...
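
A minimal sketch of that dd-style test might look like the following (the file path, sizes, and use of a temp-file default are illustrative placeholders; the actual runs were against a file on the NFS mount under test):

```shell
#!/bin/sh
# Sketch of a simple sequential write/read test with dd.
# In the real test, FILE would live on the NFS mount under test;
# the defaults here are placeholders so the sketch runs anywhere.
FILE=${1:-$(mktemp)}
COUNT=${2:-64}		# number of 1 MiB blocks to write

# Sequential write, flushing data out before dd exits.
dd if=/dev/zero of="$FILE" bs=1M count="$COUNT" conv=fsync

# Sequential read of the same file.
dd if="$FILE" of=/dev/null bs=1M
```

Timing the two dd invocations (or watching server-side statistics during the run) then gives a crude throughput comparison between the two server modes.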

In any case though, the patches that Shirley is proposing here don't
actually add any of the workqueue based server code. It's all just
preparatory patches to add an ops structure to the svc_serv and move a
bunch of disparate function pointers into it. I think that's a reasonable
set to take, even if we don't take the workqueue code itself, as it
cleans up the rpc server API considerably (IMNSHO).

--
Jeff Layton <[email protected]>

2015-06-08 14:57:31

by J. Bruce Fields

Subject: Re: [RFC PATCH V3 0/7] nfsd/sunrpc: prepare nfsd to add workqueue support

On Mon, Jun 08, 2015 at 09:34:37AM -0400, Jeff Layton wrote:
> On Mon, 8 Jun 2015 09:15:28 -0400
> Trond Myklebust <[email protected]> wrote:
>
> > On Mon, Jun 8, 2015 at 1:19 AM, Shirley Ma <[email protected]> wrote:
> > > This patchset was originally written by Jeff Layton to add support for a workqueue-based nfsd. I am helping with stability testing and performance analysis. Some workloads benefit from the global threading mode, while others benefit from the workqueue mode. I am still investigating how to make the workqueue mode beat the global threading mode. I have split the patchset into two parts: one prepares nfsd for workqueue mode, and one adds the workqueue mode itself. Test results show that the first part does not cause much performance change; the results are within the run-to-run variation.
> >
> > As stated in the original emails, Primary Data's internal testing of
> > these patches showed that there is a significant difference. We had 48
> > virtual clients running on 7 ESX hypervisors with 10GigE NICs against
> > a hardware NFSv3 server with a 40GigE NIC. The clients were doing 4k
> > aio/dio reads+writes in a 70/30 mix.
> > At the time, we saw a roughly 50% decrease with measured standard
> > deviations being of the order a few % when comparing the performance
> > as measured in IOPs between the existing code and the workqueue code.
> >
> > Testing showed the workqueue performance was relatively improved when
> > we upped the block size to 256k (with lower IOPs counts). That would
> > indicate that the workqueues are failing to scale correctly for the
> > high IOPs (i.e. high thread count) case.
> >
> > Trond
>
> Yes.
>
> Also, I did some work with tracepoints to see if I could figure out
> where the extra latency was coming from. At the time, the data I
> collected showed that the workqueue server was processing incoming
> frames _faster_ than the thread based one.
>
> So, my theory (untested) is that somehow the workqueue server is
> causing extra latency in processing interrupts for the network card or
> disk. The test I was using at the time was to just dd a bunch of
> sequential reads and writes to a file. I figured that was as simple a
> testcase as you could get...
>
> In any case though, the patches that Shirley is proposing here don't
> actually add any of the workqueue based server code. It's all just
> preparatory patches to add an ops structure to the svc_serv and move a
> bunch of disparate function pointers into it. I think that's a reasonable
> set to take, even if we don't take the workqueue code itself as it
> cleans up the rpc server API considerably (IMNSHO).

OK. And just out of paranoia--you didn't notice any significant
difference with the new patches running in the old thread-based mode,
right?

--b.

2015-06-08 15:51:19

by Jeff Layton

Subject: Re: [RFC PATCH V3 0/7] nfsd/sunrpc: prepare nfsd to add workqueue support

On Mon, 8 Jun 2015 10:57:30 -0400
"J. Bruce Fields" <[email protected]> wrote:

> On Mon, Jun 08, 2015 at 09:34:37AM -0400, Jeff Layton wrote:
> > On Mon, 8 Jun 2015 09:15:28 -0400
> > Trond Myklebust <[email protected]> wrote:
> >
> > > On Mon, Jun 8, 2015 at 1:19 AM, Shirley Ma <[email protected]> wrote:
> > > > This patchset was originally written by Jeff Layton to add support for a workqueue-based nfsd. I am helping with stability testing and performance analysis. Some workloads benefit from the global threading mode, while others benefit from the workqueue mode. I am still investigating how to make the workqueue mode beat the global threading mode. I have split the patchset into two parts: one prepares nfsd for workqueue mode, and one adds the workqueue mode itself. Test results show that the first part does not cause much performance change; the results are within the run-to-run variation.
> > >
> > > As stated in the original emails, Primary Data's internal testing of
> > > these patches showed that there is a significant difference. We had 48
> > > virtual clients running on 7 ESX hypervisors with 10GigE NICs against
> > > a hardware NFSv3 server with a 40GigE NIC. The clients were doing 4k
> > > aio/dio reads+writes in a 70/30 mix.
> > > At the time, we saw a roughly 50% decrease with measured standard
> > > deviations being of the order a few % when comparing the performance
> > > as measured in IOPs between the existing code and the workqueue code.
> > >
> > > Testing showed the workqueue performance was relatively improved when
> > > we upped the block size to 256k (with lower IOPs counts). That would
> > > indicate that the workqueues are failing to scale correctly for the
> > > high IOPs (i.e. high thread count) case.
> > >
> > > Trond
> >
> > Yes.
> >
> > Also, I did some work with tracepoints to see if I could figure out
> > where the extra latency was coming from. At the time, the data I
> > collected showed that the workqueue server was processing incoming
> > frames _faster_ than the thread based one.
> >
> > So, my theory (untested) is that somehow the workqueue server is
> > causing extra latency in processing interrupts for the network card or
> > disk. The test I was using at the time was to just dd a bunch of
> > sequential reads and writes to a file. I figured that was as simple a
> > testcase as you could get...
> >
> > In any case though, the patches that Shirley is proposing here don't
> > actually add any of the workqueue based server code. It's all just
> > preparatory patches to add an ops structure to the svc_serv and move a
> > bunch of disparate function pointers into it. I think that's a reasonable
> > set to take, even if we don't take the workqueue code itself as it
> > cleans up the rpc server API considerably (IMNSHO).
>
> OK. And just out of paranoia--you didn't notice any significant
> difference with the new patches running in the old thread-based mode,
> right?
>
> --b.

No, we didn't notice any substantive difference. I'd be quite surprised
if there were. There might be (e.g.) slightly more pointer-chasing with
this set (since you have to look in the ops structure), but it doesn't
make much in the way of substantive changes to how the code works.

--
Jeff Layton <[email protected]>

2015-06-08 16:10:04

by Shirley Ma

Subject: Re: [RFC PATCH V3 0/7] nfsd/sunrpc: prepare nfsd to add workqueue support

On 06/08/2015 08:51 AM, Jeff Layton wrote:
> On Mon, 8 Jun 2015 10:57:30 -0400
> "J. Bruce Fields" <[email protected]> wrote:
>
>> On Mon, Jun 08, 2015 at 09:34:37AM -0400, Jeff Layton wrote:
>>> On Mon, 8 Jun 2015 09:15:28 -0400
>>> Trond Myklebust <[email protected]> wrote:
>>>
>>>> On Mon, Jun 8, 2015 at 1:19 AM, Shirley Ma <[email protected]> wrote:
>>>>> This patchset was originally written by Jeff Layton to add support for a workqueue-based nfsd. I am helping with stability testing and performance analysis. Some workloads benefit from the global threading mode, while others benefit from the workqueue mode. I am still investigating how to make the workqueue mode beat the global threading mode. I have split the patchset into two parts: one prepares nfsd for workqueue mode, and one adds the workqueue mode itself. Test results show that the first part does not cause much performance change; the results are within the run-to-run variation.
>>>>
>>>> As stated in the original emails, Primary Data's internal testing of
>>>> these patches showed that there is a significant difference. We had 48
>>>> virtual clients running on 7 ESX hypervisors with 10GigE NICs against
>>>> a hardware NFSv3 server with a 40GigE NIC. The clients were doing 4k
>>>> aio/dio reads+writes in a 70/30 mix.
>>>> At the time, we saw a roughly 50% decrease with measured standard
>>>> deviations being of the order a few % when comparing the performance
>>>> as measured in IOPs between the existing code and the workqueue code.
>>>>
>>>> Testing showed the workqueue performance was relatively improved when
>>>> we upped the block size to 256k (with lower IOPs counts). That would
>>>> indicate that the workqueues are failing to scale correctly for the
>>>> high IOPs (i.e. high thread count) case.
I am still investigating the workqueue patchset's performance gains and costs. I did see workqueues make a big difference in some random write/read workloads. If we can support both modes, customers can switch between them based on their workload preferences.

>>>> Trond
>>>
>>> Yes.
>>>
>>> Also, I did some work with tracepoints to see if I could figure out
>>> where the extra latency was coming from. At the time, the data I
>>> collected showed that the workqueue server was processing incoming
>>> frames _faster_ than the thread based one.
>>>
>>> So, my theory (untested) is that somehow the workqueue server is
>>> causing extra latency in processing interrupts for the network card or
>>> disk. The test I was using at the time was to just dd a bunch of
>>> sequential reads and writes to a file. I figured that was as simple a
>>> testcase as you could get...
>>>
>>> In any case though, the patches that Shirley is proposing here don't
>>> actually add any of the workqueue based server code. It's all just
>>> preparatory patches to add an ops structure to the svc_serv and move a
>>> bunch of disparate function pointers into it. I think that's a reasonable
>>> set to take, even if we don't take the workqueue code itself as it
>>> cleans up the rpc server API considerably (IMNSHO).
>>
>> OK. And just out of paranoia--you didn't notice any significant
>> difference with the new patches running in the old thread-based mode,
>> right?
>>
>> --b.
> No, we didn't notice any substantive difference. I'd be quite surprised
> if there were. There might be (e.g.) slightly more pointer-chasing with
> this set (since you have to look in the ops structure), but it doesn't
> make much in the way of substantive changes to how the code works.

Right, there was not much difference in my test results either.

Shirley