2021-03-23 05:47:48

by Nagendra Tomar

Subject: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

From: Nagendra S Tomar <[email protected]>

If a clustered NFS server is behind an L4 loadbalancer, the default
nconnect roundrobin policy may cause RPC requests to a file to be
sent to different cluster nodes, because the source port will be
different for each of the nconnect connections.
While this should functionally work (since the cluster will usually
have a consistent view irrespective of which node is serving the
request), it may not be desirable from a performance point of view.
As an example, we have an NFSv3 frontend to our Object store, where
every NFSv3 file is an object. If writes to the same file are sent
roundrobin to different cluster nodes, the writes become very
inefficient because of the consistency requirement for object
updates being driven from different nodes.
Similarly, each node may maintain some kind of cache to serve file
data/metadata requests faster, and in that case too it helps to have
an xprt affinity for a file/dir.
In general we have seen such a scheme scale very well.

This patch series introduces a new rpc_xprt_iter_ops that uses an
additional u32 (filehandle hash) to affine RPCs for the same file to
one xprt.
It adds a new mount option "ncpolicy=roundrobin|hash" which can be
used to select the nconnect multipath policy for a given mount and
pass the selected policy to the RPC client.
It adds a new rpc_procinfo member p_fhhash, which can be supplied
by the specific RPC programs to return a u32 hash of the file/dir the
RPC is targeting, and lastly it provides p_fhhash implementations
for the various NFS v3/v4/v4.1/v4.2 RPCs to generate the hash
correctly.
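
As a rough illustration of the selection idea (this is a standalone
userspace sketch, not the actual kernel code; the FNV-1a hash and the
names below are mine), hashing the opaque filehandle bytes and reducing
the hash modulo nconnect keeps every RPC for a given file on the same
transport:

#include <stdint.h>
#include <stdio.h>

/* Illustrative FNV-1a hash over the opaque filehandle bytes. */
static uint32_t fh_hash(const unsigned char *fh, unsigned int len)
{
	uint32_t h = 2166136261u;

	while (len--) {
		h ^= *fh++;
		h *= 16777619u;
	}
	return h;
}

/* Map the filehandle hash onto one of the 'nconnect' transports. */
static unsigned int pick_xprt(const unsigned char *fh, unsigned int len,
			      unsigned int nconnect)
{
	return fh_hash(fh, len) % nconnect;
}

int main(void)
{
	const unsigned char fh[] = "example-opaque-filehandle";

	/* Every RPC for this filehandle maps to the same index. */
	printf("xprt index = %u\n", pick_xprt(fh, sizeof(fh) - 1, 4));
	return 0;
}

In the series itself the per-RPC hash would come from the new p_fhhash
callback and be consumed by the new rpc_xprt_iter_ops; the sketch only
shows the hash-to-transport mapping.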

Thoughts?

Thanks,
Tomar

Nagendra S Tomar (5):
SUNRPC: Add a new multipath xprt policy for xprt selection based
on target filehandle hash
SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the nconnect
policy and pass it down from mount option to rpc layer
SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN -> RPC_TASK_USE_MAIN_XPRT
NFSv3: Add hash computation methods for NFSv3 RPCs
NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs

fs/nfs/client.c | 3 +
fs/nfs/fs_context.c | 26 ++
fs/nfs/internal.h | 2 +
fs/nfs/nfs3client.c | 4 +-
fs/nfs/nfs3xdr.c | 154 +++++++++++
fs/nfs/nfs42xdr.c | 112 ++++++++
fs/nfs/nfs4client.c | 14 +-
fs/nfs/nfs4proc.c | 18 +-
fs/nfs/nfs4xdr.c | 516 ++++++++++++++++++++++++++++++-----
fs/nfs/super.c | 7 +-
include/linux/nfs_fs_sb.h | 1 +
include/linux/sunrpc/clnt.h | 15 +
include/linux/sunrpc/sched.h | 2 +-
include/linux/sunrpc/xprtmultipath.h | 9 +-
include/trace/events/sunrpc.h | 4 +-
net/sunrpc/clnt.c | 38 ++-
net/sunrpc/xprtmultipath.c | 91 +++++-
17 files changed, 913 insertions(+), 103 deletions(-)


2021-03-23 13:17:35

by Tom Talpey

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

All the patches in this series have the same subject/title. They
really should have more context so they stand alone and can be
reviewed separately.

High level question below.

On 3/23/2021 1:46 AM, Nagendra Tomar wrote:
> From: Nagendra S Tomar <[email protected]>
>
> If a clustered NFS server is behind an L4 loadbalancer the default
> nconnect roundrobin policy may cause RPC requests to a file to be
> sent to different cluster nodes. This is because the source port
> would be different for all the nconnect connections.
> While this should functionally work (since the cluster will usually
> have a consistent view irrespective of which node is serving the
> request), it may not be desirable from performance pov. As an
> example we have an NFSv3 frontend to our Object store, where every
> NFSv3 file is an object. Now if writes to the same file are sent
> roundrobin to different cluster nodes, the writes become very
> inefficient due to the consistency requirement for object update
> being done from different nodes.
> Similarly each node may maintain some kind of cache to serve the file
> data/metadata requests faster and even in that case it helps to have
> a xprt affinity for a file/dir.
> In general we have seen such scheme to scale very well.
>
> This patch introduces a new rpc_xprt_iter_ops for using an additional
> u32 (filehandle hash) to affine RPCs to the same file to one xprt.
> It adds a new mount option "ncpolicy=roundrobin|hash" which can be
> used to select the nconnect multipath policy for a given mount and
> pass the selected policy to the RPC client.

What's the reason for exposing these as a mount option, with multiple
values? What makes one value better than the other, and why is there
not a default?

Tom.

> It adds a new rpc_procinfo member p_fhhash, which can be supplied
> by the specific RPC programs to return a u32 hash of the file/dir the
> RPC is targetting, and lastly it provides p_fhhash implementation
> for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
>
> Thoughts?
>
> Thanks,
> Tomar
>
> Nagendra S Tomar (5):
> SUNRPC: Add a new multipath xprt policy for xprt selection based
> on target filehandle hash
> SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the nconnect
> policy and pass it down from mount option to rpc layer
> SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN -> RPC_TASK_USE_MAIN_XPRT
> NFSv3: Add hash computation methods for NFSv3 RPCs
> NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
>

2021-03-23 13:55:48

by Chuck Lever III

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection



> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar <[email protected]> wrote:
>
> From: Nagendra S Tomar <[email protected]>
>
> If a clustered NFS server is behind an L4 loadbalancer the default
> nconnect roundrobin policy may cause RPC requests to a file to be
> sent to different cluster nodes. This is because the source port
> would be different for all the nconnect connections.
> While this should functionally work (since the cluster will usually
> have a consistent view irrespective of which node is serving the
> request), it may not be desirable from performance pov. As an
> example we have an NFSv3 frontend to our Object store, where every
> NFSv3 file is an object. Now if writes to the same file are sent
> roundrobin to different cluster nodes, the writes become very
> inefficient due to the consistency requirement for object update
> being done from different nodes.
> Similarly each node may maintain some kind of cache to serve the file
> data/metadata requests faster and even in that case it helps to have
> a xprt affinity for a file/dir.
> In general we have seen such scheme to scale very well.
>
> This patch introduces a new rpc_xprt_iter_ops for using an additional
> u32 (filehandle hash) to affine RPCs to the same file to one xprt.
> It adds a new mount option "ncpolicy=roundrobin|hash" which can be
> used to select the nconnect multipath policy for a given mount and
> pass the selected policy to the RPC client.

This sets off my "not another administrative knob that has
to be tested and maintained, and can be abused" allergy.

Also, my "because connections are shared by mounts of the same
server, all those mounts will all adopt this behavior" rhinitis.

And my "why add a new feature to a legacy NFS version" hives.


I agree that your scenario can and should be addressed somehow.
I'd really rather see this done with pNFS.

Since you are proposing patches against the upstream NFS client,
I presume all your clients /can/ support NFSv4.1+. It's the NFS
servers that are stuck on NFSv3, correct?

The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
servers. In fact it was designed for exactly this kind of mix of
NFS versions.

No client code change will be necessary -- there are a lot more
clients than servers. The MDS can be made to work smartly in
concert with the load balancer, over time; or it can adopt other
clever strategies.

IMHO pNFS is the better long-term strategy here.


> It adds a new rpc_procinfo member p_fhhash, which can be supplied
> by the specific RPC programs to return a u32 hash of the file/dir the
> RPC is targetting, and lastly it provides p_fhhash implementation
> for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
>
> Thoughts?
>
> Thanks,
> Tomar
>
> Nagendra S Tomar (5):
> SUNRPC: Add a new multipath xprt policy for xprt selection based
> on target filehandle hash
> SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the nconnect
> policy and pass it down from mount option to rpc layer
> SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN -> RPC_TASK_USE_MAIN_XPRT
> NFSv3: Add hash computation methods for NFSv3 RPCs
> NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
>

--
Chuck Lever



2021-03-23 14:43:37

by Nagendra Tomar

Subject: RE: [EXTERNAL] Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

> All the patches in this series have the same subject/title. They
> really should have more context so they stand alone and can be
> reviewed separately.

Thanks for pointing that out; it was clearly unintentional. Each patch does
briefly describe what it does, though. Let me know if I should re-send.

>
> High level question below.
>
> On 3/23/2021 1:46 AM, Nagendra Tomar wrote:
> > From: Nagendra S Tomar <[email protected]>
> >
> > If a clustered NFS server is behind an L4 loadbalancer the default
> > nconnect roundrobin policy may cause RPC requests to a file to be
> > sent to different cluster nodes. This is because the source port
> > would be different for all the nconnect connections.
> > While this should functionally work (since the cluster will usually
> > have a consistent view irrespective of which node is serving the
> > request), it may not be desirable from performance pov. As an
> > example we have an NFSv3 frontend to our Object store, where every
> > NFSv3 file is an object. Now if writes to the same file are sent
> > roundrobin to different cluster nodes, the writes become very
> > inefficient due to the consistency requirement for object update
> > being done from different nodes.
> > Similarly each node may maintain some kind of cache to serve the file
> > data/metadata requests faster and even in that case it helps to have
> > a xprt affinity for a file/dir.
> > In general we have seen such scheme to scale very well.
> >
> > This patch introduces a new rpc_xprt_iter_ops for using an additional
> > u32 (filehandle hash) to affine RPCs to the same file to one xprt.
> > It adds a new mount option "ncpolicy=roundrobin|hash" which can be
> > used to select the nconnect multipath policy for a given mount and
> > pass the selected policy to the RPC client.
>
> What's the reason for exposing these as a mount option, with multiple
> values? What makes one value better than the other, and why is there
> not a default?

The idea is to select how RPC requests to the same file pick the outgoing
connection. ncpolicy=roundrobin selects the round-robin connection scheme,
where each RPC is sent over the next connection. This is the existing
behavior and hence also the default.
With the ncpolicy=hash mount option, each RPC request to the same file/dir
uses the same connection. This connection affinity is the main motivation
for this patch.
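
For example (assuming the option syntax proposed in this series; the
server name and paths below are placeholders), a mount enabling the
affinity policy would look something like:

  mount -t nfs -o vers=3,nconnect=8,ncpolicy=hash server:/export /mnt

whereas omitting ncpolicy, or passing ncpolicy=roundrobin, keeps
today's round-robin distribution.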

>
> > It adds a new rpc_procinfo member p_fhhash, which can be supplied
> > by the specific RPC programs to return a u32 hash of the file/dir the
> > RPC is targetting, and lastly it provides p_fhhash implementation
> > for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
> >
> > Thoughts?
> >
> > Thanks,
> > Tomar
> >
> > Nagendra S Tomar (5):
> > SUNRPC: Add a new multipath xprt policy for xprt selection based
> > on target filehandle hash
> > SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the
> nconnect
> > policy and pass it down from mount option to rpc layer
> > SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN ->
> RPC_TASK_USE_MAIN_XPRT
> > NFSv3: Add hash computation methods for NFSv3 RPCs
> > NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
> >

2021-03-23 15:58:44

by Nagendra Tomar

Subject: RE: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

> > On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
> <[email protected]> wrote:
> >
> > From: Nagendra S Tomar <[email protected]>
> >
> > If a clustered NFS server is behind an L4 loadbalancer the default
> > nconnect roundrobin policy may cause RPC requests to a file to be
> > sent to different cluster nodes. This is because the source port
> > would be different for all the nconnect connections.
> > While this should functionally work (since the cluster will usually
> > have a consistent view irrespective of which node is serving the
> > request), it may not be desirable from performance pov. As an
> > example we have an NFSv3 frontend to our Object store, where every
> > NFSv3 file is an object. Now if writes to the same file are sent
> > roundrobin to different cluster nodes, the writes become very
> > inefficient due to the consistency requirement for object update
> > being done from different nodes.
> > Similarly each node may maintain some kind of cache to serve the file
> > data/metadata requests faster and even in that case it helps to have
> > a xprt affinity for a file/dir.
> > In general we have seen such scheme to scale very well.
> >
> > This patch introduces a new rpc_xprt_iter_ops for using an additional
> > u32 (filehandle hash) to affine RPCs to the same file to one xprt.
> > It adds a new mount option "ncpolicy=roundrobin|hash" which can be
> > used to select the nconnect multipath policy for a given mount and
> > pass the selected policy to the RPC client.
>
> This sets off my "not another administrative knob that has
> to be tested and maintained, and can be abused" allergy.
>
> Also, my "because connections are shared by mounts of the same
> server, all those mounts will all adopt this behavior" rhinitis.

Yes, it's fair to call this out, but ncpolicy behaves like the nconnect
parameter in this regard.

> And my "why add a new feature to a legacy NFS version" hives.
>
>
> I agree that your scenario can and should be addressed somehow.
> I'd really rather see this done with pNFS.
>
> Since you are proposing patches against the upstream NFS client,
> I presume all your clients /can/ support NFSv4.1+. It's the NFS
> servers that are stuck on NFSv3, correct?

Yes.

>
> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
> servers. In fact it was designed for exactly this kind of mix of
> NFS versions.
>
> No client code change will be necessary -- there are a lot more
> clients than servers. The MDS can be made to work smartly in
> concert with the load balancer, over time; or it can adopt other
> clever strategies.
>
> IMHO pNFS is the better long-term strategy here.

The fundamental difference here is that the clustered NFSv3 server
is available over a single virtual IP, so IIUC even if we were to use
NFSv4.1 with the flexfiles layout, all it can hand over to the client is
that single (load-balanced) virtual IP, and when the clients then connect
to the NFSv3 DS we still have the same issue. Am I understanding you right?
Can you please elaborate on what you mean by "MDS can be made to work
smartly in concert with the load balancer"?

>
> > It adds a new rpc_procinfo member p_fhhash, which can be supplied
> > by the specific RPC programs to return a u32 hash of the file/dir the
> > RPC is targetting, and lastly it provides p_fhhash implementation
> > for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
> >
> > Thoughts?
> >
> > Thanks,
> > Tomar
> >
> > Nagendra S Tomar (5):
> > SUNRPC: Add a new multipath xprt policy for xprt selection based
> > on target filehandle hash
> > SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the
> nconnect
> > policy and pass it down from mount option to rpc layer
> > SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN ->
> RPC_TASK_USE_MAIN_XPRT
> > NFSv3: Add hash computation methods for NFSv3 RPCs
> > NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
> >
>
> --
> Chuck Lever
>
>

2021-03-23 16:16:37

by Chuck Lever III

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection



> On Mar 23, 2021, at 11:57 AM, Nagendra Tomar <[email protected]> wrote:
>
>>> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
>> <[email protected]> wrote:
>>>
>>> From: Nagendra S Tomar <[email protected]>
>>>
>>> If a clustered NFS server is behind an L4 loadbalancer the default
>>> nconnect roundrobin policy may cause RPC requests to a file to be
>>> sent to different cluster nodes. This is because the source port
>>> would be different for all the nconnect connections.
>>> While this should functionally work (since the cluster will usually
>>> have a consistent view irrespective of which node is serving the
>>> request), it may not be desirable from performance pov. As an
>>> example we have an NFSv3 frontend to our Object store, where every
>>> NFSv3 file is an object. Now if writes to the same file are sent
>>> roundrobin to different cluster nodes, the writes become very
>>> inefficient due to the consistency requirement for object update
>>> being done from different nodes.
>>> Similarly each node may maintain some kind of cache to serve the file
>>> data/metadata requests faster and even in that case it helps to have
>>> a xprt affinity for a file/dir.
>>> In general we have seen such scheme to scale very well.
>>>
>>> This patch introduces a new rpc_xprt_iter_ops for using an additional
>>> u32 (filehandle hash) to affine RPCs to the same file to one xprt.
>>> It adds a new mount option "ncpolicy=roundrobin|hash" which can be
>>> used to select the nconnect multipath policy for a given mount and
>>> pass the selected policy to the RPC client.
>>
>> This sets off my "not another administrative knob that has
>> to be tested and maintained, and can be abused" allergy.
>>
>> Also, my "because connections are shared by mounts of the same
>> server, all those mounts will all adopt this behavior" rhinitis.
>
> Yes, it's fair to call this out, but ncpolicy behaves like the nconnect
> parameter in this regards.
>
>> And my "why add a new feature to a legacy NFS version" hives.
>>
>>
>> I agree that your scenario can and should be addressed somehow.
>> I'd really rather see this done with pNFS.
>>
>> Since you are proposing patches against the upstream NFS client,
>> I presume all your clients /can/ support NFSv4.1+. It's the NFS
>> servers that are stuck on NFSv3, correct?
>
> Yes.
>
>>
>> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
>> servers. In fact it was designed for exactly this kind of mix of
>> NFS versions.
>>
>> No client code change will be necessary -- there are a lot more
>> clients than servers. The MDS can be made to work smartly in
>> concert with the load balancer, over time; or it can adopt other
>> clever strategies.
>>
>> IMHO pNFS is the better long-term strategy here.
>
> The fundamental difference here is that the clustered NFSv3 server
> is available over a single virtual IP, so IIUC even if we were to use
> NFSv41 with flexfiles layout, all it can handover to the client is that single
> (load-balanced) virtual IP and now when the clients do connect to the
> NFSv3 DS we still have the same issue. Am I understanding you right?
> Can you pls elaborate what you mean by "MDS can be made to work
> smartly in concert with the load balancer"?

I had thought there were multiple NFSv3 server targets in play.

If the load balancer is making them look like a single IP address,
then take it out of the equation: expose all the NFSv3 servers to
the clients and let the MDS direct operations to each data server.

AIUI this is the approach (without the use of NFSv3) taken by
NetApp next generation clusters.


>>> It adds a new rpc_procinfo member p_fhhash, which can be supplied
>>> by the specific RPC programs to return a u32 hash of the file/dir the
>>> RPC is targetting, and lastly it provides p_fhhash implementation
>>> for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
>>>
>>> Thoughts?
>>>
>>> Thanks,
>>> Tomar
>>>
>>> Nagendra S Tomar (5):
>>> SUNRPC: Add a new multipath xprt policy for xprt selection based
>>> on target filehandle hash
>>> SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the
>> nconnect
>>> policy and pass it down from mount option to rpc layer
>>> SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN ->
>> RPC_TASK_USE_MAIN_XPRT
>>> NFSv3: Add hash computation methods for NFSv3 RPCs
>>> NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
>>>
>>
>> --
>> Chuck Lever

--
Chuck Lever



2021-03-23 16:32:23

by Nagendra Tomar

Subject: RE: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

> > On Mar 23, 2021, at 11:57 AM, Nagendra Tomar
> <[email protected]> wrote:
> >
> >>> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
> >> <[email protected]> wrote:
> >>>
> >>> From: Nagendra S Tomar <[email protected]>
> >>>
> >>> If a clustered NFS server is behind an L4 loadbalancer the default
> >>> nconnect roundrobin policy may cause RPC requests to a file to be
> >>> sent to different cluster nodes. This is because the source port
> >>> would be different for all the nconnect connections.
> >>> While this should functionally work (since the cluster will usually
> >>> have a consistent view irrespective of which node is serving the
> >>> request), it may not be desirable from performance pov. As an
> >>> example we have an NFSv3 frontend to our Object store, where every
> >>> NFSv3 file is an object. Now if writes to the same file are sent
> >>> roundrobin to different cluster nodes, the writes become very
> >>> inefficient due to the consistency requirement for object update
> >>> being done from different nodes.
> >>> Similarly each node may maintain some kind of cache to serve the file
> >>> data/metadata requests faster and even in that case it helps to have
> >>> a xprt affinity for a file/dir.
> >>> In general we have seen such scheme to scale very well.
> >>>
> >>> This patch introduces a new rpc_xprt_iter_ops for using an additional
> >>> u32 (filehandle hash) to affine RPCs to the same file to one xprt.
> >>> It adds a new mount option "ncpolicy=roundrobin|hash" which can be
> >>> used to select the nconnect multipath policy for a given mount and
> >>> pass the selected policy to the RPC client.
> >>
> >> This sets off my "not another administrative knob that has
> >> to be tested and maintained, and can be abused" allergy.
> >>
> >> Also, my "because connections are shared by mounts of the same
> >> server, all those mounts will all adopt this behavior" rhinitis.
> >
> > Yes, it's fair to call this out, but ncpolicy behaves like the nconnect
> > parameter in this regards.
> >
> >> And my "why add a new feature to a legacy NFS version" hives.
> >>
> >>
> >> I agree that your scenario can and should be addressed somehow.
> >> I'd really rather see this done with pNFS.
> >>
> >> Since you are proposing patches against the upstream NFS client,
> >> I presume all your clients /can/ support NFSv4.1+. It's the NFS
> >> servers that are stuck on NFSv3, correct?
> >
> > Yes.
> >
> >>
> >> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
> >> servers. In fact it was designed for exactly this kind of mix of
> >> NFS versions.
> >>
> >> No client code change will be necessary -- there are a lot more
> >> clients than servers. The MDS can be made to work smartly in
> >> concert with the load balancer, over time; or it can adopt other
> >> clever strategies.
> >>
> >> IMHO pNFS is the better long-term strategy here.
> >
> > The fundamental difference here is that the clustered NFSv3 server
> > is available over a single virtual IP, so IIUC even if we were to use
> > NFSv41 with flexfiles layout, all it can handover to the client is that single
> > (load-balanced) virtual IP and now when the clients do connect to the
> > NFSv3 DS we still have the same issue. Am I understanding you right?
> > Can you pls elaborate what you mean by "MDS can be made to work
> > smartly in concert with the load balancer"?
>
> I had thought there were multiple NFSv3 server targets in play.
>
> If the load balancer is making them look like a single IP address,
> then take it out of the equation: expose all the NFSv3 servers to
> the clients and let the MDS direct operations to each data server.
>
> AIUI this is the approach (without the use of NFSv3) taken by
> NetApp next generation clusters.

Yeah, if we could have clients access all the NFSv3 servers then I agree,
pNFS would be a viable option. Unfortunately that's not an option in this
case. The cluster has hundreds of nodes and it's not an on-prem server but
a cloud service, so the simplicity of the single LB VIP is critical.

>
> >>> It adds a new rpc_procinfo member p_fhhash, which can be supplied
> >>> by the specific RPC programs to return a u32 hash of the file/dir the
> >>> RPC is targetting, and lastly it provides p_fhhash implementation
> >>> for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
> >>>
> >>> Thoughts?
> >>>
> >>> Thanks,
> >>> Tomar
> >>>
> >>> Nagendra S Tomar (5):
> >>> SUNRPC: Add a new multipath xprt policy for xprt selection based
> >>> on target filehandle hash
> >>> SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the
> >> nconnect
> >>> policy and pass it down from mount option to rpc layer
> >>> SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN ->
> >> RPC_TASK_USE_MAIN_XPRT
> >>> NFSv3: Add hash computation methods for NFSv3 RPCs
> >>> NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
> >>>
> >>
> >> --
> >> Chuck Lever
>
> --
> Chuck Lever
>
>

2021-03-23 17:27:09

by Chuck Lever III

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection



> On Mar 23, 2021, at 12:29 PM, Nagendra Tomar <[email protected]> wrote:
>
>>> On Mar 23, 2021, at 11:57 AM, Nagendra Tomar
>> <[email protected]> wrote:
>>>
>>>>> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
>>>> <[email protected]> wrote:
>>>>>
>>>>> From: Nagendra S Tomar <[email protected]>
>>>>
>>>> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
>>>> servers. In fact it was designed for exactly this kind of mix of
>>>> NFS versions.
>>>>
>>>> No client code change will be necessary -- there are a lot more
>>>> clients than servers. The MDS can be made to work smartly in
>>>> concert with the load balancer, over time; or it can adopt other
>>>> clever strategies.
>>>>
>>>> IMHO pNFS is the better long-term strategy here.
>>>
>>> The fundamental difference here is that the clustered NFSv3 server
>>> is available over a single virtual IP, so IIUC even if we were to use
>>> NFSv41 with flexfiles layout, all it can handover to the client is that single
>>> (load-balanced) virtual IP and now when the clients do connect to the
>>> NFSv3 DS we still have the same issue. Am I understanding you right?
>>> Can you pls elaborate what you mean by "MDS can be made to work
>>> smartly in concert with the load balancer"?
>>
>> I had thought there were multiple NFSv3 server targets in play.
>>
>> If the load balancer is making them look like a single IP address,
>> then take it out of the equation: expose all the NFSv3 servers to
>> the clients and let the MDS direct operations to each data server.
>>
>> AIUI this is the approach (without the use of NFSv3) taken by
>> NetApp next generation clusters.
>
> Yeah, if could have clients access all the NFSv3 servers then I agree, pNFS
> would be a viable option. Unfortunately that's not an option in this case. The
> cluster has 100's of nodes and it's not an on-prem server, but a cloud service,
> so the simplicity of the single LB VIP is critical.

The clients mount only the MDS. The MDS provides the DS addresses, they are
not exposed to client administrators. If the MDS adopts the load balancer's IP
address, then the clients would simply mount that same server address using
NFSv4.1.

The other alternative is to make the load balancer sniff the FH from each
NFS request and direct it to a consistent NFSv3 DS. I still prefer that
over adding a very special-case mount option to the Linux client. Again,
you'd be deploying a code change in one place, under your control, instead
of on 100's of clients.
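
As a sketch of what such FH-based steering could look like in the
balancer (purely illustrative userspace code, not any real load
balancer's API; a real implementation would first have to parse the
RPC/NFS header to extract the filehandle), rendezvous hashing keeps a
given filehandle on a consistent DS and stays mostly stable when a
backend is added or removed:

#include <stdint.h>
#include <stdio.h>

#define NR_BACKENDS 4

/* Mix the filehandle bytes with a backend id (FNV-1a style). */
static uint32_t mix(const unsigned char *fh, unsigned int len,
		    uint32_t node)
{
	uint32_t h = 2166136261u ^ node;

	while (len--) {
		h ^= *fh++;
		h *= 16777619u;
	}
	return h;
}

/* Rendezvous (highest-random-weight) hashing: pick the backend with
 * the largest mixed weight for this filehandle. */
static unsigned int pick_backend(const unsigned char *fh, unsigned int len)
{
	unsigned int best = 0, i;
	uint32_t best_w = 0;

	for (i = 0; i < NR_BACKENDS; i++) {
		uint32_t w = mix(fh, len, i);

		if (w >= best_w) {
			best_w = w;
			best = i;
		}
	}
	return best;
}

int main(void)
{
	const unsigned char fh[] = { 0xde, 0xad, 0xbe, 0xef };

	printf("DS index = %u\n", pick_backend(fh, sizeof(fh)));
	return 0;
}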


--
Chuck Lever



2021-03-23 18:04:54

by Nagendra Tomar

Subject: RE: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

> > On Mar 23, 2021, at 12:29 PM, Nagendra Tomar
> <[email protected]> wrote:
> >
> >>> On Mar 23, 2021, at 11:57 AM, Nagendra Tomar
> >> <[email protected]> wrote:
> >>>
> >>>>> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
> >>>> <[email protected]> wrote:
> >>>>>
> >>>>> From: Nagendra S Tomar <[email protected]>
> >>>>
> >>>> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
> >>>> servers. In fact it was designed for exactly this kind of mix of
> >>>> NFS versions.
> >>>>
> >>>> No client code change will be necessary -- there are a lot more
> >>>> clients than servers. The MDS can be made to work smartly in
> >>>> concert with the load balancer, over time; or it can adopt other
> >>>> clever strategies.
> >>>>
> >>>> IMHO pNFS is the better long-term strategy here.
> >>>
> >>> The fundamental difference here is that the clustered NFSv3 server
> >>> is available over a single virtual IP, so IIUC even if we were to use
> >>> NFSv41 with flexfiles layout, all it can handover to the client is that single
> >>> (load-balanced) virtual IP and now when the clients do connect to the
> >>> NFSv3 DS we still have the same issue. Am I understanding you right?
> >>> Can you pls elaborate what you mean by "MDS can be made to work
> >>> smartly in concert with the load balancer"?
> >>
> >> I had thought there were multiple NFSv3 server targets in play.
> >>
> >> If the load balancer is making them look like a single IP address,
> >> then take it out of the equation: expose all the NFSv3 servers to
> >> the clients and let the MDS direct operations to each data server.
> >>
> >> AIUI this is the approach (without the use of NFSv3) taken by
> >> NetApp next generation clusters.
> >
> > Yeah, if could have clients access all the NFSv3 servers then I agree, pNFS
> > would be a viable option. Unfortunately that's not an option in this case. The
> > cluster has 100's of nodes and it's not an on-prem server, but a cloud service,
> > so the simplicity of the single LB VIP is critical.
>
> The clients mount only the MDS. The MDS provides the DS addresses, they are
> not exposed to client administrators. If the MDS adopts the load balancer's IP
> address, then the clients would simply mount that same server address using
> NFSv4.1.

I understand/agree with the "client mounts the single MDS IP" part. What I
meant by "simplicity of the single LB VIP" is not having to have so many
routable IP addresses, since the clients could be on a (very) different
network than the storage cluster they are accessing, even though client
admins will not deal with those addresses themselves, as you mention.

>
> The other alternative is to make the load balancer sniff the FH from each
> NFS request and direct it to a consistent NFSv3 DS. I still prefer that
> over adding a very special-case mount option to the Linux client. Again,
> you'd be deploying a code change in one place, under your control, instead
> of on 100's of clients.

That is one option, but it makes the LB application-aware and potentially
less performant. Appreciate your suggestion, though!
I was hoping that such a client-side change could be useful to more users
with similar setups; after all, file-to-connection affinity doesn't sound
too arcane, and one can think of benefits of one node processing one file. No?

>
>
> --
> Chuck Lever
>
>

2021-03-23 18:28:43

by Chuck Lever III

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection



> On Mar 23, 2021, at 2:01 PM, Nagendra Tomar <[email protected]> wrote:
>
>>> On Mar 23, 2021, at 12:29 PM, Nagendra Tomar
>> <[email protected]> wrote:
>>>
>>>>> On Mar 23, 2021, at 11:57 AM, Nagendra Tomar
>>>> <[email protected]> wrote:
>>>>>
>>>>>>> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
>>>>>> <[email protected]> wrote:
>>>>>>>
>>>>>>> From: Nagendra S Tomar <[email protected]>
>>>>>>
>>>>>> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
>>>>>> servers. In fact it was designed for exactly this kind of mix of
>>>>>> NFS versions.
>>>>>>
>>>>>> No client code change will be necessary -- there are a lot more
>>>>>> clients than servers. The MDS can be made to work smartly in
>>>>>> concert with the load balancer, over time; or it can adopt other
>>>>>> clever strategies.
>>>>>>
>>>>>> IMHO pNFS is the better long-term strategy here.
>>>>>
>>>>> The fundamental difference here is that the clustered NFSv3 server
>>>>> is available over a single virtual IP, so IIUC even if we were to use
>>>>> NFSv41 with flexfiles layout, all it can handover to the client is that single
>>>>> (load-balanced) virtual IP and now when the clients do connect to the
>>>>> NFSv3 DS we still have the same issue. Am I understanding you right?
>>>>> Can you pls elaborate what you mean by "MDS can be made to work
>>>>> smartly in concert with the load balancer"?
>>>>
>>>> I had thought there were multiple NFSv3 server targets in play.
>>>>
>>>> If the load balancer is making them look like a single IP address,
>>>> then take it out of the equation: expose all the NFSv3 servers to
>>>> the clients and let the MDS direct operations to each data server.
>>>>
>>>> AIUI this is the approach (without the use of NFSv3) taken by
>>>> NetApp next generation clusters.
>>>
>>> Yeah, if could have clients access all the NFSv3 servers then I agree, pNFS
>>> would be a viable option. Unfortunately that's not an option in this case. The
>>> cluster has 100's of nodes and it's not an on-prem server, but a cloud service,
>>> so the simplicity of the single LB VIP is critical.
>>
>> The clients mount only the MDS. The MDS provides the DS addresses, they are
>> not exposed to client administrators. If the MDS adopts the load balancer's IP
>> address, then the clients would simply mount that same server address using
>> NFSv4.1.
>
> I understand/agree with the "client mounts the single MDS IP" part. What I meant
> by "simplicity of the single LB VIP" is to not having to have so many routable
> IP addresses, since the clients could be on a (very) different network than the
> storage cluster they are accessing, even though client admins will not deal with
> those addresses themselves, as you mention.

Got it.


>> The other alternative is to make the load balancer sniff the FH from each
>> NFS request and direct it to a consistent NFSv3 DS. I still prefer that
>> over adding a very special-case mount option to the Linux client. Again,
>> you'd be deploying a code change in one place, under your control, instead
>> of on 100's of clients.
>
> That is one option but that makes LB application aware and potentially less
> performant. Appreciate your suggestion, though!

You might get part of the way there by having the LB direct
traffic from a particular client to a particular backend NFS
server. The client and its applications are bound to have a
narrow file working set.


> I was hoping that such a client side change could be useful to possibly more
> users with similar setups, after all file->connection affinity doesn't sound too
> arcane and one can think of benefits of one node processing one file. No?

That's where I'm getting hung up (outside the personal preference
that we not introduce yet another mount option). While I understand
what's going on now (thanks!) I'm not sure this is a common usage
scenario for NFSv3. Other opinions welcome here!

Nor does it seem like one that we want to encourage over solutions
like pNFS. Generally the Linux community has taken the position
that server bugs should be addressed on the server, and this seems
like a problem that is introduced by your middlebox and server
combination. The client is working properly and is complying with
spec.

If the server cluster prefers particular requests to go to particular
targets, a layout is the way to go, IMHO.

(I'm not speaking for the NFS client maintainers, just offering an
opinion and hoping my comments clarify the scenario for others on
the list paying attention to this thread).

--
Chuck Lever



2021-03-24 08:38:17

by Nagendra Tomar

Subject: RE: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

> > On Mar 23, 2021, at 2:01 PM, Nagendra Tomar
> <[email protected]> wrote:
> >
> >> The other alternative is to make the load balancer sniff the FH from each
> >> NFS request and direct it to a consistent NFSv3 DS. I still prefer that
> >> over adding a very special-case mount option to the Linux client. Again,
> >> you'd be deploying a code change in one place, under your control, instead
> >> of on 100's of clients.
> >
> > That is one option but that makes LB application aware and potentially less
> > performant. Appreciate your suggestion, though!
>
> You might get part of the way there by having the LB direct
> traffic from a particular client to a particular backend NFS
> server. The client and its applications are bound to have a
> narrow file working set.

Yes, with the limitation that one client will only be served by one cluster
node. This is not as good as distributing different files to different nodes,
which would get the highest aggregate throughput/IOPS possible.

>
>
> > I was hoping that such a client side change could be useful to possibly more
> > users with similar setups, after all file->connection affinity doesn't sound too
> > arcane and one can think of benefits of one node processing one file. No?
>
> That's where I'm getting hung up (outside the personal preference
> that we not introduce yes another mount option). While I understand
> what's going on now (thanks!) I'm not sure this is a common usage
> scenario for NFSv3. Other opinions welcome here!
>
> Nor does it seem like one that we want to encourage over solutions
> like pNFS. Generally the Linux community has taken the position
> that server bugs should be addressed on the server, and this seems
> like a problem that is introduced by your middlebox and server
> combination.

I would like to look at it not as a problem created by our server setup,
but rather as "one more scenario" which the client can easily and
generically handle, hence the patch.

> The client is working properly and is complying with spec.

The nconnect round-robin distribution is just one way of utilizing multiple
connections, which happens to be limiting for this specific use case.
My patch proposes another way of distributing RPCs over the connections,
which is more suitable for this use case and maybe others. No violation of
any spec :-)

>
> If the server cluster prefers particular requests to go to particular
> targets, a layout is the way to go, IMHO.
>
> (I'm not speaking for the NFS client maintainers, just offering an
> opinion and hoping my comments clarify the scenario for others on
> the list paying attention to this thread).

Appreciate your comments, thanks!
It's always great to hear from other well-informed users.

>
> --
> Chuck Lever
>
>

2021-03-24 13:32:22

by Tom Talpey

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

On 3/23/2021 12:14 PM, Chuck Lever III wrote:
>
>
>> On Mar 23, 2021, at 11:57 AM, Nagendra Tomar <[email protected]> wrote:
>>
>>>> On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
>>> <[email protected]> wrote:
>>>>
>>>> From: Nagendra S Tomar <[email protected]>
>>>>
>>>> If a clustered NFS server is behind an L4 loadbalancer the default
>>>> nconnect roundrobin policy may cause RPC requests to a file to be
>>>> sent to different cluster nodes. This is because the source port
>>>> would be different for all the nconnect connections.
>>>> While this should functionally work (since the cluster will usually
>>>> have a consistent view irrespective of which node is serving the
>>>> request), it may not be desirable from performance pov. As an
>>>> example we have an NFSv3 frontend to our Object store, where every
>>>> NFSv3 file is an object. Now if writes to the same file are sent
>>>> roundrobin to different cluster nodes, the writes become very
>>>> inefficient due to the consistency requirement for object update
>>>> being done from different nodes.
>>>> Similarly each node may maintain some kind of cache to serve the file
>>>> data/metadata requests faster and even in that case it helps to have
>>>> a xprt affinity for a file/dir.
>>>> In general we have seen such scheme to scale very well.
>>>>
>>>> This patch introduces a new rpc_xprt_iter_ops for using an additional
>>>> u32 (filehandle hash) to affine RPCs to the same file to one xprt.
>>>> It adds a new mount option "ncpolicy=roundrobin|hash" which can be
>>>> used to select the nconnect multipath policy for a given mount and
>>>> pass the selected policy to the RPC client.
>>>
>>> This sets off my "not another administrative knob that has
>>> to be tested and maintained, and can be abused" allergy.
>>>
>>> Also, my "because connections are shared by mounts of the same
>>> server, all those mounts will all adopt this behavior" rhinitis.
>>
>> Yes, it's fair to call this out, but ncpolicy behaves like the nconnect
>> parameter in this regards.
>>
>>> And my "why add a new feature to a legacy NFS version" hives.
>>>
>>>
>>> I agree that your scenario can and should be addressed somehow.
>>> I'd really rather see this done with pNFS.
>>>
>>> Since you are proposing patches against the upstream NFS client,
>>> I presume all your clients /can/ support NFSv4.1+. It's the NFS
>>> servers that are stuck on NFSv3, correct?
>>
>> Yes.
>>
>>>
>>> The flexfiles layout can handle an NFSv4.1 client and NFSv3 data
>>> servers. In fact it was designed for exactly this kind of mix of
>>> NFS versions.
>>>
>>> No client code change will be necessary -- there are a lot more
>>> clients than servers. The MDS can be made to work smartly in
>>> concert with the load balancer, over time; or it can adopt other
>>> clever strategies.
>>>
>>> IMHO pNFS is the better long-term strategy here.
>>
>> The fundamental difference here is that the clustered NFSv3 server
>> is available over a single virtual IP, so IIUC even if we were to use
>> NFSv41 with flexfiles layout, all it can handover to the client is that single
>> (load-balanced) virtual IP and now when the clients do connect to the
>> NFSv3 DS we still have the same issue. Am I understanding you right?
>> Can you pls elaborate what you mean by "MDS can be made to work
>> smartly in concert with the load balancer"?
>
> I had thought there were multiple NFSv3 server targets in play.
>
> If the load balancer is making them look like a single IP address,
> then take it out of the equation: expose all the NFSv3 servers to
> the clients and let the MDS direct operations to each data server.
>
> AIUI this is the approach (without the use of NFSv3) taken by
> NetApp next generation clusters.

It certainly sounds like the load balancer is actually performing a
storage router function here, and round-robin is going to thrash that
badly. I'm not sure that exposing a magic "hash" knob is a very good
solution though. Pushing decisions to the sysadmin is rarely a great
approach.

Why not simply argue that "hash" is the better algorithm, and prove
that it should be the default? Is that not the case?

Tom.

>>>> It adds a new rpc_procinfo member p_fhhash, which can be supplied
>>>> by the specific RPC programs to return a u32 hash of the file/dir the
>>>> RPC is targetting, and lastly it provides p_fhhash implementation
>>>> for various NFS v3/v4/v41/v42 RPCs to generate the hash correctly.
>>>>
>>>> Thoughts?
>>>>
>>>> Thanks,
>>>> Tomar
>>>>
>>>> Nagendra S Tomar (5):
>>>> SUNRPC: Add a new multipath xprt policy for xprt selection based
>>>> on target filehandle hash
>>>> SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent the
>>> nconnect
>>>> policy and pass it down from mount option to rpc layer
>>>> SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN ->
>>> RPC_TASK_USE_MAIN_XPRT
>>>> NFSv3: Add hash computation methods for NFSv3 RPCs
>>>> NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
>>>>
>>>
>>> --
>>> Chuck Lever
>
> --
> Chuck Lever
>
>
>
>

2021-03-24 14:13:38

by Trond Myklebust

Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

On Wed, 2021-03-24 at 09:23 -0400, Tom Talpey wrote:
> On 3/23/2021 12:14 PM, Chuck Lever III wrote:
> >
> >
> > > On Mar 23, 2021, at 11:57 AM, Nagendra Tomar <
> > > [email protected]> wrote:
> > >
> > > > > On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
> > > > <[email protected]> wrote:
> > > > >
> > > > > From: Nagendra S Tomar <[email protected]>
> > > > >
> > > > > If a clustered NFS server is behind an L4 loadbalancer the
> > > > > default
> > > > > nconnect roundrobin policy may cause RPC requests to a file
> > > > > to be
> > > > > sent to different cluster nodes. This is because the source
> > > > > port
> > > > > would be different for all the nconnect connections.
> > > > > While this should functionally work (since the cluster will
> > > > > usually
> > > > > have a consistent view irrespective of which node is serving
> > > > > the
> > > > > request), it may not be desirable from performance pov. As an
> > > > > example we have an NFSv3 frontend to our Object store, where
> > > > > every
> > > > > NFSv3 file is an object. Now if writes to the same file are
> > > > > sent
> > > > > roundrobin to different cluster nodes, the writes become very
> > > > > inefficient due to the consistency requirement for object
> > > > > update
> > > > > being done from different nodes.
> > > > > Similarly each node may maintain some kind of cache to serve
> > > > > the file
> > > > > data/metadata requests faster and even in that case it helps
> > > > > to have
> > > > > a xprt affinity for a file/dir.
> > > > > In general we have seen such scheme to scale very well.
> > > > >
> > > > > This patch introduces a new rpc_xprt_iter_ops for using an
> > > > > additional
> > > > > u32 (filehandle hash) to affine RPCs to the same file to one
> > > > > xprt.
> > > > > It adds a new mount option "ncpolicy=roundrobin|hash" which
> > > > > can be
> > > > > used to select the nconnect multipath policy for a given
> > > > > mount and
> > > > > pass the selected policy to the RPC client.
> > > >
> > > > This sets off my "not another administrative knob that has
> > > > to be tested and maintained, and can be abused" allergy.
> > > >
> > > > Also, my "because connections are shared by mounts of the same
> > > > server, all those mounts will all adopt this behavior"
> > > > rhinitis.
> > >
> > > Yes, it's fair to call this out, but ncpolicy behaves like the
> > > nconnect
> > > parameter in this regards.
> > >
> > > > And my "why add a new feature to a legacy NFS version" hives.
> > > >
> > > >
> > > > I agree that your scenario can and should be addressed somehow.
> > > > I'd really rather see this done with pNFS.
> > > >
> > > > Since you are proposing patches against the upstream NFS
> > > > client,
> > > > I presume all your clients /can/ support NFSv4.1+. It's the NFS
> > > > servers that are stuck on NFSv3, correct?
> > >
> > > Yes.
> > >
> > > >
> > > > The flexfiles layout can handle an NFSv4.1 client and NFSv3
> > > > data
> > > > servers. In fact it was designed for exactly this kind of mix
> > > > of
> > > > NFS versions.
> > > >
> > > > No client code change will be necessary -- there are a lot more
> > > > clients than servers. The MDS can be made to work smartly in
> > > > concert with the load balancer, over time; or it can adopt
> > > > other
> > > > clever strategies.
> > > >
> > > > IMHO pNFS is the better long-term strategy here.
> > >
> > > The fundamental difference here is that the clustered NFSv3
> > > server
> > > is available over a single virtual IP, so IIUC even if we were to
> > > use
> > > NFSv41 with flexfiles layout, all it can handover to the client
> > > is that single
> > > (load-balanced) virtual IP and now when the clients do connect to
> > > the
> > > NFSv3 DS we still have the same issue. Am I understanding you
> > > right?
> > > Can you pls elaborate what you mean by "MDS can be made to work
> > > smartly in concert with the load balancer"?
> >
> > I had thought there were multiple NFSv3 server targets in play.
> >
> > If the load balancer is making them look like a single IP address,
> > then take it out of the equation: expose all the NFSv3 servers to
> > the clients and let the MDS direct operations to each data server.
> >
> > AIUI this is the approach (without the use of NFSv3) taken by
> > NetApp next generation clusters.
>
> It certainly sounds like the load balancer is actually performing a
> storage router function here, and roundrobin is going to thrash that
> badly. I'm not sure that exposing a magic "hash" knob is a very good
> solution though. Pushing decisions to the sysadmin is rarely a great
> approach.
>
> Why not simply argue that "hash" is the better algorithm, and prove
> that it be the default? Is that not the case?
>
>

It's not, no. So we're not making that the default.

Pushing all the I/O to a single file through a single TCP connection only
makes sense if you have a setup like the one Nagendra is describing.
Otherwise, you're better off spreading it across multiple connections
(assuming that you have multiple NICs).

> Tom.
>
> > > > > It adds a new rpc_procinfo member p_fhhash, which can be
> > > > > supplied
> > > > > by the specific RPC programs to return a u32 hash of the
> > > > > file/dir the
> > > > > RPC is targetting, and lastly it provides p_fhhash
> > > > > implementation
> > > > > for various NFS v3/v4/v41/v42 RPCs to generate the hash
> > > > > correctly.
> > > > >
> > > > > Thoughts?
> > > > >
> > > > > Thanks,
> > > > > Tomar
> > > > >
> > > > > Nagendra S Tomar (5):
> > > > > SUNRPC: Add a new multipath xprt policy for xprt selection
> > > > > based
> > > > >    on target filehandle hash
> > > > > SUNRPC/NFSv3/NFSv4: Introduce "enum ncpolicy" to represent
> > > > > the
> > > > nconnect
> > > > >    policy and pass it down from mount option to rpc layer
> > > > > SUNRPC/NFSv4: Rename RPC_TASK_NO_ROUND_ROBIN ->
> > > > RPC_TASK_USE_MAIN_XPRT
> > > > > NFSv3: Add hash computation methods for NFSv3 RPCs
> > > > > NFSv4: Add hash computation methods for NFSv4/NFSv42 RPCs
> > > > >
> > > >
> > > > --
> > > > Chuck Lever
> >
> > --
> > Chuck Lever
> >
> >
> >
> >

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2021-03-24 14:36:58

by Chuck Lever III

[permalink] [raw]
Subject: Re: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection



> On Mar 23, 2021, at 7:31 PM, Nagendra Tomar <[email protected]> wrote:
>>
>>> I was hoping that such a client side change could be useful to possibly more
>>> users with similar setups, after all file->connection affinity doesn't sound too
>>> arcane and one can think of benefits of one node processing one file. No?
>>
>> That's where I'm getting hung up (outside the personal preference
>> that we not introduce yet another mount option). While I understand
>> what's going on now (thanks!), I'm not sure this is a common usage
>> scenario for NFSv3. Other opinions welcome here!
>>
>> Nor does it seem like one that we want to encourage over solutions
>> like pNFS. Generally the Linux community has taken the position
>> that server bugs should be addressed on the server, and this seems
>> like a problem that is introduced by your middlebox and server
>> combination.
>
> I would like to look at it not as a problem created by our server setup,
> but rather as "one more scenario" which the client can easily and
> generically handle, and hence the patch.
>
>> The client is working properly and is complying with spec.
>
> The nconnect roundrobin distribution is just one way of utilizing multiple
> connections, which happens to be limiting for this specific usecase.
> My patch proposes another way of distributing RPCs over the connections,
> which is more suitable for this usecase and maybe others.

Indeed, the nconnect work isn't quite complete, and the client will
need some way to specify how to schedule RPCs over several connections
to the same server. There seem to be two somewhat orthogonal
components to your proposal:

A. The introduction of a mount option to specify an RPC connection
scheduling mechanism

B. The use of a file handle hash to do that scheduling


For A: Again, I'd rather avoid adding more mount options, for reasons
I've described most recently over in the d_type/READDIR thread. There
are other options here. Anna has proposed a sysfs API that exposes
each kernel RPC connection for fine-grained control. See this thread:

https://lore.kernel.org/linux-nfs/[email protected]/

Dan Aloni has proposed an additional mechanism that enables user space
to associate an NFS mount point with its underlying RPC connections.

These approaches might be suitable for your purpose, or they might at
least provide a little inspiration to get creative.


For B: I agree with Tom that leaving this up to client system
administrators is a punt and usually not a scalable or future-looking
solution.

And I maintain you will be better off with a centralized and easily
configurable mechanism for balancing load, not a fixed algorithm that
you have to introduce to your clients via code changes or repeated
distributed changes to mount options.


There are other ways to utilize your LB. Since this is NFSv3, you
might expose your back-end NFSv3 servers by destination port (i.e.,
with a set of NAT rules).

MDS NFSv4 server: clients get to it at the VIP address, port 2049
DS NFSv3 server A: clients get to it at the VIP address, port i
DS NFSv3 server B: clients get to it at the VIP address, port j
DS NFSv3 server C: clients get to it at the VIP address, port k

The LB translates [VIP]:i into [server A]:2049, [VIP]:j into
[server B]:2049, and so on.
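
To make that concrete, here is a purely illustrative sketch of the
forwarding table such an LB would apply. The hostnames and port
numbers below are invented for illustration; they do not come from
the patch series or any real deployment:

#include <stdint.h>

/* Hypothetical L4 forwarding table: VIP destination port -> backend. */
struct fwd_rule {
	uint16_t    vip_port;	/* port clients dial on the VIP */
	const char *backend;	/* cluster node the LB forwards to */
};

static const struct fwd_rule fwd_table[] = {
	{ 2049, "mds.example.com:2049"  },	/* NFSv4.1 metadata server   */
	{ 2050, "ds-a.example.com:2049" },	/* NFSv3 data server A ("i") */
	{ 2051, "ds-b.example.com:2049" },	/* NFSv3 data server B ("j") */
	{ 2052, "ds-c.example.com:2049" },	/* NFSv3 data server C ("k") */
};

Clients would then reach each data server at the VIP plus its assigned
port, rather than all of them hiding behind a single [VIP]:2049.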

I'm not sure if the flexfiles layout carries universal addresses with
port information, though. If it did, that would enable you to expose
all your backend data servers directly to clients via a single VIP,
and yet the LB would still be just a Layer 3 forwarding service and
not application-aware.


--
Chuck Lever



2021-03-25 03:36:00

by Nagendra Tomar

[permalink] [raw]
Subject: RE: [PATCH 0/5] nfs: Add mount option for forcing RPC requests for one file over one connection

> From: Trond Myklebust <[email protected]>
> Sent: 24 March 2021 19:42
> On Wed, 2021-03-24 at 09:23 -0400, Tom Talpey wrote:
> > On 3/23/2021 12:14 PM, Chuck Lever III wrote:
> > >
> > >
> > > > On Mar 23, 2021, at 11:57 AM, Nagendra Tomar <
> > > > [email protected]> wrote:
> > > >
> > > > > > On Mar 23, 2021, at 1:46 AM, Nagendra Tomar
> > > > > <[email protected]> wrote:
> > > > > >
> > > > > > From: Nagendra S Tomar <[email protected]>
> > > > > >
> > > > > > If a clustered NFS server is behind an L4 loadbalancer the
> > > > > > default
> > > > > > nconnect roundrobin policy may cause RPC requests to a file
> > > > > > to be
> > > > > > sent to different cluster nodes. This is because the source
> > > > > > port
> > > > > > would be different for all the nconnect connections.
> > > > > > While this should functionally work (since the cluster will
> > > > > > usually
> > > > > > have a consistent view irrespective of which node is serving
> > > > > > the
> > > > > > request), it may not be desirable from performance pov. As an
> > > > > > example we have an NFSv3 frontend to our Object store, where
> > > > > > every
> > > > > > NFSv3 file is an object. Now if writes to the same file are
> > > > > > sent
> > > > > > roundrobin to different cluster nodes, the writes become very
> > > > > > inefficient due to the consistency requirement for object
> > > > > > update
> > > > > > being done from different nodes.
> > > > > > Similarly each node may maintain some kind of cache to serve
> > > > > > the file
> > > > > > data/metadata requests faster and even in that case it helps
> > > > > > to have
> > > > > > a xprt affinity for a file/dir.
> > > > > > In general we have seen such scheme to scale very well.
> > > > > >
> > > > > > This patch introduces a new rpc_xprt_iter_ops for using an
> > > > > > additional
> > > > > > u32 (filehandle hash) to affine RPCs to the same file to one
> > > > > > xprt.
> > > > > > It adds a new mount option "ncpolicy=roundrobin|hash" which
> > > > > > can be
> > > > > > used to select the nconnect multipath policy for a given
> > > > > > mount and
> > > > > > pass the selected policy to the RPC client.
> > > > >
> > > > > This sets off my "not another administrative knob that has
> > > > > to be tested and maintained, and can be abused" allergy.
> > > > >
> > > > > Also, my "because connections are shared by mounts of the same
> > > > > server, all those mounts will all adopt this behavior"
> > > > > rhinitis.
> > > >
> > > > Yes, it's fair to call this out, but ncpolicy behaves like the
> > > > nconnect
> > > > parameter in this regards.
> > > >
> > > > > And my "why add a new feature to a legacy NFS version" hives.
> > > > >
> > > > >
> > > > > I agree that your scenario can and should be addressed somehow.
> > > > > I'd really rather see this done with pNFS.
> > > > >
> > > > > Since you are proposing patches against the upstream NFS
> > > > > client,
> > > > > I presume all your clients /can/ support NFSv4.1+. It's the NFS
> > > > > servers that are stuck on NFSv3, correct?
> > > >
> > > > Yes.
> > > >
> > > > >
> > > > > The flexfiles layout can handle an NFSv4.1 client and NFSv3
> > > > > data
> > > > > servers. In fact it was designed for exactly this kind of mix
> > > > > of
> > > > > NFS versions.
> > > > >
> > > > > No client code change will be necessary -- there are a lot more
> > > > > clients than servers. The MDS can be made to work smartly in
> > > > > concert with the load balancer, over time; or it can adopt
> > > > > other
> > > > > clever strategies.
> > > > >
> > > > > IMHO pNFS is the better long-term strategy here.
> > > >
> > > > The fundamental difference here is that the clustered NFSv3
> > > > server
> > > > is available over a single virtual IP, so IIUC even if we were to
> > > > use
> > > > NFSv41 with flexfiles layout, all it can handover to the client
> > > > is that single
> > > > (load-balanced) virtual IP and now when the clients do connect to
> > > > the
> > > > NFSv3 DS we still have the same issue. Am I understanding you
> > > > right?
> > > > Can you pls elaborate what you mean by "MDS can be made to work
> > > > smartly in concert with the load balancer"?
> > >
> > > I had thought there were multiple NFSv3 server targets in play.
> > >
> > > If the load balancer is making them look like a single IP address,
> > > then take it out of the equation: expose all the NFSv3 servers to
> > > the clients and let the MDS direct operations to each data server.
> > >
> > > AIUI this is the approach (without the use of NFSv3) taken by
> > > NetApp next generation clusters.
> >
> > It certainly sounds like the load balancer is actually performing a
> > storage router function here, and roundrobin is going to thrash that
> > badly. I'm not sure that exposing a magic "hash" knob is a very good
> > solution though. Pushing decisions to the sysadmin is rarely a great
> > approach.
> >
> > Why not simply argue that "hash" is the better algorithm, and prove
> > that it be the default? Is that not the case?
> >
> >
>
> It's not, no. So we're not making that a default.
>
> Pushing all the I/O to a single file through a single TCP connection
> only makes sense if you have a setup like the one Nagendra is
> describing. Otherwise, you're better off spreading it across multiple
> connections (assuming that you have multiple NICs).

Yes, the multiple nconnect connections primarily allow us to distribute RPCs
for a single mount over multiple slaves/NICs of a bonded/aggregated interface.
If they all terminate at the same storage server, we can even stripe RPCs to
one file over multiple NICs, which helps scale even I/O to a single file.
The hash-based distribution would limit one file to one connection, though
it still helps scale operations across many files.
As we can see, the former is desirable (for most common setups) and hence
should be the default.
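
To make the contrast concrete, here is a minimal user-space sketch of
the two selection policies. This is not the patch code: the helper
names and the FNV-1a hash below are invented for illustration only
(the series itself derives the u32 via per-RPC p_fhhash methods and
consumes it in a new rpc_xprt_iter_ops):

#include <stdint.h>
#include <stddef.h>

/* Round-robin (current default): successive RPCs go to successive
 * transports, so even I/O to a single file is striped across all of
 * the nconnect connections. */
static unsigned int pick_xprt_roundrobin(unsigned int *cursor,
					 unsigned int nxprts)
{
	return (*cursor)++ % nxprts;
}

/* Hash policy: derive a u32 from the target filehandle so that every
 * RPC to the same file lands on the same transport. */
static uint32_t fh_hash(const unsigned char *fh, size_t fhlen)
{
	uint32_t h = 2166136261u;		/* FNV-1a offset basis */

	while (fhlen--)
		h = (h ^ *fh++) * 16777619u;	/* FNV-1a prime */
	return h;
}

static unsigned int pick_xprt_by_fh(const unsigned char *fh, size_t fhlen,
				    unsigned int nxprts)
{
	return fh_hash(fh, fhlen) % nxprts;
}

With nconnect=4, the first picker spreads eight WRITEs to one file
over all four connections, while the second pins all eight to a single
connection: exactly what the load-balanced cluster wants, but a cap on
single-file throughput when the connections ride different NICs of a
bond.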
