This patchset is an update to the previously published version. The
most significant change is a renaming of the transport switch data
structures and functions based on a recommendation from Chuck Lever.
The portlist implementation was also cleaned up based on feedback from
Trond. Additionally, the synopses of the register and unregister
services were changed to return errors, and module reference counting
was added.
I've included the original description below for new reviewers.
This patchset implements a sunrpc server side pluggable transport
switch that supports dynamically registered transports.
The knfsd daemon has been modified to allow user-mode programs
to add a new listening endpoint by writing a string
to the portlist file. The format of the string is as follows:
<transport-name> <port>
For example,
# echo rdma 2050 > /proc/fs/nfsd/portlist
will cause the knfsd daemon to attempt to add a listening endpoint on
port 2050 using the 'rdma' transport.
Transports register themselves with the transport switch using a
new API that has the following synopsis:
int svc_register_transport(struct svc_sock_ops *xprt)
The text transport name is contained in a field in the xprt structure.
A new service has also been added that takes a transport name
instead of an IP protocol number to specify the transport on which the
listening endpoint is to be created. This function is defined as follows:
int svc_create_svcsock(struct svc_serv *serv, char *transport_name,
                       unsigned short port, int flags);
The existing svc_makesock interface was left in place to avoid impacting
existing servers. It has been modified to map IP protocol numbers to
transport strings.
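To make the flow concrete for new reviewers, here is a minimal sketch of
how a transport module would plug into the switch. The field names follow
the svc_xprt structure used in this series; the 'foo' transport and all of
its foo_* symbols are made up purely for illustration.

static struct svc_xprt svc_foo_xprt = {
	.xpt_name	 = "foo",
	.xpt_owner	 = THIS_MODULE,
	.xpt_create_svc	 = foo_create_svc,	/* create a listening endpoint */
	.xpt_recvfrom	 = foo_recvfrom,
	.xpt_sendto	 = foo_sendto,
	.xpt_detach	 = foo_detach,
	.xpt_free	 = foo_free,
	.xpt_max_payload = FOO_MAX_PAYLOAD,
};

static int __init foo_transport_init(void)
{
	/* After this, "echo foo <port> > /proc/fs/nfsd/portlist" can
	 * resolve the name to this switch entry. */
	return svc_register_transport(&svc_foo_xprt);
}

static void __exit foo_transport_exit(void)
{
	svc_unregister_transport(&svc_foo_xprt);
}

module_init(foo_transport_init);
module_exit(foo_transport_exit);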
--
Signed-off-by: Tom Tucker <[email protected]>
-------------------------------------------------------------------------
At 05:12 PM 8/30/2007, Chuck Lever wrote:
>Version 2 of the rpcbind protocol doesn't take a netid. To register a
>netid with the local portmapper, you need two things to happen:
>
>1. The local portmapper needs to be something that can take v3 or v4
>registration requests
>
>2. rpcb_register needs to be converted to use v3 or v4 of the rpcbind
>protocol to do the registering.
>
>3. Mr. Talpey needs to submit his patch to make RPC transport
>implementations support RPC_DISPLAY_NETID so the rpcbind client has a
>proper netid string to send to the local rpcbind daemon.
You said "two" things. :-)
Yes, I'm working on the rpcbind path a bit. Hope to have some patches
to it and the xprt framework out for comments tomorrow, along with some
new NFS/RDMA code.
Tom.
-------------------------------------------------------------------------
On Mon, Aug 20, 2007 at 11:20:00AM -0500, Tom Tucker wrote:
> This patchset represents an update to the previously published
> version.
git-am whines about whitespace in a few places (space before tab, etc.);
could you run these all through checkpatch.pl?
--b.
-------------------------------------------------------------------------
At 12:50 PM 8/29/2007, Chuck Lever wrote:
>I think the client and server should use the *same* mechanism for
>matching transports -- either a string *or* a protocol number (not
>necessarily the IPPROTO number, but something mapped to it, so we can
>add transport types like RDMA that don't have an IPPROTO number already).
>
>How do others feel about this? String, or protocol number?
I think the string is the better approach, because it's self-describing
and maps nicely to the transport. Integers are more like the existing
code, however, and (very) easy to deal with.
At the moment, my code implements the integer method and it's
relatively clean, except for one thing. In nfs_show_mount_options()
(fs/nfs/super.c), the kernel attempts to print the mount options'
proto= setting based on the naked IPPROTO. This fails for rdma.
With a string, all it would have to do is copy the string.
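To make the comparison concrete, a rough before/after sketch (the
variable names here are placeholders, not the actual fs/nfs/super.c
code):

/* Integer-keyed: every transport needs its own case, and RDMA has
 * no IPPROTO_* value to key on. */
switch (proto) {
case IPPROTO_TCP:
	seq_puts(m, ",proto=tcp");
	break;
case IPPROTO_UDP:
	seq_puts(m, ",proto=udp");
	break;
default:
	break;		/* rdma: nothing sensible to print */
}

/* String-keyed: just copy the transport's self-describing name. */
seq_printf(m, ",proto=%s", xprt_name);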
Tom.
-------------------------------------------------------------------------
On Mon, Aug 20, 2007 at 11:23:25AM -0500, Tom Tucker wrote:
> @@ -15,6 +15,18 @@ struct svc_xprt {
> const char *xpt_name;
> int (*xpt_recvfrom)(struct svc_rqst *rqstp);
> int (*xpt_sendto)(struct svc_rqst *rqstp);
> + /*
> + * Detach the svc_sock from it's socket, so that the
s/it's/its/
--b.
-------------------------------------------------------------------------
[...snip...]
>>
>> For example,
>>
>> # echo rdma 2050 > /proc/fs/nfsd/portlist
>>
>> Will cause the knfsd daemon to attempt to add a listening endpoint on
>> port 2050 using the 'rdma' transport.
>
> Does this also register the new endpoint with the server's portmapper,
> or is there an additional step required for this? At least for RDMA I
> assume the portmapper registration is not needed, but other transports
> may want it.
>
No, it doesn't, but IMO it needs to. The reason it doesn't is that the
current portmapper registration service doesn't look at the netid; it looks
at the protocol number and maps that to a netid internally. I think this is
broken, but that's what it does. Since RDMA doesn't have an IP protocol
number, I couldn't map it with the current services.
I think the right thing to do is to modify the rpcb_register service to take
a netid and set it in the rpc_msg, and to modify the portmapper to honor the
netid specified.
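Roughly the kind of change I mean is sketched below; the netid-taking
call is purely speculative and the exact signature is up for discussion:

/* Today: registration is keyed by an IP protocol number (roughly): */
rpcb_register(NFS_PROGRAM, nfsvers, IPPROTO_TCP, port, &ok);

/* Speculative netid-aware variant: the rpcbind client would put the
 * netid string into the rpc_msg, and the portmapper would honor it
 * instead of deriving a netid from the protocol number: */
rpcb_register_netid(NFS_PROGRAM, nfsvers, "rdma", port, &ok);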
>> Transports register themselves with the transport switch using a
>> new API that has the following synopsis:
>>
>> int svc_register_transport(struct svc_sock_ops *xprt)
>
> As before, the "sock" in svc_sock_ops might be better left out for what
> is ostensibly a generic data structure.
>
Agreed. I plan to rename all transport-independent services to
svc_xprt_xxxx, and move all transport-independent services currently in
svcsock.c to svc_xprt.c.
>> The text transport name is contained in a field in the xprt structure.
>> A new service has been added as well to take a transport name
>> instead of an IP protocol number to specify the transport on which the
>> listening endpoint is to be created. This function is defined as follows:
>>
>> int svc_create_svcsock(struct svc_serv, char *transport_name,
>> unsigned short port, int flags);
>
> Again, "sock" should be sensibly excised from the function name.
>
>> The existing svc_makesock interface was left to avoid impacts to existing
>> servers. It has been modified to map IP protocol numbers to transport
>> strings.
>
> I think the client and server should use the *same* mechanism for
> matching transports -- either a string *or* a protocol number (not
> necessarily the IPPROTO number, but something mapped to it, so we can
> add transport types like RDMA that don't have an IPPROTO number already).
>
> How do others feel about this? String, or protocol number?
String.
-------------------------------------------------------------------------
On 8/29/07 12:15 PM, "Chuck Lever" <[email protected]> wrote:
> Tom Tucker wrote:
>> Add a transport function that prepares the transport specific header for
>> RPC replies. UDP has none, TCP has a 4B record length. This will
>> allow the RDMA transport to prepare it's variable length reply
>> header as well.
>>
>> Signed-off-by: Tom Tucker <[email protected]>
>> ---
>>
>> include/linux/sunrpc/svcsock.h | 4 ++++
>> net/sunrpc/svc.c | 8 +++++---
>> net/sunrpc/svcsock.c | 15 +++++++++++++++
>> 3 files changed, 24 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
>> index 27c5b1f..1da42c2 100644
>> --- a/include/linux/sunrpc/svcsock.h
>> +++ b/include/linux/sunrpc/svcsock.h
>> @@ -27,6 +27,10 @@ struct svc_xprt {
>> * destruction of a svc_sock.
>> */
>> void (*xpt_free)(struct svc_sock *);
>> + /*
>> + * Prepare any transport-specific RPC header.
>> + */
>> + int (*xpt_prep_reply_hdr)(struct svc_rqst *);
>> };
>
> A shorter name for this one might be nice, but I know it's kind of hard
> to choose one.
>
>> /*
>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> index e673ef9..72a900f 100644
>> --- a/net/sunrpc/svc.c
>> +++ b/net/sunrpc/svc.c
>> @@ -815,9 +815,11 @@ svc_process(struct svc_rqst *rqstp)
>> rqstp->rq_res.tail[0].iov_len = 0;
>> /* Will be turned off only in gss privacy case: */
>> rqstp->rq_sendfile_ok = 1;
>> - /* tcp needs a space for the record length... */
>> - if (rqstp->rq_prot == IPPROTO_TCP)
>> - svc_putnl(resv, 0);
>> +
>> + /* setup response header. */
>> + if (rqstp->rq_sock->sk_xprt->xpt_prep_reply_hdr &&
>> + rqstp->rq_sock->sk_xprt->xpt_prep_reply_hdr(rqstp))
>> + goto dropit;
>
> Although others might disagree, I like making all the transport methods
> mandatory.
>
I think so too. Especially if some are mandatory and some are not.
> The TCP transport will be the common case here, and that will always
> have to do both the existence check and the call.
>
> What is the purpose of the return code? Does the RDMA header
> constructor return something other than zero, and if so, why?
>
The thought was that some transports might need to do something (e.g.,
allocate memory) that could fail. For the current set of transports, it
never fails. This is probably over-design. Make it void?
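For reference, the void variant would look something like this (sketch
only, based on the TCP implementation in this patch):

/* In struct svc_xprt: */
void (*xpt_prep_reply_hdr)(struct svc_rqst *);

/* TCP: reserve space for the 4-byte record marker. */
static void
svc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
{
	struct kvec *resv = &rqstp->rq_res.head[0];

	svc_putnl(resv, 0);
}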
> And finally a dumb style nit... I generally don't like the extra comment
> here -- it restates what is already clear.
>
True. Unless we shorten the function name to something cryptic ;-)
>> rqstp->rq_xid = svc_getu32(argv);
>> svc_putu32(resv, rqstp->rq_xid);
>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>> index 4956c88..ca473ee 100644
>> --- a/net/sunrpc/svcsock.c
>> +++ b/net/sunrpc/svcsock.c
>> @@ -1326,12 +1326,27 @@ svc_tcp_sendto(struct svc_rqst *rqstp)
>> return sent;
>> }
>>
>> +/*
>> + * Setup response header. TCP has a 4B record length field.
>> + */
>> +static int
>> +svc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
>> +{
>> + struct kvec *resv = &rqstp->rq_res.head[0];
>> +
>> + /* tcp needs a space for the record length... */
>> + svc_putnl(resv, 0);
>> +
>> + return 0;
>> +}
>> +
>> static const struct svc_xprt svc_tcp_xprt = {
>> .xpt_name = "tcp",
>> .xpt_recvfrom = svc_tcp_recvfrom,
>> .xpt_sendto = svc_tcp_sendto,
>> .xpt_detach = svc_sock_detach,
>> .xpt_free = svc_sock_free,
>> + .xpt_prep_reply_hdr = svc_tcp_prep_reply_hdr,
>> };
>>
>> static void
>>
-------------------------------------------------------------------------
On 8/29/07 12:40 PM, "Chuck Lever" <[email protected]> wrote:
> Tom Tucker wrote:
>> Store the max payload supported by the transport in the switch
>> instead of reaching into the socket since not all transports
>> (RDMA) have a socket.
>>
>> Signed-off-by: Greg Banks <[email protected]>
>> Signed-off-by: Peter Leckie <[email protected]>
>> Signed-off-by: Tom Tucker <[email protected]>
>> ---
>>
>> include/linux/sunrpc/svcsock.h | 6 ++++++
>> net/sunrpc/svc.c | 5 ++---
>> net/sunrpc/svcsock.c | 2 ++
>> 3 files changed, 10 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
>> index 3faa95c..4e24e6d 100644
>> --- a/include/linux/sunrpc/svcsock.h
>> +++ b/include/linux/sunrpc/svcsock.h
>> @@ -35,6 +35,12 @@ struct svc_xprt {
>> * Return 1 if sufficient space to write reply to network.
>> */
>> int (*xpt_has_wspace)(struct svc_sock *);
>> + /*
>> + * Stores the largest payload (i.e. READ, WRITE or READDIR
>> + * data length not including NFS headers) supported by the
>> + * svc_sock.
>> + */
>> + u32 xpt_max_payload;
>> };
>
> Only worth 2 cents, but perhaps you could make a separate section within
> svc_xprt for these variables, rather than mixing them with the xpt
> methods. I think separating these is more traditional Linux style.
>
Agreed. I'll change it.
> I notice that the type of the client's rpc_xprt->max_payload is size_t,
> which may be the wrong type -- u32 might be better.
>
Not sure. I simply defined it to match the svc_max_payload function
signature.
>> /*
>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> index 72a900f..41f9ee0 100644
>> --- a/net/sunrpc/svc.c
>> +++ b/net/sunrpc/svc.c
>> @@ -1036,10 +1036,9 @@ err_bad:
>> */
>> u32 svc_max_payload(const struct svc_rqst *rqstp)
>> {
>> - int max = RPCSVC_MAXPAYLOAD_TCP;
>> + struct svc_sock *svsk = rqstp->rq_sock;
>> + int max = svsk->sk_xprt->xpt_max_payload;
>>
>> - if (rqstp->rq_sock->sk_sock->type == SOCK_DGRAM)
>> - max = RPCSVC_MAXPAYLOAD_UDP;
>> if (rqstp->rq_server->sv_max_payload < max)
>> max = rqstp->rq_server->sv_max_payload;
>> return max;
>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>> index b16dad4..0dc94a8 100644
>> --- a/net/sunrpc/svcsock.c
>> +++ b/net/sunrpc/svcsock.c
>> @@ -897,6 +897,7 @@ static const struct svc_xprt svc_udp_xpr
>> .xpt_detach = svc_sock_detach,
>> .xpt_free = svc_sock_free,
>> .xpt_has_wspace = svc_udp_has_wspace,
>> + .xpt_max_payload = RPCSVC_MAXPAYLOAD_UDP,
>> };
>>
>> static void
>> @@ -1368,6 +1369,7 @@ static const struct svc_xprt svc_tcp_xpr
>> .xpt_free = svc_sock_free,
>> .xpt_prep_reply_hdr = svc_tcp_prep_reply_hdr,
>> .xpt_has_wspace = svc_tcp_has_wspace,
>> + .xpt_max_payload = RPCSVC_MAXPAYLOAD_TCP,
>> };
>>
>> static void
-------------------------------------------------------------------------
On Wed, 2007-08-29 at 15:29 -0400, Chuck Lever wrote:
> Tom Tucker wrote:
> > The RDMA transport includes an ONCRDMA header that precedes the RPC
> > message. This header needs to be saved in addition to the RPC message
> > itself. The RPC transport uses page swapping to implement copy avoidance.
> ^^^
Oops. I'll fix that comment if this function survives the review.
> I assume you mean the "RDMA" transport here.
>
Yes
> Not having looked closely at the RDMA transport, it may be naive to ask
> if any of the RDMA page swapping implementation would be useful for the
> socket transport implementation?
>
The only one I can think of is for NFSv4 on RDMA with sessions. Today, the
defer implementation drops anything but simple, header-only (no pagelist)
RPCs. Concern had been expressed about the negative performance
implications for NFSv4 with sessions and whether this could trigger all
kinds of hideous replay scenarios. I honestly don't know, and I also
don't know all the conditions under which an RPC can be deferred.
I do know that this made writing my transport driver easier because it
gave me a way to squirrel away data that was otherwise hidden from the
svc core code. Trond had mentioned adding a 'reserve' field that could
be used by any transport for this purpose. The function clearly gives us
more flexibility, but I think the jury is still out on whether or not
the flexibility is needed. All that said, I don't see it as a huge issue
one way or the other.
> > These transport dependencies are hidden in the xpt_defer routine allowing
> > the bulk of the deferral processing to remain in transport independent
> > code.
>
> You may have two separate patches here: one that exports svc_revisit and
> one that adds an xpt_defer method.
>
> > Signed-off-by: Tom Tucker <[email protected]>
> > ---
> >
> > include/linux/sunrpc/svcsock.h | 5 +++++
> > net/sunrpc/svcsock.c | 5 +++--
> > 2 files changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> > index a920e9b..145c82b 100644
> > --- a/include/linux/sunrpc/svcsock.h
> > +++ b/include/linux/sunrpc/svcsock.h
> > @@ -51,6 +51,10 @@ struct svc_xprt {
> > * Accept a pending connection, for connection-oriented transports
> > */
> > int (*xpt_accept)(struct svc_sock *svsk);
> > +
> > + /* RPC defer routine. */
> > + struct cache_deferred_req *(*xpt_defer)(struct cache_req *req);
> > +
> > /* Transport list link */
> > struct list_head xpt_list;
> > };
> > @@ -138,6 +142,7 @@ void svc_sock_add_connection(struct svc
> > void svc_sock_add_listener(struct svc_sock *);
> > /* Add an initialised connectionless svc_sock to the server */
> > void svc_sock_add_connectionless(struct svc_sock *);
> > +void svc_revisit(struct cache_deferred_req *dreq, int too_many);
> >
> > /*
> > * svc_makesock socket characteristics
> > diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> > index 03ce7e9..b89c577 100644
> > --- a/net/sunrpc/svcsock.c
> > +++ b/net/sunrpc/svcsock.c
> > @@ -1651,7 +1651,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
> > clear_bit(SK_OLD, &svsk->sk_flags);
> >
> > rqstp->rq_secure = svc_port_is_privileged(svc_addr(rqstp));
> > - rqstp->rq_chandle.defer = svc_defer;
> > + rqstp->rq_chandle.defer = svsk->sk_xprt->xpt_defer;
>
> Where is xpt_defer set? Are we calling a NULL pointer here?
>
Merge error. Good catch. The RDMA transport does it right.
> > if (serv->sv_stats)
> > serv->sv_stats->netcnt++;
> > @@ -2116,7 +2116,7 @@ EXPORT_SYMBOL_GPL(svc_create_svcsock);
> > * Handle defer and revisit of requests
> > */
> >
> > -static void svc_revisit(struct cache_deferred_req *dreq, int too_many)
> > +void svc_revisit(struct cache_deferred_req *dreq, int too_many)
> > {
> > struct svc_deferred_req *dr = container_of(dreq, struct svc_deferred_req, handle);
> > struct svc_sock *svsk;
> > @@ -2136,6 +2136,7 @@ static void svc_revisit(struct cache_def
> > svc_sock_enqueue(svsk);
> > svc_sock_put(svsk);
> > }
> > +EXPORT_SYMBOL_GPL(svc_revisit);
> >
> > static struct cache_deferred_req *
> > svc_defer(struct cache_req *req)
-------------------------------------------------------------------------
On Wed, 2007-08-29 at 15:21 -0400, Chuck Lever wrote:
> Tom Tucker wrote:
> > Add transport function that makes the creation of a listening endpoint
> > transport independent.
>
> I'm beginning to think svc_makesock should be renamed. :-)
>
> I'm not sure about the function naming here. I can't easily distinguish
> between the generic and socket-specific creating functions.
>
I _promise_ to clean the names up in the next patch ;-) But this one has
a special issue because it's called all over the place. That's why I
added the new API: the idea was that we could incrementally patch other
svc services as needed, but otherwise keep the svc-internal changes
opaque to svc clients.
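For comparison, the two call styles side by side (port numbers and flags
are only for illustration):

/* Existing callers are untouched: */
error = svc_makesock(serv, IPPROTO_TCP, 2049, SVC_SOCK_DEFAULTS);

/* New callers -- including the portlist write path -- name the transport: */
error = svc_create_svcsock(serv, "rdma", 2050, SVC_SOCK_DEFAULTS);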
> > Signed-off-by: Tom Tucker <[email protected]>
> > ---
> >
> > include/linux/sunrpc/svcsock.h | 5 +++
> > net/sunrpc/svcsock.c | 65 +++++++++++++++++++++++++++++++++++++---
> > 2 files changed, 65 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> > index cc911ab..e2d0256 100644
> > --- a/include/linux/sunrpc/svcsock.h
> > +++ b/include/linux/sunrpc/svcsock.h
> > @@ -14,6 +14,10 @@ #include <linux/sunrpc/svc.h>
> > struct svc_xprt {
> > const char *xpt_name;
> > struct module *xpt_owner;
> > + /* Create an svc socket for this transport */
> > + int (*xpt_create_svc)(struct svc_serv *,
> > + struct sockaddr *,
> > + int);
> > int (*xpt_recvfrom)(struct svc_rqst *rqstp);
> > int (*xpt_sendto)(struct svc_rqst *rqstp);
> > /*
> > @@ -109,6 +113,7 @@ #define SK_LISTENER 11 /* listener (e.
> > int svc_register_transport(struct svc_xprt *xprt);
> > int svc_unregister_transport(struct svc_xprt *xprt);
> > int svc_makesock(struct svc_serv *, int, unsigned short, int flags);
> > +int svc_create_svcsock(struct svc_serv *, char *, unsigned short, int);
> > void svc_force_close_socket(struct svc_sock *);
> > int svc_recv(struct svc_rqst *, long);
> > int svc_send(struct svc_rqst *);
> > diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> > index 276737e..44d6484 100644
> > --- a/net/sunrpc/svcsock.c
> > +++ b/net/sunrpc/svcsock.c
> > @@ -87,6 +87,8 @@ static void svc_close_socket(struct svc
> > static void svc_sock_detach(struct svc_sock *);
> > static void svc_sock_free(struct svc_sock *);
> >
> > +static int
> > +svc_create_socket(struct svc_serv *, int, struct sockaddr *, int, int);
> > static struct svc_deferred_req *svc_deferred_dequeue(struct svc_sock *svsk);
> > static int svc_deferred_recv(struct svc_rqst *rqstp);
> > static struct cache_deferred_req *svc_defer(struct cache_req *req);
> > @@ -434,6 +436,7 @@ __svc_sock_put(struct svc_sock *svsk)
> >
> > if (svsk->sk_info_authunix != NULL)
> > svcauth_unix_info_release(svsk->sk_info_authunix);
> > + module_put(svsk->sk_xprt->xpt_owner);
> > svsk->sk_xprt->xpt_free(svsk);
> > }
> > EXPORT_SYMBOL_GPL(__svc_sock_put);
> > @@ -961,9 +964,17 @@ svc_udp_has_wspace(struct svc_sock *svsk
> > return svc_sock_has_write_space(svsk, sock_wspace(svsk->sk_sk));
> > }
> >
> > +static int
> > +svc_udp_create_svc(struct svc_serv *serv, struct sockaddr *sa, int flags)
> > +{
> > + return svc_create_socket(serv, IPPROTO_UDP, sa,
> > + sizeof(struct sockaddr_in), flags);
> > +}
> > +
> > static struct svc_xprt svc_udp_xprt = {
> > .xpt_name = "udp",
> > .xpt_owner = THIS_MODULE,
> > + .xpt_create_svc = svc_udp_create_svc,
> > .xpt_recvfrom = svc_udp_recvfrom,
> > .xpt_sendto = svc_udp_sendto,
> > .xpt_detach = svc_sock_detach,
> > @@ -1421,9 +1432,17 @@ svc_tcp_has_wspace(struct svc_sock *svsk
> > return svc_sock_has_write_space(svsk, sk_stream_wspace(svsk->sk_sk));
> > }
> >
> > +static int
> > +svc_tcp_create_svc(struct svc_serv *serv, struct sockaddr *sa, int flags)
> > +{
> > + return svc_create_socket(serv, IPPROTO_TCP, sa,
> > + sizeof(struct sockaddr_in), flags);
> > +}
> > +
> > static struct svc_xprt svc_tcp_xprt = {
> > .xpt_name = "tcp",
> > .xpt_owner = THIS_MODULE,
> > + .xpt_create_svc = svc_tcp_create_svc,
> > .xpt_recvfrom = svc_tcp_recvfrom,
> > .xpt_sendto = svc_tcp_sendto,
> > .xpt_detach = svc_sock_detach,
> > @@ -1606,6 +1625,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
> > svc_delete_socket(svsk);
> > } else if (test_bit(SK_LISTENER, &svsk->sk_flags)) {
> > svsk->sk_xprt->xpt_accept(svsk);
> > + __module_get(svsk->sk_xprt->xpt_owner);
> > svc_sock_received(svsk);
> > } else {
> > dprintk("svc: server %p, pool %u, socket %p, inuse=%d\n",
> > @@ -1885,7 +1905,7 @@ EXPORT_SYMBOL_GPL(svc_addsock);
> > * Create socket for RPC service.
> > */
> > static int svc_create_socket(struct svc_serv *serv, int protocol,
> > - struct sockaddr *sin, int len, int flags)
> > + struct sockaddr *sin, int len, int flags)
> > {
> > struct svc_sock *svsk;
> > struct socket *sock;
> > @@ -2037,18 +2057,53 @@ void svc_force_close_socket(struct svc_s
> > *
> > */
> > int svc_makesock(struct svc_serv *serv, int protocol, unsigned short port,
> > - int flags)
> > + int flags)
> > {
> > + dprintk("svc: creating socket proto = %d\n", protocol);
> > + switch (protocol) {
> > + case IPPROTO_TCP:
> > + return svc_create_svcsock(serv, "tcp", port, flags);
> > + case IPPROTO_UDP:
> > + return svc_create_svcsock(serv, "udp", port, flags);
> > + default:
> > + return -EINVAL;
> > + }
> > +}
> > +
> > +int svc_create_svcsock(struct svc_serv *serv, char *transport, unsigned short port,
> > + int flags)
> > +{
> > + int ret = -ENOENT;
> > + struct list_head *le;
> > struct sockaddr_in sin = {
> > .sin_family = AF_INET,
> > .sin_addr.s_addr = INADDR_ANY,
> > .sin_port = htons(port),
> > };
> > + dprintk("svc: creating transport socket %s[%d]\n", transport, port);
> > + spin_lock(&svc_transport_lock);
> > + list_for_each(le, &svc_transport_list) {
> > + struct svc_xprt *xprt =
> > + list_entry(le, struct svc_xprt, xpt_list);
> >
> > - dprintk("svc: creating socket proto = %d\n", protocol);
> > - return svc_create_socket(serv, protocol, (struct sockaddr *) &sin,
> > - sizeof(sin), flags);
> > + if (strcmp(transport, xprt->xpt_name)==0) {
>
> nit: I like "xpt_name) == 0)" instead.
>
ah, took me a second to get this one ... white space ... ok
> > + spin_unlock(&svc_transport_lock);
> > + if (try_module_get(xprt->xpt_owner)) {
> > + ret = xprt->xpt_create_svc(serv,
> > + (struct sockaddr*)&sin,
> > + flags);
> > + if (ret < 0)
> > + module_put(xprt->xpt_owner);
> > + goto out;
> > + }
> > + }
> > + }
> > + spin_unlock(&svc_transport_lock);
> > + dprintk("svc: transport %s not found\n", transport);
> > + out:
> > + return ret;
> > }
> > +EXPORT_SYMBOL_GPL(svc_create_svcsock);
> >
> > /*
> > * Handle defer and revisit of requests
>
-------------------------------------------------------------------------
Good point.
Actually, there are a number of interfaces like this. I guess we could
move the generic transport interfaces to svc_xprt.c. Then svcsock.c
would contain the tcp/udp transport driver.
On Tue, 2007-08-21 at 12:03 -0400, Chuck Lever wrote:
> Tom Tucker wrote:
> > Export svc_sock_enqueue() and svc_sock_received() so they
> > can be used by sunrpc server transport implementations
> > (even future modular ones).
> >
> > Signed-off-by: Greg Banks <[email protected]>
> > Signed-off-by: Peter Leckie <[email protected]>
> > Signed-off-by: Tom Tucker <[email protected]>
> > ---
> >
> > include/linux/sunrpc/svcsock.h | 2 ++
> > net/sunrpc/svcsock.c | 7 ++++---
> > 2 files changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
> > index 4e24e6d..0145057 100644
> > --- a/include/linux/sunrpc/svcsock.h
> > +++ b/include/linux/sunrpc/svcsock.h
> > @@ -108,6 +108,8 @@ int svc_addsock(struct svc_serv *serv,
> > int fd,
> > char *name_return,
> > int *proto);
> > +void svc_sock_enqueue(struct svc_sock *svsk);
> > +void svc_sock_received(struct svc_sock *svsk);
>
> If these aren't socket-specific, then I would rename them, and move them
> to a generic source file instead of keeping them in svcsock.c and svcsock.h.
-------------------------------------------------------------------------
Start moving to a transport switch for knfsd. Add a svc_xprt
switch and move the sk_sendto and sk_recvfrom function
pointers into it.
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Peter Leckie <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 9 +++++++--
net/sunrpc/svcsock.c | 22 ++++++++++++++++------
2 files changed, 23 insertions(+), 8 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index e21dd93..4792ed6 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -11,6 +11,12 @@ #define SUNRPC_SVCSOCK_H
#include <linux/sunrpc/svc.h>
+struct svc_xprt {
+ const char *xpt_name;
+ int (*xpt_recvfrom)(struct svc_rqst *rqstp);
+ int (*xpt_sendto)(struct svc_rqst *rqstp);
+};
+
/*
* RPC server socket.
*/
@@ -43,8 +49,7 @@ #define SK_DETACHED 10 /* detached fro
* be revisted */
struct mutex sk_mutex; /* to serialize sending data */
- int (*sk_recvfrom)(struct svc_rqst *rqstp);
- int (*sk_sendto)(struct svc_rqst *rqstp);
+ const struct svc_xprt *sk_xprt;
/* We keep the old state_change and data_ready CB's here */
void (*sk_ostate)(struct sock *);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 5baf48d..789d94a 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -885,6 +885,12 @@ svc_udp_sendto(struct svc_rqst *rqstp)
return error;
}
+static const struct svc_xprt svc_udp_xprt = {
+ .xpt_name = "udp",
+ .xpt_recvfrom = svc_udp_recvfrom,
+ .xpt_sendto = svc_udp_sendto,
+};
+
static void
svc_udp_init(struct svc_sock *svsk)
{
@@ -893,8 +899,7 @@ svc_udp_init(struct svc_sock *svsk)
svsk->sk_sk->sk_data_ready = svc_udp_data_ready;
svsk->sk_sk->sk_write_space = svc_write_space;
- svsk->sk_recvfrom = svc_udp_recvfrom;
- svsk->sk_sendto = svc_udp_sendto;
+ svsk->sk_xprt = &svc_udp_xprt;
/* initialise setting must have enough space to
* receive and respond to one request.
@@ -1322,14 +1327,19 @@ svc_tcp_sendto(struct svc_rqst *rqstp)
return sent;
}
+static const struct svc_xprt svc_tcp_xprt = {
+ .xpt_name = "tcp",
+ .xpt_recvfrom = svc_tcp_recvfrom,
+ .xpt_sendto = svc_tcp_sendto,
+};
+
static void
svc_tcp_init(struct svc_sock *svsk)
{
struct sock *sk = svsk->sk_sk;
struct tcp_sock *tp = tcp_sk(sk);
- svsk->sk_recvfrom = svc_tcp_recvfrom;
- svsk->sk_sendto = svc_tcp_sendto;
+ svsk->sk_xprt = &svc_tcp_xprt;
if (sk->sk_state == TCP_LISTEN) {
dprintk("setting up TCP socket for listening\n");
@@ -1477,7 +1487,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
dprintk("svc: server %p, pool %u, socket %p, inuse=%d\n",
rqstp, pool->sp_id, svsk, atomic_read(&svsk->sk_inuse));
- len = svsk->sk_recvfrom(rqstp);
+ len = svsk->sk_xprt->xpt_recvfrom(rqstp);
dprintk("svc: got len=%d\n", len);
/* No data, incomplete (TCP) read, or accept() */
@@ -1537,7 +1547,7 @@ svc_send(struct svc_rqst *rqstp)
if (test_bit(SK_DEAD, &svsk->sk_flags))
len = -ENOTCONN;
else
- len = svsk->sk_sendto(rqstp);
+ len = svsk->sk_xprt->xpt_sendto(rqstp);
mutex_unlock(&svsk->sk_mutex);
svc_sock_release(rqstp);
-------------------------------------------------------------------------
Add a transport switch function to ensure that no additional
receive-ready events will be delivered by the transport (xpt_detach),
and another to free the memory associated with the transport (xpt_free).
Change svc_delete_socket() and svc_sock_put() to use the new
transport functions.
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Peter Leckie <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 12 ++++++++++
net/sunrpc/svcsock.c | 50 +++++++++++++++++++++++++++++++++-------
2 files changed, 53 insertions(+), 9 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 4792ed6..27c5b1f 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -15,6 +15,18 @@ struct svc_xprt {
const char *xpt_name;
int (*xpt_recvfrom)(struct svc_rqst *rqstp);
int (*xpt_sendto)(struct svc_rqst *rqstp);
+ /*
+ * Detach the svc_sock from it's socket, so that the
+ * svc_sock will not be enqueued any more. This is
+ * the first stage in the destruction of a svc_sock.
+ */
+ void (*xpt_detach)(struct svc_sock *);
+ /*
+ * Release all network-level resources held by the svc_sock,
+ * and the svc_sock itself. This is the final stage in the
+ * destruction of a svc_sock.
+ */
+ void (*xpt_free)(struct svc_sock *);
};
/*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 789d94a..4956c88 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -84,6 +84,8 @@ static void svc_udp_data_ready(struct s
static int svc_udp_recvfrom(struct svc_rqst *);
static int svc_udp_sendto(struct svc_rqst *);
static void svc_close_socket(struct svc_sock *svsk);
+static void svc_sock_detach(struct svc_sock *);
+static void svc_sock_free(struct svc_sock *);
static struct svc_deferred_req *svc_deferred_dequeue(struct svc_sock *svsk);
static int svc_deferred_recv(struct svc_rqst *rqstp);
@@ -378,14 +380,9 @@ svc_sock_put(struct svc_sock *svsk)
if (atomic_dec_and_test(&svsk->sk_inuse)) {
BUG_ON(! test_bit(SK_DEAD, &svsk->sk_flags));
- dprintk("svc: releasing dead socket\n");
- if (svsk->sk_sock->file)
- sockfd_put(svsk->sk_sock);
- else
- sock_release(svsk->sk_sock);
if (svsk->sk_info_authunix != NULL)
svcauth_unix_info_release(svsk->sk_info_authunix);
- kfree(svsk);
+ svsk->sk_xprt->xpt_free(svsk);
}
}
@@ -889,6 +886,8 @@ static const struct svc_xprt svc_udp_xpr
.xpt_name = "udp",
.xpt_recvfrom = svc_udp_recvfrom,
.xpt_sendto = svc_udp_sendto,
+ .xpt_detach = svc_sock_detach,
+ .xpt_free = svc_sock_free,
};
static void
@@ -1331,6 +1330,8 @@ static const struct svc_xprt svc_tcp_xpr
.xpt_name = "tcp",
.xpt_recvfrom = svc_tcp_recvfrom,
.xpt_sendto = svc_tcp_sendto,
+ .xpt_detach = svc_sock_detach,
+ .xpt_free = svc_sock_free,
};
static void
@@ -1770,6 +1771,38 @@ bummer:
}
/*
+ * Detach the svc_sock from the socket so that no
+ * more callbacks occur.
+ */
+static void
+svc_sock_detach(struct svc_sock *svsk)
+{
+ struct sock *sk = svsk->sk_sk;
+
+ dprintk("svc: svc_sock_detach(%p)\n", svsk);
+
+ /* put back the old socket callbacks */
+ sk->sk_state_change = svsk->sk_ostate;
+ sk->sk_data_ready = svsk->sk_odata;
+ sk->sk_write_space = svsk->sk_owspace;
+}
+
+/*
+ * Free the svc_sock's socket resources and the svc_sock itself.
+ */
+static void
+svc_sock_free(struct svc_sock *svsk)
+{
+ dprintk("svc: svc_sock_free(%p)\n", svsk);
+
+ if (svsk->sk_sock->file)
+ sockfd_put(svsk->sk_sock);
+ else
+ sock_release(svsk->sk_sock);
+ kfree(svsk);
+}
+
+/*
* Remove a dead socket
*/
static void
@@ -1783,9 +1816,8 @@ svc_delete_socket(struct svc_sock *svsk)
serv = svsk->sk_server;
sk = svsk->sk_sk;
- sk->sk_state_change = svsk->sk_ostate;
- sk->sk_data_ready = svsk->sk_odata;
- sk->sk_write_space = svsk->sk_owspace;
+ if (svsk->sk_xprt->xpt_detach)
+ svsk->sk_xprt->xpt_detach(svsk);
spin_lock_bh(&serv->sv_lock);
-------------------------------------------------------------------------
Store the max payload supported by the transport in the switch
instead of reaching into the socket since not all transports
(RDMA) have a socket.
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Peter Leckie <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 6 ++++++
net/sunrpc/svc.c | 5 ++---
net/sunrpc/svcsock.c | 2 ++
3 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 3faa95c..4e24e6d 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -35,6 +35,12 @@ struct svc_xprt {
* Return 1 if sufficient space to write reply to network.
*/
int (*xpt_has_wspace)(struct svc_sock *);
+ /*
+ * Stores the largest payload (i.e. READ, WRITE or READDIR
+ * data length not including NFS headers) supported by the
+ * svc_sock.
+ */
+ u32 xpt_max_payload;
};
/*
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 72a900f..41f9ee0 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -1036,10 +1036,9 @@ err_bad:
*/
u32 svc_max_payload(const struct svc_rqst *rqstp)
{
- int max = RPCSVC_MAXPAYLOAD_TCP;
+ struct svc_sock *svsk = rqstp->rq_sock;
+ int max = svsk->sk_xprt->xpt_max_payload;
- if (rqstp->rq_sock->sk_sock->type == SOCK_DGRAM)
- max = RPCSVC_MAXPAYLOAD_UDP;
if (rqstp->rq_server->sv_max_payload < max)
max = rqstp->rq_server->sv_max_payload;
return max;
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index b16dad4..0dc94a8 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -897,6 +897,7 @@ static const struct svc_xprt svc_udp_xpr
.xpt_detach = svc_sock_detach,
.xpt_free = svc_sock_free,
.xpt_has_wspace = svc_udp_has_wspace,
+ .xpt_max_payload = RPCSVC_MAXPAYLOAD_UDP,
};
static void
@@ -1368,6 +1369,7 @@ static const struct svc_xprt svc_tcp_xpr
.xpt_free = svc_sock_free,
.xpt_prep_reply_hdr = svc_tcp_prep_reply_hdr,
.xpt_has_wspace = svc_tcp_has_wspace,
+ .xpt_max_payload = RPCSVC_MAXPAYLOAD_TCP,
};
static void
-------------------------------------------------------------------------
Add a transport function that prepares the transport-specific header for
RPC replies. UDP has none; TCP has a 4-byte record length. This will
allow the RDMA transport to prepare its variable-length reply
header as well.
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 4 ++++
net/sunrpc/svc.c | 8 +++++---
net/sunrpc/svcsock.c | 15 +++++++++++++++
3 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 27c5b1f..1da42c2 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -27,6 +27,10 @@ struct svc_xprt {
* destruction of a svc_sock.
*/
void (*xpt_free)(struct svc_sock *);
+ /*
+ * Prepare any transport-specific RPC header.
+ */
+ int (*xpt_prep_reply_hdr)(struct svc_rqst *);
};
/*
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index e673ef9..72a900f 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -815,9 +815,11 @@ svc_process(struct svc_rqst *rqstp)
rqstp->rq_res.tail[0].iov_len = 0;
/* Will be turned off only in gss privacy case: */
rqstp->rq_sendfile_ok = 1;
- /* tcp needs a space for the record length... */
- if (rqstp->rq_prot == IPPROTO_TCP)
- svc_putnl(resv, 0);
+
+ /* setup response header. */
+ if (rqstp->rq_sock->sk_xprt->xpt_prep_reply_hdr &&
+ rqstp->rq_sock->sk_xprt->xpt_prep_reply_hdr(rqstp))
+ goto dropit;
rqstp->rq_xid = svc_getu32(argv);
svc_putu32(resv, rqstp->rq_xid);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 4956c88..ca473ee 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1326,12 +1326,27 @@ svc_tcp_sendto(struct svc_rqst *rqstp)
return sent;
}
+/*
+ * Setup response header. TCP has a 4B record length field.
+ */
+static int
+svc_tcp_prep_reply_hdr(struct svc_rqst *rqstp)
+{
+ struct kvec *resv = &rqstp->rq_res.head[0];
+
+ /* tcp needs a space for the record length... */
+ svc_putnl(resv, 0);
+
+ return 0;
+}
+
static const struct svc_xprt svc_tcp_xprt = {
.xpt_name = "tcp",
.xpt_recvfrom = svc_tcp_recvfrom,
.xpt_sendto = svc_tcp_sendto,
.xpt_detach = svc_sock_detach,
.xpt_free = svc_sock_free,
+ .xpt_prep_reply_hdr = svc_tcp_prep_reply_hdr,
};
static void
-------------------------------------------------------------------------
Add inline svc_sock_get() so that service transport code will not
need to manipulate sk_inuse directly. Also, make svc_sock_put()
available so that transport code outside svcsock.c can use it.
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 15 +++++++++++++++
net/sunrpc/svcsock.c | 29 ++++++++++++++---------------
2 files changed, 29 insertions(+), 15 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index ea8b62b..9f37f30 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -115,6 +115,7 @@ int svc_addsock(struct svc_serv *serv,
int *proto);
void svc_sock_enqueue(struct svc_sock *svsk);
void svc_sock_received(struct svc_sock *svsk);
+void __svc_sock_put(struct svc_sock *svsk);
/*
* svc_makesock socket characteristics
@@ -123,4 +124,18 @@ #define SVC_SOCK_DEFAULTS (0U)
#define SVC_SOCK_ANONYMOUS (1U << 0) /* don't register with pmap */
#define SVC_SOCK_TEMPORARY (1U << 1) /* flag socket as temporary */
+/*
+ * Take and drop a temporary reference count on the svc_sock.
+ */
+static inline void svc_sock_get(struct svc_sock *svsk)
+{
+ atomic_inc(&svsk->sk_inuse);
+}
+
+static inline void svc_sock_put(struct svc_sock *svsk)
+{
+ if (atomic_dec_and_test(&svsk->sk_inuse))
+ __svc_sock_put(svsk);
+}
+
#endif /* SUNRPC_SVCSOCK_H */
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index dcb5c7a..02f682a 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -273,7 +273,7 @@ svc_sock_enqueue(struct svc_sock *svsk)
"svc_sock_enqueue: server %p, rq_sock=%p!\n",
rqstp, rqstp->rq_sock);
rqstp->rq_sock = svsk;
- atomic_inc(&svsk->sk_inuse);
+ svc_sock_get(svsk);
rqstp->rq_reserved = serv->sv_max_mesg;
atomic_add(rqstp->rq_reserved, &svsk->sk_reserved);
BUG_ON(svsk->sk_pool != pool);
@@ -351,17 +351,16 @@ void svc_reserve(struct svc_rqst *rqstp,
/*
* Release a socket after use.
*/
-static inline void
-svc_sock_put(struct svc_sock *svsk)
+void
+__svc_sock_put(struct svc_sock *svsk)
{
- if (atomic_dec_and_test(&svsk->sk_inuse)) {
- BUG_ON(! test_bit(SK_DEAD, &svsk->sk_flags));
+ BUG_ON(! test_bit(SK_DEAD, &svsk->sk_flags));
- if (svsk->sk_info_authunix != NULL)
- svcauth_unix_info_release(svsk->sk_info_authunix);
- svsk->sk_xprt->xpt_free(svsk);
- }
+ if (svsk->sk_info_authunix != NULL)
+ svcauth_unix_info_release(svsk->sk_info_authunix);
+ svsk->sk_xprt->xpt_free(svsk);
}
+EXPORT_SYMBOL_GPL(__svc_sock_put);
static void
svc_sock_release(struct svc_rqst *rqstp)
@@ -1109,7 +1108,7 @@ svc_tcp_accept(struct svc_sock *svsk)
struct svc_sock,
sk_list);
set_bit(SK_CLOSE, &svsk->sk_flags);
- atomic_inc(&svsk->sk_inuse);
+ svc_sock_get(svsk);
}
spin_unlock_bh(&serv->sv_lock);
@@ -1481,7 +1480,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
spin_lock_bh(&pool->sp_lock);
if ((svsk = svc_sock_dequeue(pool)) != NULL) {
rqstp->rq_sock = svsk;
- atomic_inc(&svsk->sk_inuse);
+ svc_sock_get(svsk);
rqstp->rq_reserved = serv->sv_max_mesg;
atomic_add(rqstp->rq_reserved, &svsk->sk_reserved);
} else {
@@ -1620,7 +1619,7 @@ svc_age_temp_sockets(unsigned long closu
continue;
if (atomic_read(&svsk->sk_inuse) || test_bit(SK_BUSY, &svsk->sk_flags))
continue;
- atomic_inc(&svsk->sk_inuse);
+ svc_sock_get(svsk);
list_move(le, &to_be_aged);
set_bit(SK_CLOSE, &svsk->sk_flags);
set_bit(SK_DETACHED, &svsk->sk_flags);
@@ -1868,7 +1867,7 @@ svc_delete_socket(struct svc_sock *svsk)
*/
if (!test_and_set_bit(SK_DEAD, &svsk->sk_flags)) {
BUG_ON(atomic_read(&svsk->sk_inuse)<2);
- atomic_dec(&svsk->sk_inuse);
+ svc_sock_put(svsk);
if (test_bit(SK_TEMP, &svsk->sk_flags))
serv->sv_tmpcnt--;
}
@@ -1883,7 +1882,7 @@ static void svc_close_socket(struct svc_
/* someone else will have to effect the close */
return;
- atomic_inc(&svsk->sk_inuse);
+ svc_sock_get(svsk);
svc_delete_socket(svsk);
clear_bit(SK_BUSY, &svsk->sk_flags);
svc_sock_put(svsk);
@@ -1976,7 +1975,7 @@ svc_defer(struct cache_req *req)
dr->argslen = rqstp->rq_arg.len >> 2;
memcpy(dr->args, rqstp->rq_arg.head[0].iov_base-skip, dr->argslen<<2);
}
- atomic_inc(&rqstp->rq_sock->sk_inuse);
+ svc_sock_get(rqstp->rq_sock);
dr->svsk = rqstp->rq_sock;
dr->handle.revisit = svc_revisit;
-------------------------------------------------------------------------
Centralise the handling of the SK_CONN bit so that future
server transport implementations will be easier to write
correctly. Also, the xpt_recvfrom method no longer needs to
check for SK_CONN; that is now handled in core code, which
calls a new xpt_accept method.
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Peter Leckie <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 4 ++++
net/sunrpc/svcsock.c | 24 +++++++++++-------------
2 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 0145057..7663578 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -41,6 +41,10 @@ struct svc_xprt {
* svc_sock.
*/
u32 xpt_max_payload;
+ /*
+ * Accept a pending connection, for connection-oriented transports
+ */
+ int (*xpt_accept)(struct svc_sock *svsk);
};
/*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 5c3a794..94eb921 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1012,7 +1012,7 @@ static inline int svc_port_is_privileged
/*
* Accept a TCP connection
*/
-static void
+static int
svc_tcp_accept(struct svc_sock *svsk)
{
struct sockaddr_storage addr;
@@ -1021,12 +1021,12 @@ svc_tcp_accept(struct svc_sock *svsk)
struct socket *sock = svsk->sk_sock;
struct socket *newsock;
struct svc_sock *newsvsk;
- int err, slen;
+ int err = 0, slen;
char buf[RPC_MAX_ADDRBUFLEN];
dprintk("svc: tcp_accept %p sock %p\n", svsk, sock);
if (!sock)
- return;
+ return -EINVAL;
clear_bit(SK_CONN, &svsk->sk_flags);
err = kernel_accept(sock, &newsock, O_NONBLOCK);
@@ -1037,9 +1037,8 @@ svc_tcp_accept(struct svc_sock *svsk)
else if (err != -EAGAIN && net_ratelimit())
printk(KERN_WARNING "%s: accept failed (err %d)!\n",
serv->sv_name, -err);
- return;
+ return err;
}
-
set_bit(SK_CONN, &svsk->sk_flags);
svc_sock_enqueue(svsk);
@@ -1124,11 +1123,11 @@ svc_tcp_accept(struct svc_sock *svsk)
if (serv->sv_stats)
serv->sv_stats->nettcpconn++;
- return;
+ return 0;
failed:
sock_release(newsock);
- return;
+ return err;
}
/*
@@ -1153,12 +1152,6 @@ svc_tcp_recvfrom(struct svc_rqst *rqstp)
return svc_deferred_recv(rqstp);
}
- if (svsk->sk_sk->sk_state == TCP_LISTEN) {
- svc_tcp_accept(svsk);
- svc_sock_received(svsk);
- return 0;
- }
-
if (test_and_clear_bit(SK_CHNGBUF, &svsk->sk_flags))
/* sndbuf needs to have room for one request
* per thread, otherwise we can stall even when the
@@ -1361,6 +1354,7 @@ static const struct svc_xprt svc_tcp_xpr
.xpt_prep_reply_hdr = svc_tcp_prep_reply_hdr,
.xpt_has_wspace = svc_tcp_has_wspace,
.xpt_max_payload = RPCSVC_MAXPAYLOAD_TCP,
+ .xpt_accept = svc_tcp_accept,
};
static void
@@ -1521,6 +1515,9 @@ svc_recv(struct svc_rqst *rqstp, long ti
if (test_bit(SK_CLOSE, &svsk->sk_flags)) {
dprintk("svc_recv: found SK_CLOSE\n");
svc_delete_socket(svsk);
+ } else if (svsk->sk_sk->sk_state == TCP_LISTEN) {
+ svsk->sk_xprt->xpt_accept(svsk);
+ svc_sock_received(svsk);
} else {
dprintk("svc: server %p, pool %u, socket %p, inuse=%d\n",
rqstp, pool->sp_id, svsk, atomic_read(&svsk->sk_inuse));
@@ -1660,6 +1657,7 @@ static struct svc_sock *svc_setup_socket
int is_temporary = flags & SVC_SOCK_TEMPORARY;
dprintk("svc: svc_setup_socket %p\n", sock);
+ *errp = 0;
if (!(svsk = kzalloc(sizeof(*svsk), GFP_KERNEL))) {
*errp = -ENOMEM;
return NULL;
-------------------------------------------------------------------------
Centralise the handling of the SK_CLOSE bit so that future
server transport implementations will be easier to write
correctly. The xpt_recvfrom method no longer needs to check
for SK_CLOSE; that is now handled in core code.
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Peter Leckie <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
net/sunrpc/svcsock.c | 28 +++++++++++++---------------
1 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 8fad53d..5c3a794 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -756,11 +756,6 @@ svc_udp_recvfrom(struct svc_rqst *rqstp)
return svc_deferred_recv(rqstp);
}
- if (test_bit(SK_CLOSE, &svsk->sk_flags)) {
- svc_delete_socket(svsk);
- return 0;
- }
-
clear_bit(SK_DATA, &svsk->sk_flags);
skb = NULL;
err = kernel_recvmsg(svsk->sk_sock, &msg, NULL,
@@ -1158,11 +1153,6 @@ svc_tcp_recvfrom(struct svc_rqst *rqstp)
return svc_deferred_recv(rqstp);
}
- if (test_bit(SK_CLOSE, &svsk->sk_flags)) {
- svc_delete_socket(svsk);
- return 0;
- }
-
if (svsk->sk_sk->sk_state == TCP_LISTEN) {
svc_tcp_accept(svsk);
svc_sock_received(svsk);
@@ -1406,8 +1396,10 @@ svc_tcp_init(struct svc_sock *svsk)
set_bit(SK_CHNGBUF, &svsk->sk_flags);
set_bit(SK_DATA, &svsk->sk_flags);
- if (sk->sk_state != TCP_ESTABLISHED)
+ if (sk->sk_state != TCP_ESTABLISHED) {
+ /* note: caller calls svc_sock_enqueue() */
set_bit(SK_CLOSE, &svsk->sk_flags);
+ }
}
}
@@ -1525,10 +1517,16 @@ svc_recv(struct svc_rqst *rqstp, long ti
}
spin_unlock_bh(&pool->sp_lock);
- dprintk("svc: server %p, pool %u, socket %p, inuse=%d\n",
- rqstp, pool->sp_id, svsk, atomic_read(&svsk->sk_inuse));
- len = svsk->sk_xprt->xpt_recvfrom(rqstp);
- dprintk("svc: got len=%d\n", len);
+ len = 0;
+ if (test_bit(SK_CLOSE, &svsk->sk_flags)) {
+ dprintk("svc_recv: found SK_CLOSE\n");
+ svc_delete_socket(svsk);
+ } else {
+ dprintk("svc: server %p, pool %u, socket %p, inuse=%d\n",
+ rqstp, pool->sp_id, svsk, atomic_read(&svsk->sk_inuse));
+ len = svsk->sk_xprt->xpt_recvfrom(rqstp);
+ dprintk("svc: got len=%d\n", len);
+ }
/* No data, incomplete (TCP) read, or accept() */
if (len == 0 || len == -EAGAIN) {
-------------------------------------------------------------------------
Reorganise the svc_sock initialisation code so that new service
transport code can use it without duplicating lots of code that
futzes with internal transport details (for example, the SK_BUSY
bit). Transport code should now call svc_sock_init() to initialise
the svc_sock structure, then one of svc_sock_add_listener(),
svc_sock_add_connection() or svc_sock_add_connectionless(), and
finally svc_sock_received(), as in the sketch below.
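Concretely, a transport's listener setup follows roughly this pattern
(sketch only; the foo_* name is illustrative and error handling is
elided):

static int foo_create_listener(struct svc_serv *serv)
{
	struct svc_sock *svsk;

	svsk = kzalloc(sizeof(*svsk), GFP_KERNEL);
	if (!svsk)
		return -ENOMEM;

	svc_sock_init(svsk, serv);	/* generic svc_sock fields */
	/* ... transport-specific setup of svsk goes here ... */
	svc_sock_add_listener(svsk);	/* or _connection/_connectionless */
	svc_sock_received(svsk);	/* ready for enqueueing */
	return 0;
}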
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 10 +++
net/sunrpc/svcsock.c | 143 ++++++++++++++++++++++++++--------------
2 files changed, 103 insertions(+), 50 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 9f37f30..7def951 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -116,6 +116,16 @@ int svc_addsock(struct svc_serv *serv,
void svc_sock_enqueue(struct svc_sock *svsk);
void svc_sock_received(struct svc_sock *svsk);
void __svc_sock_put(struct svc_sock *svsk);
+/* Initialise a newly allocated svc_sock. The transport code needs
+ * to call svc_sock_received() when transport-specific initialisation
+ * is complete and one of the svc_add_*() functions has been called. */
+void svc_sock_init(struct svc_sock *, struct svc_serv *);
+/* Add an initialised connection svc_sock to the server */
+void svc_sock_add_connection(struct svc_sock *);
+/* Add an initialised listener svc_sock to the server */
+void svc_sock_add_listener(struct svc_sock *);
+/* Add an initialised connectionless svc_sock to the server */
+void svc_sock_add_connectionless(struct svc_sock *);
/*
* svc_makesock socket characteristics
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 02f682a..7d219de 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1357,44 +1357,49 @@ static const struct svc_xprt svc_tcp_xpr
};
static void
-svc_tcp_init(struct svc_sock *svsk)
+svc_tcp_init_listener(struct svc_sock *svsk)
+{
+ struct sock *sk = svsk->sk_sk;
+
+ svsk->sk_xprt = &svc_tcp_xprt;
+
+ dprintk("setting up TCP socket for listening\n");
+ sk->sk_data_ready = svc_tcp_listen_data_ready;
+ set_bit(SK_LISTENER, &svsk->sk_flags);
+ set_bit(SK_CONN, &svsk->sk_flags);
+}
+
+static void
+svc_tcp_init_connection(struct svc_sock *svsk)
{
struct sock *sk = svsk->sk_sk;
struct tcp_sock *tp = tcp_sk(sk);
svsk->sk_xprt = &svc_tcp_xprt;
- if (sk->sk_state == TCP_LISTEN) {
- dprintk("setting up TCP socket for listening\n");
- sk->sk_data_ready = svc_tcp_listen_data_ready;
- set_bit(SK_LISTENER, &svsk->sk_flags);
- set_bit(SK_CONN, &svsk->sk_flags);
- } else {
- dprintk("setting up TCP socket for reading\n");
- sk->sk_state_change = svc_tcp_state_change;
- sk->sk_data_ready = svc_tcp_data_ready;
- sk->sk_write_space = svc_write_space;
+ dprintk("setting up TCP socket for reading\n");
+ sk->sk_state_change = svc_tcp_state_change;
+ sk->sk_data_ready = svc_tcp_data_ready;
+ sk->sk_write_space = svc_write_space;
- svsk->sk_reclen = 0;
- svsk->sk_tcplen = 0;
+ svsk->sk_reclen = 0;
+ svsk->sk_tcplen = 0;
- tp->nonagle = 1; /* disable Nagle's algorithm */
+ tp->nonagle = 1; /* disable Nagle's algorithm */
- /* initialise setting must have enough space to
- * receive and respond to one request.
- * svc_tcp_recvfrom will re-adjust if necessary
- */
- svc_sock_setbufsize(svsk->sk_sock,
- 3 * svsk->sk_server->sv_max_mesg,
- 3 * svsk->sk_server->sv_max_mesg);
+ /*
+ * Initialise setting must have enough space to receive and
+ * respond to one request. svc_tcp_recvfrom will re-adjust if
+ * necessary
+ */
+ svc_sock_setbufsize(svsk->sk_sock,
+ 3 * svsk->sk_server->sv_max_mesg,
+ 3 * svsk->sk_server->sv_max_mesg);
- set_bit(SK_CHNGBUF, &svsk->sk_flags);
- set_bit(SK_DATA, &svsk->sk_flags);
- if (sk->sk_state != TCP_ESTABLISHED) {
- /* note: caller calls svc_sock_enqueue() */
- set_bit(SK_CLOSE, &svsk->sk_flags);
- }
- }
+ set_bit(SK_CHNGBUF, &svsk->sk_flags);
+ set_bit(SK_DATA, &svsk->sk_flags);
+ if (sk->sk_state != TCP_ESTABLISHED)
+ set_bit(SK_CLOSE, &svsk->sk_flags);
}
void
@@ -1682,6 +1687,29 @@ static struct svc_sock *svc_setup_socket
svsk->sk_ostate = inet->sk_state_change;
svsk->sk_odata = inet->sk_data_ready;
svsk->sk_owspace = inet->sk_write_space;
+ svc_sock_init(svsk, serv);
+
+ /* Initialize the socket */
+ if (sock->type == SOCK_DGRAM) {
+ svc_udp_init(svsk);
+ svc_sock_add_connectionless(svsk);
+ } else if (inet->sk_state == TCP_LISTEN) {
+ BUG_ON(is_temporary);
+ svc_tcp_init_listener(svsk);
+ svc_sock_add_listener(svsk);
+ } else {
+ BUG_ON(!is_temporary);
+ svc_tcp_init_connection(svsk);
+ svc_sock_add_connection(svsk);
+ }
+
+ dprintk("svc: svc_setup_socket created %p (inet %p)\n",
+ svsk, svsk->sk_sk);
+ return svsk;
+}
+
+void svc_sock_init(struct svc_sock *svsk, struct svc_serv *serv)
+{
svsk->sk_server = serv;
atomic_set(&svsk->sk_inuse, 1);
svsk->sk_lastrecv = get_seconds();
@@ -1689,36 +1717,51 @@ static struct svc_sock *svc_setup_socket
INIT_LIST_HEAD(&svsk->sk_deferred);
INIT_LIST_HEAD(&svsk->sk_ready);
mutex_init(&svsk->sk_mutex);
+}
+EXPORT_SYMBOL_GPL(svc_sock_init);
- /* Initialize the socket */
- if (sock->type == SOCK_DGRAM)
- svc_udp_init(svsk);
- else
- svc_tcp_init(svsk);
+void svc_sock_add_connection(struct svc_sock *svsk)
+{
+ struct svc_serv *serv = svsk->sk_server;
spin_lock_bh(&serv->sv_lock);
- if (is_temporary) {
- set_bit(SK_TEMP, &svsk->sk_flags);
- list_add(&svsk->sk_list, &serv->sv_tempsocks);
- serv->sv_tmpcnt++;
- if (serv->sv_temptimer.function == NULL) {
- /* setup timer to age temp sockets */
- setup_timer(&serv->sv_temptimer, svc_age_temp_sockets,
- (unsigned long)serv);
- mod_timer(&serv->sv_temptimer,
- jiffies + svc_conn_age_period * HZ);
- }
- } else {
- clear_bit(SK_TEMP, &svsk->sk_flags);
- list_add(&svsk->sk_list, &serv->sv_permsocks);
+
+ set_bit(SK_TEMP, &svsk->sk_flags);
+ list_add(&svsk->sk_list, &serv->sv_tempsocks);
+ serv->sv_tmpcnt++;
+ if (serv->sv_temptimer.function == NULL) {
+ /* setup timer to age temp sockets */
+ setup_timer(&serv->sv_temptimer, svc_age_temp_sockets,
+ (unsigned long)serv);
+ mod_timer(&serv->sv_temptimer,
+ jiffies + svc_conn_age_period * HZ);
}
+
spin_unlock_bh(&serv->sv_lock);
+}
+EXPORT_SYMBOL_GPL(svc_sock_add_connection);
- dprintk("svc: svc_setup_socket created %p (inet %p)\n",
- svsk, svsk->sk_sk);
+static void svc_sock_add_permanent(struct svc_sock *svsk)
+{
+ struct svc_serv *serv = svsk->sk_server;
- return svsk;
+ BUG_ON(test_bit(SK_TEMP, &svsk->sk_flags));
+ spin_lock_bh(&serv->sv_lock);
+ list_add(&svsk->sk_list, &serv->sv_permsocks);
+ spin_unlock_bh(&serv->sv_lock);
+}
+
+void svc_sock_add_listener(struct svc_sock *svsk)
+{
+ svc_sock_add_permanent(svsk);
+}
+EXPORT_SYMBOL_GPL(svc_sock_add_listener);
+
+void svc_sock_add_connectionless(struct svc_sock *svsk)
+{
+ svc_sock_add_permanent(svsk);
}
+EXPORT_SYMBOL_GPL(svc_sock_add_connectionless);
int svc_addsock(struct svc_serv *serv,
int fd,
-------------------------------------------------------------------------
Use a new svc_sock flag, SK_LISTENER, that is permanently set on
listener sockets. Use that to test for listeners in a way that
is not TCP-specific and does not assume the presence of a socket.
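As an aside, this is what lets a non-socket transport share the same
dispatch path; a hypothetical (not in this series) listener setup
only needs to set the flag:

/* Hypothetical RDMA-style listener setup: no struct sock involved */
static void my_rdma_init_listener(struct svc_sock *svsk)
{
        svsk->sk_xprt = &my_rdma_xprt;          /* assumed transport ops */
        set_bit(SK_LISTENER, &svsk->sk_flags);  /* svc_recv() -> xpt_accept */
        set_bit(SK_CONN, &svsk->sk_flags);      /* request an accept pass */
}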
Signed-off-by: Greg Banks <[email protected]>
---
include/linux/sunrpc/svcsock.h | 1 +
net/sunrpc/svcsock.c | 3 ++-
2 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 7663578..ea8b62b 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -70,6 +70,7 @@ #define SK_CHNGBUF 7 /* need to change
#define SK_DEFERRED 8 /* request on sk_deferred */
#define SK_OLD 9 /* used for temp socket aging mark+sweep */
#define SK_DETACHED 10 /* detached from tempsocks list */
+#define SK_LISTENER 11 /* listener (e.g. TCP) socket */
atomic_t sk_reserved; /* space on outq that is reserved */
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 94eb921..dcb5c7a 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1368,6 +1368,7 @@ svc_tcp_init(struct svc_sock *svsk)
if (sk->sk_state == TCP_LISTEN) {
dprintk("setting up TCP socket for listening\n");
sk->sk_data_ready = svc_tcp_listen_data_ready;
+ set_bit(SK_LISTENER, &svsk->sk_flags);
set_bit(SK_CONN, &svsk->sk_flags);
} else {
dprintk("setting up TCP socket for reading\n");
@@ -1515,7 +1516,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
if (test_bit(SK_CLOSE, &svsk->sk_flags)) {
dprintk("svc_recv: found SK_CLOSE\n");
svc_delete_socket(svsk);
- } else if (svsk->sk_sk->sk_state == TCP_LISTEN) {
+ } else if (test_bit(SK_LISTENER, &svsk->sk_flags)) {
svsk->sk_xprt->xpt_accept(svsk);
svc_sock_received(svsk);
} else {
-------------------------------------------------------------------------
Export svc_sock_enqueue() and svc_sock_received() so they
can be used by sunrpc server transport implementations
(even future modular ones).
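A short sketch of the intended use by a modular transport (the
handler name is hypothetical): the transport's own completion or
data-ready callback marks the socket and lets the generic code pick
an idle thread.

/* Hypothetical data-ready callback in an out-of-tree transport */
static void my_xprt_data_ready(struct svc_sock *svsk)
{
        set_bit(SK_DATA, &svsk->sk_flags);      /* work for svc_recv() */
        svc_sock_enqueue(svsk);                 /* wake an idle nfsd thread */
}

svc_sock_received() is the matching call a transport makes once it
has picked the event up, mirroring what the existing socket receive
paths already do.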
Signed-off-by: Greg Banks <[email protected]>
Signed-off-by: Peter Leckie <[email protected]>
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 2 ++
net/sunrpc/svcsock.c | 7 ++++---
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 4e24e6d..0145057 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -108,6 +108,8 @@ int svc_addsock(struct svc_serv *serv,
int fd,
char *name_return,
int *proto);
+void svc_sock_enqueue(struct svc_sock *svsk);
+void svc_sock_received(struct svc_sock *svsk);
/*
* svc_makesock socket characteristics
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 0dc94a8..8fad53d 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -209,7 +209,7 @@ svc_release_skb(struct svc_rqst *rqstp)
* processes, wake 'em up.
*
*/
-static void
+void
svc_sock_enqueue(struct svc_sock *svsk)
{
struct svc_serv *serv = svsk->sk_server;
@@ -287,6 +287,7 @@ svc_sock_enqueue(struct svc_sock *svsk)
out_unlock:
spin_unlock_bh(&pool->sp_lock);
}
+EXPORT_SYMBOL_GPL(svc_sock_enqueue);
/*
* Dequeue the first socket. Must be called with the pool->sp_lock held.
@@ -315,14 +316,14 @@ svc_sock_dequeue(struct svc_pool *pool)
* Note: SK_DATA only gets cleared when a read-attempt finds
* no (or insufficient) data.
*/
-static inline void
+void
svc_sock_received(struct svc_sock *svsk)
{
svsk->sk_pool = NULL;
clear_bit(SK_BUSY, &svsk->sk_flags);
svc_sock_enqueue(svsk);
}
-
+EXPORT_SYMBOL_GPL(svc_sock_received);
/**
* svc_reserve - change the space reserved for the reply to a request.
-------------------------------------------------------------------------
Add calls to svc_register_transport for the built-in UDP and TCP
transports. The registration is done in the sunrpc module
initialization logic.
Signed-off-by: Tom Tucker <[email protected]>
---
net/sunrpc/sunrpc_syms.c | 2 ++
net/sunrpc/svcsock.c | 10 ++++++++--
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
index 73075de..c68577b 100644
--- a/net/sunrpc/sunrpc_syms.c
+++ b/net/sunrpc/sunrpc_syms.c
@@ -134,6 +134,7 @@ EXPORT_SYMBOL(nfsd_debug);
EXPORT_SYMBOL(nlm_debug);
#endif
+extern void init_svc_xprt(void);
extern struct cache_detail ip_map_cache, unix_gid_cache;
static int __init
@@ -156,6 +157,7 @@ #endif
cache_register(&ip_map_cache);
cache_register(&unix_gid_cache);
init_socket_xprt();
+ init_svc_xprt();
out:
return err;
}
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 6183951..d6443e8 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -933,7 +933,7 @@ svc_udp_has_wspace(struct svc_sock *svsk
return svc_sock_has_write_space(svsk, sock_wspace(svsk->sk_sk));
}
-static const struct svc_xprt svc_udp_xprt = {
+static struct svc_xprt svc_udp_xprt = {
.xpt_name = "udp",
.xpt_owner = THIS_MODULE,
.xpt_recvfrom = svc_udp_recvfrom,
@@ -1393,7 +1393,7 @@ svc_tcp_has_wspace(struct svc_sock *svsk
return svc_sock_has_write_space(svsk, sk_stream_wspace(svsk->sk_sk));
}
-static const struct svc_xprt svc_tcp_xprt = {
+static struct svc_xprt svc_tcp_xprt = {
.xpt_name = "tcp",
.xpt_owner = THIS_MODULE,
.xpt_recvfrom = svc_tcp_recvfrom,
@@ -1406,6 +1406,12 @@ static const struct svc_xprt svc_tcp_xpr
.xpt_accept = svc_tcp_accept,
};
+void init_svc_xprt(void)
+{
+ svc_register_transport(&svc_udp_xprt);
+ svc_register_transport(&svc_tcp_xprt);
+}
+
static void
svc_tcp_init_listener(struct svc_sock *svsk)
{
-------------------------------------------------------------------------
Add exported functions for transport modules to register and
unregister themselves with the sunrpc server-side transport switch.
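The expected pattern for a loadable transport is to register at
module load and unregister at unload; the ops structure and names
below are illustrative only.

#include <linux/module.h>
#include <linux/sunrpc/svcsock.h>

static struct svc_xprt my_rdma_xprt = {
        .xpt_name  = "rdma",
        .xpt_owner = THIS_MODULE,
        /* .xpt_recvfrom, .xpt_sendto, .xpt_accept, ... supplied by the
         * transport implementation */
};

static int __init my_rdma_init(void)
{
        return svc_register_transport(&my_rdma_xprt);   /* 0 or -EEXIST */
}

static void __exit my_rdma_exit(void)
{
        svc_unregister_transport(&my_rdma_xprt);        /* -ENOENT if absent */
}

module_init(my_rdma_init);
module_exit(my_rdma_exit);
MODULE_LICENSE("GPL");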
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 6 +++++
net/sunrpc/svcsock.c | 50 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+), 0 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index 7def951..cc911ab 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -13,6 +13,7 @@ #include <linux/sunrpc/svc.h>
struct svc_xprt {
const char *xpt_name;
+ struct module *xpt_owner;
int (*xpt_recvfrom)(struct svc_rqst *rqstp);
int (*xpt_sendto)(struct svc_rqst *rqstp);
/*
@@ -45,7 +46,10 @@ struct svc_xprt {
* Accept a pending connection, for connection-oriented transports
*/
int (*xpt_accept)(struct svc_sock *svsk);
+ /* Transport list link */
+ struct list_head xpt_list;
};
+extern struct list_head svc_transport_list;
/*
* RPC server socket.
@@ -102,6 +106,8 @@ #define SK_LISTENER 11 /* listener (e.
/*
* Function prototypes.
*/
+int svc_register_transport(struct svc_xprt *xprt);
+int svc_unregister_transport(struct svc_xprt *xprt);
int svc_makesock(struct svc_serv *, int, unsigned short, int flags);
void svc_force_close_socket(struct svc_sock *);
int svc_recv(struct svc_rqst *, long);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 6acf22f..6183951 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -91,6 +91,54 @@ static struct svc_deferred_req *svc_defe
static int svc_deferred_recv(struct svc_rqst *rqstp);
static struct cache_deferred_req *svc_defer(struct cache_req *req);
+/* List of registered transports */
+static spinlock_t svc_transport_lock = SPIN_LOCK_UNLOCKED;
+LIST_HEAD(svc_transport_list);
+
+int svc_register_transport(struct svc_xprt *xprt)
+{
+ struct svc_xprt *ops;
+ int res;
+
+ dprintk("svc: Adding svc transport '%s'\n",
+ xprt->xpt_name);
+
+ res = -EEXIST;
+ INIT_LIST_HEAD(&xprt->xpt_list);
+ spin_lock(&svc_transport_lock);
+ list_for_each_entry(ops, &svc_transport_list, xpt_list) {
+ if (xprt == ops)
+ goto out;
+ }
+ list_add_tail(&xprt->xpt_list, &svc_transport_list);
+ res = 0;
+out:
+ spin_unlock(&svc_transport_lock);
+ return res;
+}
+EXPORT_SYMBOL_GPL(svc_register_transport);
+
+int svc_unregister_transport(struct svc_xprt *xprt)
+{
+ struct svc_xprt *ops;
+ int res = 0;
+
+ dprintk("svc: Removing svc transport '%s'\n", xprt->xpt_name);
+
+ spin_lock(&svc_transport_lock);
+ list_for_each_entry(ops, &svc_transport_list, xpt_list) {
+ if (xprt == ops) {
+ list_del_init(&ops->xpt_list);
+ goto out;
+ }
+ }
+ res = -ENOENT;
+ out:
+ spin_unlock(&svc_transport_lock);
+ return res;
+}
+EXPORT_SYMBOL_GPL(svc_unregister_transport);
+
/* apparently the "standard" is that clients close
* idle connections after 5 minutes, servers after
* 6 minutes
@@ -887,6 +935,7 @@ svc_udp_has_wspace(struct svc_sock *svsk
static const struct svc_xprt svc_udp_xprt = {
.xpt_name = "udp",
+ .xpt_owner = THIS_MODULE,
.xpt_recvfrom = svc_udp_recvfrom,
.xpt_sendto = svc_udp_sendto,
.xpt_detach = svc_sock_detach,
@@ -1346,6 +1395,7 @@ svc_tcp_has_wspace(struct svc_sock *svsk
static const struct svc_xprt svc_tcp_xprt = {
.xpt_name = "tcp",
+ .xpt_owner = THIS_MODULE,
.xpt_recvfrom = svc_tcp_recvfrom,
.xpt_sendto = svc_tcp_sendto,
.xpt_detach = svc_sock_detach,
-------------------------------------------------------------------------
Create a /proc/sys/sunrpc/transports file that lists the currently
registered transports.
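Each registered transport is printed as "<name> <max payload>" (see
svc_print_transports() in the diff below), so with only the built-in
socket transports loaded a read would look roughly like this; the
payload figures are illustrative, not values defined by this patch:

# cat /proc/sys/sunrpc/transports
udp 32768
tcp 1048576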
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/debug.h | 1 +
net/sunrpc/svcsock.c | 28 ++++++++++++++++++++++++++++
net/sunrpc/sysctl.c | 40 +++++++++++++++++++++++++++++++++++++++-
3 files changed, 68 insertions(+), 1 deletions(-)
diff --git a/include/linux/sunrpc/debug.h b/include/linux/sunrpc/debug.h
index 10709cb..89458df 100644
--- a/include/linux/sunrpc/debug.h
+++ b/include/linux/sunrpc/debug.h
@@ -88,6 +88,7 @@ enum {
CTL_SLOTTABLE_TCP,
CTL_MIN_RESVPORT,
CTL_MAX_RESVPORT,
+ CTL_TRANSPORTS,
};
#endif /* _LINUX_SUNRPC_DEBUG_H_ */
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index d6443e8..276737e 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -139,6 +139,34 @@ int svc_unregister_transport(struct svc_
}
EXPORT_SYMBOL_GPL(svc_unregister_transport);
+/*
+ * Format the transport list for printing
+ */
+int svc_print_transports(char *buf, int maxlen)
+{
+ struct list_head *le;
+ char tmpstr[80];
+ int len = 0;
+ buf[0] = '\0';
+
+ spin_lock(&svc_transport_lock);
+ list_for_each(le, &svc_transport_list) {
+ int slen;
+ struct svc_xprt *xprt =
+ list_entry(le, struct svc_xprt, xpt_list);
+
+ sprintf(tmpstr, "%s %d\n", xprt->xpt_name, xprt->xpt_max_payload);
+ slen = strlen(tmpstr);
+ if (len + slen > maxlen)
+ break;
+ len += slen;
+ strcat(buf, tmpstr);
+ }
+ spin_unlock(&svc_transport_lock);
+
+ return len;
+}
+
/* apparently the "standard" is that clients close
* idle connections after 5 minutes, servers after
* 6 minutes
diff --git a/net/sunrpc/sysctl.c b/net/sunrpc/sysctl.c
index 738db32..683cf90 100644
--- a/net/sunrpc/sysctl.c
+++ b/net/sunrpc/sysctl.c
@@ -27,6 +27,9 @@ unsigned int nfs_debug;
unsigned int nfsd_debug;
unsigned int nlm_debug;
+/* Transport string */
+char xprt_buf[128];
+
#ifdef RPC_DEBUG
static struct ctl_table_header *sunrpc_table_header;
@@ -48,6 +51,34 @@ rpc_unregister_sysctl(void)
}
}
+int svc_print_transports(char *buf, int maxlen);
+static int proc_do_xprt(ctl_table *table, int write, struct file *file,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ char tmpbuf[128];
+ int len;
+ if ((*ppos && !write) || !*lenp) {
+ *lenp = 0;
+ return 0;
+ }
+
+ if (write)
+ return -EINVAL;
+ else {
+
+ len = svc_print_transports(tmpbuf, 128);
+ if (!access_ok(VERIFY_WRITE, buffer, len))
+ return -EFAULT;
+
+ if (__copy_to_user(buffer, tmpbuf, len))
+ return -EFAULT;
+ }
+
+ *lenp -= len;
+ *ppos += len;
+ return 0;
+}
+
static int
proc_dodebug(ctl_table *table, int write, struct file *file,
void __user *buffer, size_t *lenp, loff_t *ppos)
@@ -111,7 +142,6 @@ done:
return 0;
}
-
static ctl_table debug_table[] = {
{
.ctl_name = CTL_RPCDEBUG,
@@ -145,6 +175,14 @@ static ctl_table debug_table[] = {
.mode = 0644,
.proc_handler = &proc_dodebug
},
+ {
+ .ctl_name = CTL_TRANSPORTS,
+ .procname = "transports",
+ .data = xprt_buf,
+ .maxlen = sizeof(xprt_buf),
+ .mode = 0444,
+ .proc_handler = &proc_do_xprt,
+ },
{ .ctl_name = 0 }
};
-------------------------------------------------------------------------
The RDMA transport includes an ONCRDMA header that precedes the RPC
message. This header needs to be saved in addition to the RPC message
itself. The RPC transport uses page swapping to implement copy avoidance.
These transport dependencies are hidden in the xpt_defer routine,
allowing the bulk of the deferral processing to remain in
transport-independent code.
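For illustration, a transport-specific defer routine would look
something like the sketch below; the my_rdma_* helpers are
hypothetical, and the only fixed points are the xpt_defer prototype
and the exported svc_revisit() used as the revisit callback.
Presumably the built-in socket transports set .xpt_defer = svc_defer
in the same way.

/* Hypothetical xpt_defer for an RDMA-style transport */
static struct cache_deferred_req *my_rdma_defer(struct cache_req *req)
{
        struct svc_rqst *rqstp = container_of(req, struct svc_rqst,
                                              rq_chandle);
        struct svc_deferred_req *dr;

        /* Save the transport header *and* the RPC message; this is
         * the transport-dependent part (assumed helper). */
        dr = my_rdma_save_request(rqstp);
        if (!dr)
                return NULL;

        dr->handle.revisit = svc_revisit;       /* generic requeue path */
        return &dr->handle;
}

The transport then wires it up with .xpt_defer = my_rdma_defer in its
struct svc_xprt.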
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 5 +++++
net/sunrpc/svcsock.c | 5 +++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index a920e9b..145c82b 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -51,6 +51,10 @@ struct svc_xprt {
* Accept a pending connection, for connection-oriented transports
*/
int (*xpt_accept)(struct svc_sock *svsk);
+
+ /* RPC defer routine. */
+ struct cache_deferred_req *(*xpt_defer)(struct cache_req *req);
+
/* Transport list link */
struct list_head xpt_list;
};
@@ -138,6 +142,7 @@ void svc_sock_add_connection(struct svc
void svc_sock_add_listener(struct svc_sock *);
/* Add an initialised connectionless svc_sock to the server */
void svc_sock_add_connectionless(struct svc_sock *);
+void svc_revisit(struct cache_deferred_req *dreq, int too_many);
/*
* svc_makesock socket characteristics
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 03ce7e9..b89c577 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1651,7 +1651,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
clear_bit(SK_OLD, &svsk->sk_flags);
rqstp->rq_secure = svc_port_is_privileged(svc_addr(rqstp));
- rqstp->rq_chandle.defer = svc_defer;
+ rqstp->rq_chandle.defer = svsk->sk_xprt->xpt_defer;
if (serv->sv_stats)
serv->sv_stats->netcnt++;
@@ -2116,7 +2116,7 @@ EXPORT_SYMBOL_GPL(svc_create_svcsock);
* Handle defer and revisit of requests
*/
-static void svc_revisit(struct cache_deferred_req *dreq, int too_many)
+void svc_revisit(struct cache_deferred_req *dreq, int too_many)
{
struct svc_deferred_req *dr = container_of(dreq, struct svc_deferred_req, handle);
struct svc_sock *svsk;
@@ -2136,6 +2136,7 @@ static void svc_revisit(struct cache_def
svc_sock_enqueue(svsk);
svc_sock_put(svsk);
}
+EXPORT_SYMBOL_GPL(svc_revisit);
static struct cache_deferred_req *
svc_defer(struct cache_req *req)
-------------------------------------------------------------------------
Update the write handler for the portlist file to allow creating new
listening endpoints on a transport. The general form of the string is:
<transport_name><space><port number>
For example:
tcp 2049
This is intended to support the creation of a listening endpoint for
RDMA transports without adding #ifdef code to the nfssvc.c file.
The "built-in" transports UDP/TCP were left in the nfssvc initialization
code to avoid having to change rpc.nfsd, etc...
Signed-off-by: Tom Tucker <[email protected]>
---
fs/nfsd/nfsctl.c | 17 +++++++++++++++++
1 files changed, 17 insertions(+), 0 deletions(-)
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 71c686d..da2abda 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -555,6 +555,23 @@ static ssize_t write_ports(struct file *
kfree(toclose);
return len;
}
+ /* This implements the ability to add a transport by writing
+ * its transport name to the portlist file
+ */
+ if (isalnum(buf[0])) {
+ int err;
+ char transport[16];
+ int port;
+ if (sscanf(buf, "%15s %4d", transport, &port) == 2) {
+ err = nfsd_create_serv();
+ if (!err)
+ err = svc_create_svcsock(nfsd_serv,
+ transport, port,
+ SVC_SOCK_ANONYMOUS);
+ return err < 0 ? err : 0;
+ }
+ }
+
return -EINVAL;
}
-------------------------------------------------------------------------
Add a transport function to return a formatted socket name.
This is used by knfsd when satisfying reads of the portlist file.
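For a transport that has no struct socket to inspect, the
implementation can be as simple as the hypothetical sketch below; the
only contract is to format a one-line name for the endpoint into buf
and return its length, which is what svc_sock_get_name() does for IP
sockets.

/* Hypothetical xpt_get_name for an RDMA-style transport */
static int my_rdma_get_name(char *buf, struct svc_sock *svsk)
{
        /* my_rdma_port() is an assumed transport-private helper */
        return sprintf(buf, "rdma %u\n", my_rdma_port(svsk));
}

/* wired up in the transport's ops: .xpt_get_name = my_rdma_get_name, */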
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 1 +
net/sunrpc/svcsock.c | 17 ++++++++++++-----
2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index e2d0256..a920e9b 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -18,6 +18,7 @@ struct svc_xprt {
int (*xpt_create_svc)(struct svc_serv *,
struct sockaddr *,
int);
+ int (*xpt_get_name)(char *, struct svc_sock*);
int (*xpt_recvfrom)(struct svc_rqst *rqstp);
int (*xpt_sendto)(struct svc_rqst *rqstp);
/*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 44d6484..03ce7e9 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -622,10 +622,7 @@ out:
return len;
}
-/*
- * Report socket names for nfsdfs
- */
-static int one_sock_name(char *buf, struct svc_sock *svsk)
+static int svc_sock_get_name(char *buf, struct svc_sock *svsk)
{
int len;
@@ -639,11 +636,19 @@ static int one_sock_name(char *buf, stru
break;
default:
len = sprintf(buf, "*unknown-%d*\n",
- svsk->sk_sk->sk_family);
+ svsk->sk_sk->sk_family);
}
return len;
}
+/*
+ * Report socket names for nfsdfs
+ */
+static int one_sock_name(char *buf, struct svc_sock *svsk)
+{
+ return svsk->sk_xprt->xpt_get_name(buf,svsk);
+}
+
int
svc_sock_names(char *buf, struct svc_serv *serv, char *toclose)
{
@@ -975,6 +980,7 @@ static struct svc_xprt svc_udp_xprt = {
.xpt_name = "udp",
.xpt_owner = THIS_MODULE,
.xpt_create_svc = svc_udp_create_svc,
+ .xpt_get_name = svc_sock_get_name,
.xpt_recvfrom = svc_udp_recvfrom,
.xpt_sendto = svc_udp_sendto,
.xpt_detach = svc_sock_detach,
@@ -1443,6 +1449,7 @@ static struct svc_xprt svc_tcp_xprt = {
.xpt_name = "tcp",
.xpt_owner = THIS_MODULE,
.xpt_create_svc = svc_tcp_create_svc,
+ .xpt_get_name = svc_sock_get_name,
.xpt_recvfrom = svc_tcp_recvfrom,
.xpt_sendto = svc_tcp_sendto,
.xpt_detach = svc_sock_detach,
-------------------------------------------------------------------------
Add a transport function that makes the creation of a listening
endpoint transport-independent.
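Two illustrative fragments of how the pieces fit together; the
my_rdma_* names are hypothetical. A transport supplies
xpt_create_svc, and code that only knows the transport's name calls
svc_create_svcsock().

/* Provider side: create the transport's listening endpoint */
static int my_rdma_create_svc(struct svc_serv *serv, struct sockaddr *sa,
                              int flags)
{
        /* assumed transport-private endpoint creation */
        return my_rdma_listen(serv, (struct sockaddr_in *)sa, flags);
}
/* ... .xpt_create_svc = my_rdma_create_svc, in the svc_xprt ops ... */

/* Consumer side: look the transport up by name */
static int my_add_listener(struct svc_serv *serv, unsigned short port)
{
        /* returns -ENOENT if no transport called "rdma" is registered */
        return svc_create_svcsock(serv, "rdma", port, SVC_SOCK_DEFAULTS);
}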
Signed-off-by: Tom Tucker <[email protected]>
---
include/linux/sunrpc/svcsock.h | 5 +++
net/sunrpc/svcsock.c | 65 +++++++++++++++++++++++++++++++++++++---
2 files changed, 65 insertions(+), 5 deletions(-)
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h
index cc911ab..e2d0256 100644
--- a/include/linux/sunrpc/svcsock.h
+++ b/include/linux/sunrpc/svcsock.h
@@ -14,6 +14,10 @@ #include <linux/sunrpc/svc.h>
struct svc_xprt {
const char *xpt_name;
struct module *xpt_owner;
+ /* Create an svc socket for this transport */
+ int (*xpt_create_svc)(struct svc_serv *,
+ struct sockaddr *,
+ int);
int (*xpt_recvfrom)(struct svc_rqst *rqstp);
int (*xpt_sendto)(struct svc_rqst *rqstp);
/*
@@ -109,6 +113,7 @@ #define SK_LISTENER 11 /* listener (e.
int svc_register_transport(struct svc_xprt *xprt);
int svc_unregister_transport(struct svc_xprt *xprt);
int svc_makesock(struct svc_serv *, int, unsigned short, int flags);
+int svc_create_svcsock(struct svc_serv *, char *, unsigned short, int);
void svc_force_close_socket(struct svc_sock *);
int svc_recv(struct svc_rqst *, long);
int svc_send(struct svc_rqst *);
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 276737e..44d6484 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -87,6 +87,8 @@ static void svc_close_socket(struct svc
static void svc_sock_detach(struct svc_sock *);
static void svc_sock_free(struct svc_sock *);
+static int
+svc_create_socket(struct svc_serv *, int, struct sockaddr *, int, int);
static struct svc_deferred_req *svc_deferred_dequeue(struct svc_sock *svsk);
static int svc_deferred_recv(struct svc_rqst *rqstp);
static struct cache_deferred_req *svc_defer(struct cache_req *req);
@@ -434,6 +436,7 @@ __svc_sock_put(struct svc_sock *svsk)
if (svsk->sk_info_authunix != NULL)
svcauth_unix_info_release(svsk->sk_info_authunix);
+ module_put(svsk->sk_xprt->xpt_owner);
svsk->sk_xprt->xpt_free(svsk);
}
EXPORT_SYMBOL_GPL(__svc_sock_put);
@@ -961,9 +964,17 @@ svc_udp_has_wspace(struct svc_sock *svsk
return svc_sock_has_write_space(svsk, sock_wspace(svsk->sk_sk));
}
+static int
+svc_udp_create_svc(struct svc_serv *serv, struct sockaddr *sa, int flags)
+{
+ return svc_create_socket(serv, IPPROTO_UDP, sa,
+ sizeof(struct sockaddr_in), flags);
+}
+
static struct svc_xprt svc_udp_xprt = {
.xpt_name = "udp",
.xpt_owner = THIS_MODULE,
+ .xpt_create_svc = svc_udp_create_svc,
.xpt_recvfrom = svc_udp_recvfrom,
.xpt_sendto = svc_udp_sendto,
.xpt_detach = svc_sock_detach,
@@ -1421,9 +1432,17 @@ svc_tcp_has_wspace(struct svc_sock *svsk
return svc_sock_has_write_space(svsk, sk_stream_wspace(svsk->sk_sk));
}
+static int
+svc_tcp_create_svc(struct svc_serv *serv, struct sockaddr *sa, int flags)
+{
+ return svc_create_socket(serv, IPPROTO_TCP, sa,
+ sizeof(struct sockaddr_in), flags);
+}
+
static struct svc_xprt svc_tcp_xprt = {
.xpt_name = "tcp",
.xpt_owner = THIS_MODULE,
+ .xpt_create_svc = svc_tcp_create_svc,
.xpt_recvfrom = svc_tcp_recvfrom,
.xpt_sendto = svc_tcp_sendto,
.xpt_detach = svc_sock_detach,
@@ -1606,6 +1625,7 @@ svc_recv(struct svc_rqst *rqstp, long ti
svc_delete_socket(svsk);
} else if (test_bit(SK_LISTENER, &svsk->sk_flags)) {
svsk->sk_xprt->xpt_accept(svsk);
+ __module_get(svsk->sk_xprt->xpt_owner);
svc_sock_received(svsk);
} else {
dprintk("svc: server %p, pool %u, socket %p, inuse=%d\n",
@@ -1885,7 +1905,7 @@ EXPORT_SYMBOL_GPL(svc_addsock);
* Create socket for RPC service.
*/
static int svc_create_socket(struct svc_serv *serv, int protocol,
- struct sockaddr *sin, int len, int flags)
+ struct sockaddr *sin, int len, int flags)
{
struct svc_sock *svsk;
struct socket *sock;
@@ -2037,18 +2057,53 @@ void svc_force_close_socket(struct svc_s
*
*/
int svc_makesock(struct svc_serv *serv, int protocol, unsigned short port,
- int flags)
+ int flags)
{
+ dprintk("svc: creating socket proto = %d\n", protocol);
+ switch (protocol) {
+ case IPPROTO_TCP:
+ return svc_create_svcsock(serv, "tcp", port, flags);
+ case IPPROTO_UDP:
+ return svc_create_svcsock(serv, "udp", port, flags);
+ default:
+ return -EINVAL;
+ }
+}
+
+int svc_create_svcsock(struct svc_serv *serv, char *transport, unsigned short port,
+ int flags)
+{
+ int ret = -ENOENT;
+ struct list_head *le;
struct sockaddr_in sin = {
.sin_family = AF_INET,
.sin_addr.s_addr = INADDR_ANY,
.sin_port = htons(port),
};
+ dprintk("svc: creating transport socket %s[%d]\n", transport, port);
+ spin_lock(&svc_transport_lock);
+ list_for_each(le, &svc_transport_list) {
+ struct svc_xprt *xprt =
+ list_entry(le, struct svc_xprt, xpt_list);
- dprintk("svc: creating socket proto = %d\n", protocol);
- return svc_create_socket(serv, protocol, (struct sockaddr *) &sin,
- sizeof(sin), flags);
+ if (strcmp(transport, xprt->xpt_name)==0) {
+ spin_unlock(&svc_transport_lock);
+ if (try_module_get(xprt->xpt_owner)) {
+ ret = xprt->xpt_create_svc(serv,
+ (struct sockaddr*)&sin,
+ flags);
+ if (ret < 0)
+ module_put(xprt->xpt_owner);
+ goto out;
+ }
+ }
+ }
+ spin_unlock(&svc_transport_lock);
+ dprintk("svc: transport %s not found\n", transport);
+ out:
+ return ret;
}
+EXPORT_SYMBOL_GPL(svc_create_svcsock);
/*
* Handle defer and revisit of requests
-------------------------------------------------------------------------
Change the calls to the socket-specific svc_makesock to calls to the
transport-independent svc_create_svcsock function. This avoids
conditional #ifdefs for the rdma transport in the nfsd code.
Signed-off-by: Tom Tucker <[email protected]>
---
fs/nfsd/nfssvc.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index ff55950..1499beb 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -235,8 +235,8 @@ static int nfsd_init_socks(int port)
error = lockd_up(IPPROTO_UDP);
if (error >= 0) {
- error = svc_makesock(nfsd_serv, IPPROTO_UDP, port,
- SVC_SOCK_DEFAULTS);
+ error = svc_create_svcsock(nfsd_serv, "udp", port,
+ SVC_SOCK_DEFAULTS);
if (error < 0)
lockd_down();
}
@@ -246,8 +246,8 @@ static int nfsd_init_socks(int port)
#ifdef CONFIG_NFSD_TCP
error = lockd_up(IPPROTO_TCP);
if (error >= 0) {
- error = svc_makesock(nfsd_serv, IPPROTO_TCP, port,
- SVC_SOCK_DEFAULTS);
+ error = svc_create_svcsock(nfsd_serv, "tcp", port,
+ SVC_SOCK_DEFAULTS);
if (error < 0)
lockd_down();
}
-------------------------------------------------------------------------