I'd like these 25 patches to be considered for inclusion in 2.6.19.
1. A patch that increases the minimum allowed port number to avoid a port
used by some IPMI implementations.
2. A set of patches that provides a clean API for supporting per-transport
RPC bind implementations. It also improves scalability by replacing the
single portmapper spinlock with a per-transport bit lock.
3. A set of patches that replaces the two-call API for instantiating an RPC
transport with a single-call API that allows for remote addresses of
arbitrary size.
4. A set of patches that modifies the NFS client's symlink creation logic
to use the page cache. Symlink creation is the only NFS operation that
uses large RPC buffers; moving the symlink data into the page cache means
the client can now use small RPC buffers everywhere.
Trond, these are the patches you reviewed two weeks ago, with the
modifications we discussed and one or two minor additions. Over the past
several months they have been tested, in various forms, in many different
environments.
Any further comments or review are welcome.
-------------------------------------------------------------------------
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs
Some mainboards use port 664 for a hardware-based IPMI listener. Teach
the RPC client to avoid that port by raising the default minimum port
number to 665.
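The effect of the one-line change below can be sketched in userspace. This is an illustrative stand-in, not the kernel's allocator; only the two `RPC_DEF_*_RESVPORT` constants come from the patch, and `port_is_candidate` is a hypothetical helper:

```c
#include <assert.h>

/* Illustrative sketch: with the default minimum reserved port raised
 * from 650 to 665, the IPMI port 664 falls outside the range the RPC
 * client will ever pick as a source port. */
#define RPC_DEF_MIN_RESVPORT	665U
#define RPC_DEF_MAX_RESVPORT	1023U

/* Return nonzero if the RPC client may bind to this source port. */
static int port_is_candidate(unsigned int port)
{
	return port >= RPC_DEF_MIN_RESVPORT && port <= RPC_DEF_MAX_RESVPORT;
}
```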
Test plan:
Find a mainboard known to use port 664 for IPMI; enable IPMI; mount NFS
servers in a tight loop.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/xprt.h | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 840e47a..3a0cca2 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -37,7 +37,7 @@ extern unsigned int xprt_max_resvport;
#define RPC_MIN_RESVPORT (1U)
#define RPC_MAX_RESVPORT (65535U)
-#define RPC_DEF_MIN_RESVPORT (650U)
+#define RPC_DEF_MIN_RESVPORT (665U)
#define RPC_DEF_MAX_RESVPORT (1023U)
/*
Hide the contents and format of xprt->addr by eliminating direct uses
of the xprt->addr.sin_port field. This change is required to support
alternate RPC host address formats (e.g., IPv6).
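The new XPRT_BOUND state bit replaces the sin_port check with an address-format-neutral flag. A minimal userspace sketch of the pattern, assuming a stand-in `struct demo_xprt` in place of rpc_xprt (the kernel versions use atomic set_bit/test_bit on xprt->state):

```c
/* Sketch of the XPRT_BOUND flag this patch introduces: callers ask
 * "is this transport bound?" instead of peeking at xprt->addr.sin_port,
 * so the address format stays private to the transport. */
#define XPRT_BOUND	4

struct demo_xprt {
	unsigned long state;	/* bit flags, like rpc_xprt.state */
};

static void xprt_set_bound(struct demo_xprt *xprt)
{
	xprt->state |= 1UL << XPRT_BOUND;
}

static int xprt_bound(struct demo_xprt *xprt)
{
	return (xprt->state >> XPRT_BOUND) & 1;
}

static void xprt_clear_bound(struct demo_xprt *xprt)
{
	xprt->state &= ~(1UL << XPRT_BOUND);
}
```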
Test-plan:
Destructive testing (unplugging the network temporarily). Repeated runs of
Connectathon locking suite with UDP and TCP.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/xprt.h | 16 ++++++++++++++++
net/sunrpc/clnt.c | 10 +++++-----
net/sunrpc/xprt.c | 6 +++++-
net/sunrpc/xprtsock.c | 16 ++++++++++++----
4 files changed, 38 insertions(+), 10 deletions(-)
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 3a0cca2..e65474f 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -269,6 +269,7 @@ #define XPRT_LOCKED (0)
#define XPRT_CONNECTED (1)
#define XPRT_CONNECTING (2)
#define XPRT_CLOSE_WAIT (3)
+#define XPRT_BOUND (4)
static inline void xprt_set_connected(struct rpc_xprt *xprt)
{
@@ -312,6 +313,21 @@ static inline int xprt_test_and_set_conn
return test_and_set_bit(XPRT_CONNECTING, &xprt->state);
}
+static inline void xprt_set_bound(struct rpc_xprt *xprt)
+{
+ set_bit(XPRT_BOUND, &xprt->state);
+}
+
+static inline int xprt_bound(struct rpc_xprt *xprt)
+{
+ return test_bit(XPRT_BOUND, &xprt->state);
+}
+
+static inline void xprt_clear_bound(struct rpc_xprt *xprt)
+{
+ clear_bit(XPRT_BOUND, &xprt->state);
+}
+
#endif /* __KERNEL__*/
#endif /* _LINUX_SUNRPC_XPRT_H */
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index d6409e7..4f353dd 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -148,7 +148,6 @@ rpc_new_client(struct rpc_xprt *xprt, ch
clnt->cl_maxproc = version->nrprocs;
clnt->cl_protname = program->name;
clnt->cl_pmap = &clnt->cl_pmap_default;
- clnt->cl_port = xprt->addr.sin_port;
clnt->cl_prog = program->number;
clnt->cl_vers = version->number;
clnt->cl_prot = xprt->prot;
@@ -156,7 +155,7 @@ rpc_new_client(struct rpc_xprt *xprt, ch
clnt->cl_metrics = rpc_alloc_iostats(clnt);
rpc_init_wait_queue(&clnt->cl_pmap_default.pm_bindwait, "bindwait");
- if (!clnt->cl_port)
+ if (!xprt_bound(clnt->cl_xprt))
clnt->cl_autobind = 1;
clnt->cl_rtt = &clnt->cl_rtt_default;
@@ -573,7 +572,7 @@ EXPORT_SYMBOL(rpc_max_payload);
void rpc_force_rebind(struct rpc_clnt *clnt)
{
if (clnt->cl_autobind)
- clnt->cl_port = 0;
+ xprt_clear_bound(clnt->cl_xprt);
}
EXPORT_SYMBOL(rpc_force_rebind);
@@ -785,14 +784,15 @@ static void
call_bind(struct rpc_task *task)
{
struct rpc_clnt *clnt = task->tk_client;
+ struct rpc_xprt *xprt = task->tk_xprt;
dprintk("RPC: %4d call_bind (status %d)\n",
task->tk_pid, task->tk_status);
task->tk_action = call_connect;
- if (!clnt->cl_port) {
+ if (!xprt_bound(xprt)) {
task->tk_action = call_bind_status;
- task->tk_timeout = task->tk_xprt->bind_timeout;
+ task->tk_timeout = xprt->bind_timeout;
rpc_getport(task, clnt);
}
}
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index e8c2bc4..10ba1f6 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -534,7 +534,11 @@ void xprt_connect(struct rpc_task *task)
dprintk("RPC: %4d xprt_connect xprt %p %s connected\n", task->tk_pid,
xprt, (xprt_connected(xprt) ? "is" : "is not"));
- if (!xprt->addr.sin_port) {
+ if (xprt->shutdown) {
+ task->tk_status = -EIO;
+ return;
+ }
+ if (!xprt_bound(xprt)) {
task->tk_status = -EIO;
return;
}
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 441bd53..43b59c2 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -974,6 +974,8 @@ static void xs_set_port(struct rpc_xprt
{
dprintk("RPC: setting port for xprt %p to %u\n", xprt, port);
xprt->addr.sin_port = htons(port);
+ if (port != 0)
+ xprt_set_bound(xprt);
}
static int xs_bindresvport(struct rpc_xprt *xprt, struct socket *sock)
@@ -1016,7 +1018,7 @@ static void xs_udp_connect_worker(void *
struct socket *sock = xprt->sock;
int err, status = -EIO;
- if (xprt->shutdown || xprt->addr.sin_port == 0)
+ if (xprt->shutdown || !xprt_bound(xprt))
goto out;
dprintk("RPC: xs_udp_connect_worker for xprt %p\n", xprt);
@@ -1099,7 +1101,7 @@ static void xs_tcp_connect_worker(void *
struct socket *sock = xprt->sock;
int err, status = -EIO;
- if (xprt->shutdown || xprt->addr.sin_port == 0)
+ if (xprt->shutdown || !xprt_bound(xprt))
goto out;
dprintk("RPC: xs_tcp_connect_worker for xprt %p\n", xprt);
@@ -1307,8 +1309,11 @@ int xs_setup_udp(struct rpc_xprt *xprt,
if (xprt->slot == NULL)
return -ENOMEM;
- xprt->prot = IPPROTO_UDP;
+ if (ntohs(xprt->addr.sin_port) != 0)
+ xprt_set_bound(xprt);
xprt->port = xs_get_random_port();
+
+ xprt->prot = IPPROTO_UDP;
xprt->tsh_size = 0;
xprt->resvport = capable(CAP_NET_BIND_SERVICE) ? 1 : 0;
/* XXX: header size can vary due to auth type, IPv6, etc. */
@@ -1348,8 +1353,11 @@ int xs_setup_tcp(struct rpc_xprt *xprt,
if (xprt->slot == NULL)
return -ENOMEM;
- xprt->prot = IPPROTO_TCP;
+ if (ntohs(xprt->addr.sin_port) != 0)
+ xprt_set_bound(xprt);
xprt->port = xs_get_random_port();
+
+ xprt->prot = IPPROTO_TCP;
xprt->tsh_size = sizeof(rpc_fraghdr) / sizeof(u32);
xprt->resvport = capable(CAP_NET_BIND_SERVICE) ? 1 : 0;
xprt->max_payload = RPC_MAX_FRAGMENT_SIZE;
Move connection and bind state that was maintained in the rpc_clnt
structure to the rpc_xprt structure. This will allow the creation of
a clean API for plugging in different types of bind mechanisms.
It also eliminates the single global spinlock that serialized all
in-kernel RPC binding. A set of per-xprt bitops now serializes tasks
during RPC binding, just as is already done for RPC transport connects.
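The per-xprt bit lock works like this: the first task to test-and-set XPRT_BINDING performs the portmap call, and any task that loses the race sleeps on the transport's binding queue until the winner clears the bit. A single-threaded sketch with illustrative names (the kernel uses atomic test_and_set_bit() and rpc_sleep_on() here):

```c
/* Sketch of the per-transport bit lock replacing the global pmap_lock. */
#define XPRT_BINDING	5

struct demo_xprt {
	unsigned long state;
	int sleepers;		/* stand-in for the xprt->binding wait queue */
};

/* Set the bit; return its previous value (non-atomic demo version). */
static int xprt_test_and_set_binding(struct demo_xprt *xprt)
{
	int was_set = (xprt->state >> XPRT_BINDING) & 1;

	xprt->state |= 1UL << XPRT_BINDING;
	return was_set;
}

/* Returns 1 if this task won the right to perform the bind;
 * losers would rpc_sleep_on(&xprt->binding) in the kernel. */
static int try_start_bind(struct demo_xprt *xprt)
{
	if (xprt_test_and_set_binding(xprt)) {
		xprt->sleepers++;
		return 0;
	}
	return 1;
}
```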
Test-plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP. NFSv2/3 and NFSv4 mounting should be carefully checked.
Probably need to rig a server where certain services aren't running, or
that returns an error for some typical operation.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/clnt.h | 23 +-----
include/linux/sunrpc/xprt.h | 14 ++++
net/sunrpc/clnt.c | 8 --
net/sunrpc/pmap_clnt.c | 158 ++++++++++++++++++++++++++++---------------
net/sunrpc/xprt.c | 1
5 files changed, 123 insertions(+), 81 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index 8fe9f35..00e9dba 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -18,18 +18,6 @@ #include <linux/sunrpc/xdr.h>
#include <linux/sunrpc/timer.h>
#include <asm/signal.h>
-/*
- * This defines an RPC port mapping
- */
-struct rpc_portmap {
- __u32 pm_prog;
- __u32 pm_vers;
- __u32 pm_prot;
- __u16 pm_port;
- unsigned char pm_binding : 1; /* doing a getport() */
- struct rpc_wait_queue pm_bindwait; /* waiting on getport() */
-};
-
struct rpc_inode;
/*
@@ -40,7 +28,9 @@ struct rpc_clnt {
atomic_t cl_users; /* number of references */
struct rpc_xprt * cl_xprt; /* transport */
struct rpc_procinfo * cl_procinfo; /* procedure info */
- u32 cl_maxproc; /* max procedure number */
+ u32 cl_prog, /* RPC program number */
+ cl_vers, /* RPC version number */
+ cl_maxproc; /* max procedure number */
char * cl_server; /* server machine name */
char * cl_protname; /* protocol name */
@@ -55,7 +45,6 @@ struct rpc_clnt {
cl_dead : 1;/* abandoned */
struct rpc_rtt * cl_rtt; /* RTO estimator data */
- struct rpc_portmap * cl_pmap; /* port mapping */
int cl_nodelen; /* nodename length */
char cl_nodename[UNX_MAXNODENAME];
@@ -64,14 +53,8 @@ struct rpc_clnt {
struct dentry * cl_dentry; /* inode */
struct rpc_clnt * cl_parent; /* Points to parent of clones */
struct rpc_rtt cl_rtt_default;
- struct rpc_portmap cl_pmap_default;
char cl_inline_name[32];
};
-#define cl_timeout cl_xprt->timeout
-#define cl_prog cl_pmap->pm_prog
-#define cl_vers cl_pmap->pm_vers
-#define cl_port cl_pmap->pm_port
-#define cl_prot cl_pmap->pm_prot
/*
* General RPC program info
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index e65474f..445e554 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -138,6 +138,7 @@ struct rpc_xprt {
unsigned int tsh_size; /* size of transport specific
header */
+ struct rpc_wait_queue binding; /* requests waiting on rpcbind */
struct rpc_wait_queue sending; /* requests waiting to send */
struct rpc_wait_queue resend; /* requests waiting to resend */
struct rpc_wait_queue pending; /* requests in flight */
@@ -270,6 +271,7 @@ #define XPRT_CONNECTED (1)
#define XPRT_CONNECTING (2)
#define XPRT_CLOSE_WAIT (3)
#define XPRT_BOUND (4)
+#define XPRT_BINDING (5)
static inline void xprt_set_connected(struct rpc_xprt *xprt)
{
@@ -328,6 +330,18 @@ static inline void xprt_clear_bound(stru
clear_bit(XPRT_BOUND, &xprt->state);
}
+static inline void xprt_clear_binding(struct rpc_xprt *xprt)
+{
+ smp_mb__before_clear_bit();
+ clear_bit(XPRT_BINDING, &xprt->state);
+ smp_mb__after_clear_bit();
+}
+
+static inline int xprt_test_and_set_binding(struct rpc_xprt *xprt)
+{
+ return test_and_set_bit(XPRT_BINDING, &xprt->state);
+}
+
#endif /* __KERNEL__*/
#endif /* _LINUX_SUNRPC_XPRT_H */
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 4f353dd..87008ff 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -147,13 +147,10 @@ rpc_new_client(struct rpc_xprt *xprt, ch
clnt->cl_procinfo = version->procs;
clnt->cl_maxproc = version->nrprocs;
clnt->cl_protname = program->name;
- clnt->cl_pmap = &clnt->cl_pmap_default;
clnt->cl_prog = program->number;
clnt->cl_vers = version->number;
- clnt->cl_prot = xprt->prot;
clnt->cl_stats = program->stats;
clnt->cl_metrics = rpc_alloc_iostats(clnt);
- rpc_init_wait_queue(&clnt->cl_pmap_default.pm_bindwait, "bindwait");
if (!xprt_bound(clnt->cl_xprt))
clnt->cl_autobind = 1;
@@ -244,8 +241,6 @@ rpc_clone_client(struct rpc_clnt *clnt)
atomic_set(&new->cl_users, 0);
new->cl_parent = clnt;
atomic_inc(&clnt->cl_count);
- /* Duplicate portmapper */
- rpc_init_wait_queue(&new->cl_pmap_default.pm_bindwait, "bindwait");
/* Turn off autobind on clones */
new->cl_autobind = 0;
new->cl_oneshot = 0;
@@ -257,8 +252,7 @@ rpc_clone_client(struct rpc_clnt *clnt)
rpc_init_rtt(&new->cl_rtt_default, clnt->cl_xprt->timeout.to_initval);
if (new->cl_auth)
atomic_inc(&new->cl_auth->au_count);
- new->cl_pmap = &new->cl_pmap_default;
- new->cl_metrics = rpc_alloc_iostats(clnt);
+ new->cl_metrics = rpc_alloc_iostats(clnt);
return new;
out_no_clnt:
printk(KERN_INFO "RPC: out of memory in %s\n", __FUNCTION__);
diff --git a/net/sunrpc/pmap_clnt.c b/net/sunrpc/pmap_clnt.c
index 623180f..1dad361 100644
--- a/net/sunrpc/pmap_clnt.c
+++ b/net/sunrpc/pmap_clnt.c
@@ -24,11 +24,57 @@ #define PMAP_SET 1
#define PMAP_UNSET 2
#define PMAP_GETPORT 3
+struct portmap_args {
+ u32 pm_prog;
+ u32 pm_vers;
+ u32 pm_prot;
+ unsigned short pm_port;
+ struct rpc_task * pm_task;
+};
+
static struct rpc_procinfo pmap_procedures[];
static struct rpc_clnt * pmap_create(char *, struct sockaddr_in *, int, int);
-static void pmap_getport_done(struct rpc_task *);
+static void pmap_getport_done(struct rpc_task *, void *);
static struct rpc_program pmap_program;
-static DEFINE_SPINLOCK(pmap_lock);
+
+static void pmap_getport_prepare(struct rpc_task *task, void *calldata)
+{
+ struct portmap_args *map = calldata;
+ struct rpc_message msg = {
+ .rpc_proc = &pmap_procedures[PMAP_GETPORT],
+ .rpc_argp = map,
+ .rpc_resp = &map->pm_port,
+ };
+
+ rpc_call_setup(task, &msg, 0);
+}
+
+static inline struct portmap_args *pmap_map_alloc(void)
+{
+ return kmalloc(sizeof(struct portmap_args), GFP_NOFS);
+}
+
+static inline void pmap_map_free(struct portmap_args *map)
+{
+ kfree(map);
+}
+
+static void pmap_map_release(void *data)
+{
+ pmap_map_free(data);
+}
+
+static const struct rpc_call_ops pmap_getport_ops = {
+ .rpc_call_prepare = pmap_getport_prepare,
+ .rpc_call_done = pmap_getport_done,
+ .rpc_release = pmap_map_release,
+};
+
+static inline void pmap_wake_portmap_waiters(struct rpc_xprt *xprt)
+{
+ xprt_clear_binding(xprt);
+ rpc_wake_up(&xprt->binding);
+}
/*
* Obtain the port for a given RPC service on a given host. This one can
@@ -37,67 +83,71 @@ static DEFINE_SPINLOCK(pmap_lock);
void
rpc_getport(struct rpc_task *task, struct rpc_clnt *clnt)
{
- struct rpc_portmap *map = clnt->cl_pmap;
- struct sockaddr_in *sap = &clnt->cl_xprt->addr;
- struct rpc_message msg = {
- .rpc_proc = &pmap_procedures[PMAP_GETPORT],
- .rpc_argp = map,
- .rpc_resp = &clnt->cl_port,
- .rpc_cred = NULL
- };
+ struct rpc_xprt *xprt = task->tk_xprt;
+ struct sockaddr_in *sap = &xprt->addr;
+ struct portmap_args *map;
struct rpc_clnt *pmap_clnt;
- struct rpc_task *child;
+ struct rpc_task *child;
- dprintk("RPC: %4d rpc_getport(%s, %d, %d, %d)\n",
+ dprintk("RPC: %4d rpc_getport(%s, %u, %u, %d)\n",
task->tk_pid, clnt->cl_server,
- map->pm_prog, map->pm_vers, map->pm_prot);
+ clnt->cl_prog, clnt->cl_vers, xprt->prot);
/* Autobind on cloned rpc clients is discouraged */
BUG_ON(clnt->cl_parent != clnt);
- spin_lock(&pmap_lock);
- if (map->pm_binding) {
- rpc_sleep_on(&map->pm_bindwait, task, NULL, NULL);
- spin_unlock(&pmap_lock);
+ if (xprt_test_and_set_binding(xprt)) {
+ task->tk_status = -EACCES; /* tell caller to check again */
+ rpc_sleep_on(&xprt->binding, task, NULL, NULL);
return;
}
- map->pm_binding = 1;
- spin_unlock(&pmap_lock);
+
+ /* Someone else may have bound if we slept */
+ if (xprt_bound(xprt)) {
+ task->tk_status = 0;
+ goto bailout_nofree;
+ }
+
+ map = pmap_map_alloc();
+ if (!map) {
+ task->tk_status = -ENOMEM;
+ goto bailout_nofree;
+ }
+ map->pm_prog = clnt->cl_prog;
+ map->pm_vers = clnt->cl_vers;
+ map->pm_prot = xprt->prot;
+ map->pm_port = 0;
+ map->pm_task = task;
pmap_clnt = pmap_create(clnt->cl_server, sap, map->pm_prot, 0);
if (IS_ERR(pmap_clnt)) {
task->tk_status = PTR_ERR(pmap_clnt);
goto bailout;
}
- task->tk_status = 0;
- /*
- * Note: rpc_new_child will release client after a failure.
- */
- if (!(child = rpc_new_child(pmap_clnt, task)))
+ child = rpc_run_task(pmap_clnt, RPC_TASK_ASYNC, &pmap_getport_ops, map);
+ if (IS_ERR(child)) {
+ task->tk_status = -EIO;
goto bailout;
+ }
+ rpc_release_task(child);
- /* Setup the call info struct */
- rpc_call_setup(child, &msg, 0);
+ rpc_sleep_on(&xprt->binding, task, NULL, NULL);
- /* ... and run the child task */
task->tk_xprt->stat.bind_count++;
- rpc_run_child(task, child, pmap_getport_done);
return;
bailout:
- spin_lock(&pmap_lock);
- map->pm_binding = 0;
- rpc_wake_up(&map->pm_bindwait);
- spin_unlock(&pmap_lock);
- rpc_exit(task, -EIO);
+ pmap_map_free(map);
+bailout_nofree:
+ pmap_wake_portmap_waiters(xprt);
}
#ifdef CONFIG_ROOT_NFS
int
rpc_getport_external(struct sockaddr_in *sin, __u32 prog, __u32 vers, int prot)
{
- struct rpc_portmap map = {
+ struct portmap_args map = {
.pm_prog = prog,
.pm_vers = vers,
.pm_prot = prot,
@@ -133,30 +183,30 @@ rpc_getport_external(struct sockaddr_in
#endif
static void
-pmap_getport_done(struct rpc_task *task)
+pmap_getport_done(struct rpc_task *child, void *data)
{
- struct rpc_clnt *clnt = task->tk_client;
+ struct portmap_args *map = data;
+ struct rpc_task *task = map->pm_task;
struct rpc_xprt *xprt = task->tk_xprt;
- struct rpc_portmap *map = clnt->cl_pmap;
-
- dprintk("RPC: %4d pmap_getport_done(status %d, port %d)\n",
- task->tk_pid, task->tk_status, clnt->cl_port);
+ int status = child->tk_status;
xprt->ops->set_port(xprt, 0);
- if (task->tk_status < 0) {
- /* Make the calling task exit with an error */
- task->tk_action = rpc_exit_task;
- } else if (clnt->cl_port == 0) {
- /* Program not registered */
- rpc_exit(task, -EACCES);
+ if (status < 0) {
+ /* Portmapper not available */
+ task->tk_status = status;
+ } else if (map->pm_port == 0) {
+ /* Requested RPC service wasn't registered */
+ task->tk_status = -EACCES;
} else {
- xprt->ops->set_port(xprt, clnt->cl_port);
- clnt->cl_port = htons(clnt->cl_port);
+ /* Succeeded */
+ xprt->ops->set_port(xprt, map->pm_port);
+ task->tk_status = 0;
}
- spin_lock(&pmap_lock);
- map->pm_binding = 0;
- rpc_wake_up(&map->pm_bindwait);
- spin_unlock(&pmap_lock);
+
+ dprintk("RPC: %4d pmap_getport_done(status %d, port %u)\n",
+ child->tk_pid, child->tk_status, map->pm_port);
+
+ pmap_wake_portmap_waiters(xprt);
}
/*
@@ -170,7 +220,7 @@ rpc_register(u32 prog, u32 vers, int pro
.sin_family = AF_INET,
.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
};
- struct rpc_portmap map = {
+ struct portmap_args map = {
.pm_prog = prog,
.pm_vers = vers,
.pm_prot = prot,
@@ -236,7 +286,7 @@ pmap_create(char *hostname, struct socka
* XDR encode/decode functions for PMAP
*/
static int
-xdr_encode_mapping(struct rpc_rqst *req, u32 *p, struct rpc_portmap *map)
+xdr_encode_mapping(struct rpc_rqst *req, u32 *p, struct portmap_args *map)
{
dprintk("RPC: xdr_encode_mapping(%d, %d, %d, %d)\n",
map->pm_prog, map->pm_vers, map->pm_prot, map->pm_port);
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 10ba1f6..e35444e 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -932,6 +932,7 @@ static struct rpc_xprt *xprt_setup(int p
xprt->last_used = jiffies;
xprt->cwnd = RPC_INITCWND;
+ rpc_init_wait_queue(&xprt->binding, "xprt_binding");
rpc_init_wait_queue(&xprt->pending, "xprt_pending");
rpc_init_wait_queue(&xprt->sending, "xprt_sending");
rpc_init_wait_queue(&xprt->resend, "xprt_resend");
Introduce a clean transport switch API for plugging in different types of
rpcbind mechanisms. For instance, rpcbind can cleanly replace the
existing portmapper client, or a transport can choose to implement RPC
binding any way it likes.
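The hook is just a function pointer in the transport's ops table, which generic code invokes without knowing which binding protocol answers. A hedged sketch of the idea; the types, names, and the port value below are illustrative stand-ins, not the kernel's:

```c
/* Sketch of the transport-switch rpcbind hook: each transport fills in
 * an ops table, and call_bind()-style generic code goes through
 * ops->rpcbind() rather than calling a specific binder directly. */
struct demo_xprt_ops {
	void (*rpcbind)(unsigned short *port);
};

/* A stand-in binder that "discovers" the NFS port. */
static void demo_pmap_getport(unsigned short *port)
{
	*port = 2049;
}

static const struct demo_xprt_ops demo_udp_ops = {
	.rpcbind = demo_pmap_getport,
};

/* Generic code: bind via whatever mechanism the transport registered. */
static unsigned short demo_bind(const struct demo_xprt_ops *ops)
{
	unsigned short port = 0;

	ops->rpcbind(&port);
	return port;
}
```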
Test plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP. NFSv2/3 and NFSv4 mounting should be carefully checked.
Probably need to rig a server where certain services aren't running, or
that returns an error for some typical operation.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/clnt.h | 2 +-
include/linux/sunrpc/xprt.h | 1 +
net/sunrpc/clnt.c | 3 +--
net/sunrpc/pmap_clnt.c | 4 ++--
net/sunrpc/xprtsock.c | 2 ++
5 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index 00e9dba..2e68ac0 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -106,7 +106,7 @@ struct rpc_clnt *rpc_clone_client(struct
int rpc_shutdown_client(struct rpc_clnt *);
int rpc_destroy_client(struct rpc_clnt *);
void rpc_release_client(struct rpc_clnt *);
-void rpc_getport(struct rpc_task *, struct rpc_clnt *);
+void rpc_getport(struct rpc_task *);
int rpc_register(u32, u32, int, unsigned short, int *);
void rpc_call_setup(struct rpc_task *, struct rpc_message *, int);
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 445e554..2c4d6c8 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -105,6 +105,7 @@ struct rpc_xprt_ops {
void (*set_buffer_size)(struct rpc_xprt *xprt, size_t sndsize, size_t rcvsize);
int (*reserve_xprt)(struct rpc_task *task);
void (*release_xprt)(struct rpc_xprt *xprt, struct rpc_task *task);
+ void (*rpcbind)(struct rpc_task *task);
void (*set_port)(struct rpc_xprt *xprt, unsigned short port);
void (*connect)(struct rpc_task *task);
void * (*buf_alloc)(struct rpc_task *task, size_t size);
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 87008ff..bff350e 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -777,7 +777,6 @@ call_encode(struct rpc_task *task)
static void
call_bind(struct rpc_task *task)
{
- struct rpc_clnt *clnt = task->tk_client;
struct rpc_xprt *xprt = task->tk_xprt;
dprintk("RPC: %4d call_bind (status %d)\n",
@@ -787,7 +786,7 @@ call_bind(struct rpc_task *task)
if (!xprt_bound(xprt)) {
task->tk_action = call_bind_status;
task->tk_timeout = xprt->bind_timeout;
- rpc_getport(task, clnt);
+ xprt->ops->rpcbind(task);
}
}
diff --git a/net/sunrpc/pmap_clnt.c b/net/sunrpc/pmap_clnt.c
index 1f80ad1..66f18d0 100644
--- a/net/sunrpc/pmap_clnt.c
+++ b/net/sunrpc/pmap_clnt.c
@@ -81,13 +81,13 @@ static inline void pmap_wake_portmap_wai
/**
* rpc_getport - obtain the port for a given RPC service on a given host
* @task: task that is waiting for portmapper request
- * @clnt: controlling rpc_clnt
*
* This one can be called for an ongoing RPC request, and can be used in
* an async (rpciod) context.
*/
-void rpc_getport(struct rpc_task *task, struct rpc_clnt *clnt)
+void rpc_getport(struct rpc_task *task)
{
+ struct rpc_clnt *clnt = task->tk_client;
struct rpc_xprt *xprt = task->tk_xprt;
struct sockaddr_in *sap = &xprt->addr;
struct portmap_args *map;
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 43b59c2..159d591 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1264,6 +1264,7 @@ static struct rpc_xprt_ops xs_udp_ops =
.set_buffer_size = xs_udp_set_buffer_size,
.reserve_xprt = xprt_reserve_xprt_cong,
.release_xprt = xprt_release_xprt_cong,
+ .rpcbind = rpc_getport,
.set_port = xs_set_port,
.connect = xs_connect,
.buf_alloc = rpc_malloc,
@@ -1280,6 +1281,7 @@ static struct rpc_xprt_ops xs_udp_ops =
static struct rpc_xprt_ops xs_tcp_ops = {
.reserve_xprt = xprt_reserve_xprt,
.release_xprt = xs_tcp_release_xprt,
+ .rpcbind = rpc_getport,
.set_port = xs_set_port,
.connect = xs_connect,
.buf_alloc = rpc_malloc,
Hide the details of how the RPC client stores remote peer addresses from
the Network Lock Manager.
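The accessor pattern used here can be sketched as follows. This is an assumed userspace analog of rpc_peeraddr(): the caller hands in a generic sockaddr buffer and the client copies its stored peer address into it, so lockd never dereferences xprt->addr itself. `struct demo_clnt` and `demo_rpc_peeraddr` are hypothetical names:

```c
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

struct demo_clnt {
	struct sockaddr_in addr;	/* private peer address */
};

/* Copy at most bufsize bytes of the peer address into the caller's
 * generic buffer; return the number of bytes copied. */
static size_t demo_rpc_peeraddr(struct demo_clnt *clnt,
				struct sockaddr *buf, size_t bufsize)
{
	size_t n = sizeof(clnt->addr);

	if (n > bufsize)
		n = bufsize;
	memcpy(buf, &clnt->addr, n);
	return n;
}
```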
Test plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/lockd/clntproc.c | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index 89ba0df..50dbb67 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -151,11 +151,13 @@ static void nlmclnt_release_lockargs(str
int
nlmclnt_proc(struct inode *inode, int cmd, struct file_lock *fl)
{
+ struct rpc_clnt *client = NFS_CLIENT(inode);
+ struct sockaddr_in addr;
struct nlm_host *host;
struct nlm_rqst *call;
sigset_t oldset;
unsigned long flags;
- int status, proto, vers;
+ int status, vers;
vers = (NFS_PROTO(inode)->version == 3) ? 4 : 1;
if (NFS_PROTO(inode)->version > 3) {
@@ -163,10 +165,8 @@ nlmclnt_proc(struct inode *inode, int cm
return -ENOLCK;
}
- /* Retrieve transport protocol from NFS client */
- proto = NFS_CLIENT(inode)->cl_xprt->prot;
-
- host = nlmclnt_lookup_host(NFS_ADDR(inode), proto, vers);
+ rpc_peeraddr(client, (struct sockaddr *) &addr, sizeof(addr));
+ host = nlmclnt_lookup_host(&addr, client->cl_xprt->prot, vers);
if (host == NULL)
return -ENOLCK;
Add comments for external functions, use modern function definition style,
and fix up dprintk formatting.
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/pmap_clnt.c | 70 +++++++++++++++++++++++++++++-------------------
1 files changed, 42 insertions(+), 28 deletions(-)
diff --git a/net/sunrpc/pmap_clnt.c b/net/sunrpc/pmap_clnt.c
index 1dad361..1f80ad1 100644
--- a/net/sunrpc/pmap_clnt.c
+++ b/net/sunrpc/pmap_clnt.c
@@ -1,7 +1,9 @@
/*
- * linux/net/sunrpc/pmap.c
+ * linux/net/sunrpc/pmap_clnt.c
*
- * Portmapper client.
+ * In-kernel RPC portmapper client.
+ *
+ * Portmapper supports version 2 of the rpcbind protocol (RFC 1833).
*
* Copyright (C) 1996, Olaf Kirch <[email protected]>
*/
@@ -76,12 +78,15 @@ static inline void pmap_wake_portmap_wai
rpc_wake_up(&xprt->binding);
}
-/*
- * Obtain the port for a given RPC service on a given host. This one can
- * be called for an ongoing RPC request.
+/**
+ * rpc_getport - obtain the port for a given RPC service on a given host
+ * @task: task that is waiting for portmapper request
+ * @clnt: controlling rpc_clnt
+ *
+ * This one can be called for an ongoing RPC request, and can be used in
+ * an async (rpciod) context.
*/
-void
-rpc_getport(struct rpc_task *task, struct rpc_clnt *clnt)
+void rpc_getport(struct rpc_task *task, struct rpc_clnt *clnt)
{
struct rpc_xprt *xprt = task->tk_xprt;
struct sockaddr_in *sap = &xprt->addr;
@@ -144,8 +149,16 @@ bailout_nofree:
}
#ifdef CONFIG_ROOT_NFS
-int
-rpc_getport_external(struct sockaddr_in *sin, __u32 prog, __u32 vers, int prot)
+/**
+ * rpc_getport_external - obtain the port for a given RPC service on a given host
+ * @sin: address of remote peer
+ * @prog: RPC program number to bind
+ * @vers: RPC version number to bind
+ * @prot: transport protocol to use to make this request
+ *
+ * This one is called from outside the RPC client in a synchronous task context.
+ */
+int rpc_getport_external(struct sockaddr_in *sin, __u32 prog, __u32 vers, int prot)
{
struct portmap_args map = {
.pm_prog = prog,
@@ -162,7 +175,7 @@ rpc_getport_external(struct sockaddr_in
char hostname[32];
int status;
- dprintk("RPC: rpc_getport_external(%u.%u.%u.%u, %d, %d, %d)\n",
+ dprintk("RPC: rpc_getport_external(%u.%u.%u.%u, %u, %u, %d)\n",
NIPQUAD(sin->sin_addr.s_addr), prog, vers, prot);
sprintf(hostname, "%u.%u.%u.%u", NIPQUAD(sin->sin_addr.s_addr));
@@ -182,8 +195,10 @@ rpc_getport_external(struct sockaddr_in
}
#endif
-static void
-pmap_getport_done(struct rpc_task *child, void *data)
+/*
+ * Portmapper child task invokes this callback via tk_exit.
+ */
+static void pmap_getport_done(struct rpc_task *child, void *data)
{
struct portmap_args *map = data;
struct rpc_task *task = map->pm_task;
@@ -209,12 +224,17 @@ pmap_getport_done(struct rpc_task *child
pmap_wake_portmap_waiters(xprt);
}
-/*
- * Set or unset a port registration with the local portmapper.
+/**
+ * rpc_register - set or unset a port registration with the local portmapper
+ * @prog: RPC program number to bind
+ * @vers: RPC version number to bind
+ * @prot: transport protocol to use to make this request
+ * @port: port value to register
+ * @okay: result code
+ *
* port == 0 means unregister, port != 0 means register.
*/
-int
-rpc_register(u32 prog, u32 vers, int prot, unsigned short port, int *okay)
+int rpc_register(u32 prog, u32 vers, int prot, unsigned short port, int *okay)
{
struct sockaddr_in sin = {
.sin_family = AF_INET,
@@ -234,7 +254,7 @@ rpc_register(u32 prog, u32 vers, int pro
struct rpc_clnt *pmap_clnt;
int error = 0;
- dprintk("RPC: registering (%d, %d, %d, %d) with portmapper.\n",
+ dprintk("RPC: registering (%u, %u, %d, %u) with portmapper.\n",
prog, vers, prot, port);
pmap_clnt = pmap_create("localhost", &sin, IPPROTO_UDP, 1);
@@ -257,13 +277,11 @@ rpc_register(u32 prog, u32 vers, int pro
return error;
}
-static struct rpc_clnt *
-pmap_create(char *hostname, struct sockaddr_in *srvaddr, int proto, int privileged)
+static struct rpc_clnt *pmap_create(char *hostname, struct sockaddr_in *srvaddr, int proto, int privileged)
{
struct rpc_xprt *xprt;
struct rpc_clnt *clnt;
- /* printk("pmap: create xprt\n"); */
xprt = xprt_create_proto(proto, srvaddr, NULL);
if (IS_ERR(xprt))
return (struct rpc_clnt *)xprt;
@@ -271,7 +289,6 @@ pmap_create(char *hostname, struct socka
if (!privileged)
xprt->resvport = 0;
- /* printk("pmap: create clnt\n"); */
clnt = rpc_new_client(xprt, hostname,
&pmap_program, RPC_PMAP_VERSION,
RPC_AUTH_UNIX);
@@ -285,10 +302,9 @@ pmap_create(char *hostname, struct socka
/*
* XDR encode/decode functions for PMAP
*/
-static int
-xdr_encode_mapping(struct rpc_rqst *req, u32 *p, struct portmap_args *map)
+static int xdr_encode_mapping(struct rpc_rqst *req, u32 *p, struct portmap_args *map)
{
- dprintk("RPC: xdr_encode_mapping(%d, %d, %d, %d)\n",
+ dprintk("RPC: xdr_encode_mapping(%u, %u, %u, %u)\n",
map->pm_prog, map->pm_vers, map->pm_prot, map->pm_port);
*p++ = htonl(map->pm_prog);
*p++ = htonl(map->pm_vers);
@@ -299,15 +315,13 @@ xdr_encode_mapping(struct rpc_rqst *req,
return 0;
}
-static int
-xdr_decode_port(struct rpc_rqst *req, u32 *p, unsigned short *portp)
+static int xdr_decode_port(struct rpc_rqst *req, u32 *p, unsigned short *portp)
{
*portp = (unsigned short) ntohl(*p++);
return 0;
}
-static int
-xdr_decode_bool(struct rpc_rqst *req, u32 *p, unsigned int *boolp)
+static int xdr_decode_bool(struct rpc_rqst *req, u32 *p, unsigned int *boolp)
{
*boolp = (unsigned int) ntohl(*p++);
return 0;
-------------------------------------------------------------------------
The previous patches removed the last user of RPC child tasks, so we can
remove support for child tasks from net/sunrpc/sched.c now.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/sched.h | 5 ---
net/sunrpc/sched.c | 82 ------------------------------------------
2 files changed, 0 insertions(+), 87 deletions(-)
diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
index 82a91bb..f399c13 100644
--- a/include/linux/sunrpc/sched.h
+++ b/include/linux/sunrpc/sched.h
@@ -127,7 +127,6 @@ struct rpc_call_ops {
*/
#define RPC_TASK_ASYNC 0x0001 /* is an async task */
#define RPC_TASK_SWAPPER 0x0002 /* is swapping in/out */
-#define RPC_TASK_CHILD 0x0008 /* is child of other task */
#define RPC_CALL_MAJORSEEN 0x0020 /* major timeout seen */
#define RPC_TASK_ROOTCREDS 0x0040 /* force root creds */
#define RPC_TASK_DYNAMIC 0x0080 /* task was kmalloc'ed */
@@ -136,7 +135,6 @@ #define RPC_TASK_SOFT 0x0200 /* Use so
#define RPC_TASK_NOINTR 0x0400 /* uninterruptible task */
#define RPC_IS_ASYNC(t) ((t)->tk_flags & RPC_TASK_ASYNC)
-#define RPC_IS_CHILD(t) ((t)->tk_flags & RPC_TASK_CHILD)
#define RPC_IS_SWAPPER(t) ((t)->tk_flags & RPC_TASK_SWAPPER)
#define RPC_DO_ROOTOVERRIDE(t) ((t)->tk_flags & RPC_TASK_ROOTCREDS)
#define RPC_ASSASSINATED(t) ((t)->tk_flags & RPC_TASK_KILLED)
@@ -253,7 +251,6 @@ struct rpc_task *rpc_new_task(struct rpc
const struct rpc_call_ops *ops, void *data);
struct rpc_task *rpc_run_task(struct rpc_clnt *clnt, int flags,
const struct rpc_call_ops *ops, void *data);
-struct rpc_task *rpc_new_child(struct rpc_clnt *, struct rpc_task *parent);
void rpc_init_task(struct rpc_task *task, struct rpc_clnt *clnt,
int flags, const struct rpc_call_ops *ops,
void *data);
@@ -261,8 +258,6 @@ void rpc_release_task(struct rpc_task *
void rpc_exit_task(struct rpc_task *);
void rpc_killall_tasks(struct rpc_clnt *);
int rpc_execute(struct rpc_task *);
-void rpc_run_child(struct rpc_task *parent, struct rpc_task *child,
- rpc_action action);
void rpc_init_priority_wait_queue(struct rpc_wait_queue *, const char *);
void rpc_init_wait_queue(struct rpc_wait_queue *, const char *);
void rpc_sleep_on(struct rpc_wait_queue *, struct rpc_task *,
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 5c3eee7..015ffe4 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -45,12 +45,6 @@ static void rpciod_killall(void);
static void rpc_async_schedule(void *);
/*
- * RPC tasks that create another task (e.g. for contacting the portmapper)
- * will wait on this queue for their child's completion
- */
-static RPC_WAITQ(childq, "childq");
-
-/*
* RPC tasks sit here while waiting for conditions to improve.
*/
static RPC_WAITQ(delay_queue, "delayq");
@@ -324,16 +318,6 @@ static void rpc_make_runnable(struct rpc
}
/*
- * Place a newly initialized task on the workqueue.
- */
-static inline void
-rpc_schedule_run(struct rpc_task *task)
-{
- rpc_set_active(task);
- rpc_make_runnable(task);
-}
-
-/*
* Prepare for sleeping on a wait queue.
* By always appending tasks to the list we ensure FIFO behavior.
* NB: An RPC task will only receive interrupt-driven events as long
@@ -933,72 +917,6 @@ struct rpc_task *rpc_run_task(struct rpc
}
EXPORT_SYMBOL(rpc_run_task);
-/**
- * rpc_find_parent - find the parent of a child task.
- * @child: child task
- * @parent: parent task
- *
- * Checks that the parent task is still sleeping on the
- * queue 'childq'. If so returns a pointer to the parent.
- * Upon failure returns NULL.
- *
- * Caller must hold childq.lock
- */
-static inline struct rpc_task *rpc_find_parent(struct rpc_task *child, struct rpc_task *parent)
-{
- struct rpc_task *task;
- struct list_head *le;
-
- task_for_each(task, le, &childq.tasks[0])
- if (task == parent)
- return parent;
-
- return NULL;
-}
-
-static void rpc_child_exit(struct rpc_task *child, void *calldata)
-{
- struct rpc_task *parent;
-
- spin_lock_bh(&childq.lock);
- if ((parent = rpc_find_parent(child, calldata)) != NULL) {
- parent->tk_status = child->tk_status;
- __rpc_wake_up_task(parent);
- }
- spin_unlock_bh(&childq.lock);
-}
-
-static const struct rpc_call_ops rpc_child_ops = {
- .rpc_call_done = rpc_child_exit,
-};
-
-/*
- * Note: rpc_new_task releases the client after a failure.
- */
-struct rpc_task *
-rpc_new_child(struct rpc_clnt *clnt, struct rpc_task *parent)
-{
- struct rpc_task *task;
-
- task = rpc_new_task(clnt, RPC_TASK_ASYNC | RPC_TASK_CHILD, &rpc_child_ops, parent);
- if (!task)
- goto fail;
- return task;
-
-fail:
- parent->tk_status = -ENOMEM;
- return NULL;
-}
-
-void rpc_run_child(struct rpc_task *task, struct rpc_task *child, rpc_action func)
-{
- spin_lock_bh(&childq.lock);
- /* N.B. Is it possible for the child to have already finished? */
- __rpc_sleep_on(&childq, task, func, NULL);
- rpc_schedule_run(child);
- spin_unlock_bh(&childq.lock);
-}
-
/*
* Kill all tasks for the given client.
* XXX: kill their descendants as well?
-------------------------------------------------------------------------
Provide an API for retrieving the remote peer address without allowing
direct access to the rpc_xprt struct.
Test-plan:
Compile kernel with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/clnt.h | 1 +
net/sunrpc/clnt.c | 21 +++++++++++++++++++++
2 files changed, 22 insertions(+), 0 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index 2e68ac0..65196b0 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -123,6 +123,7 @@ void rpc_setbufsize(struct rpc_clnt *,
size_t rpc_max_payload(struct rpc_clnt *);
void rpc_force_rebind(struct rpc_clnt *);
int rpc_ping(struct rpc_clnt *clnt, int flags);
+size_t rpc_peeraddr(struct rpc_clnt *, struct sockaddr *, size_t);
/*
* Helper function for NFSroot support
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index bff350e..da377eb 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -536,6 +536,27 @@ rpc_call_setup(struct rpc_task *task, st
task->tk_action = rpc_exit_task;
}
+/**
+ * rpc_peeraddr - extract remote peer address from clnt's xprt
+ * @clnt: RPC client structure
+ * @buf: target buffer
+ * @bufsize: length of target buffer
+ *
+ * Returns the number of bytes that are actually in the stored address.
+ */
+size_t rpc_peeraddr(struct rpc_clnt *clnt, struct sockaddr *buf, size_t bufsize)
+{
+ size_t bytes;
+ struct rpc_xprt *xprt = clnt->cl_xprt;
+
+ bytes = sizeof(xprt->addr);
+ if (bytes > bufsize)
+ bytes = bufsize;
+ memcpy(buf, &clnt->cl_xprt->addr, bytes);
+ return sizeof(xprt->addr);
+}
+EXPORT_SYMBOL(rpc_peeraddr);
+
void
rpc_setbufsize(struct rpc_clnt *clnt, unsigned int sndsize, unsigned int rcvsize)
{
-------------------------------------------------------------------------
On Wed, Aug 09, 2006 at 10:58:51AM -0400, Chuck Lever wrote:
> Some hardware uses port 664 for its hardware-based IPMI listener. Teach
> the RPC client to avoid using that port by raising the default minimum port
> number to 665.
>
> Test plan:
> Find a mainboard known to use port 664 for IPMI; enable IPMI; mount NFS
> servers in a tight loop.
I think this should go into 2.6.18.
-------------------------------------------------------------------------
Replace the xprt_create_proto/rpc_new_client calls in the NFS server's
callback functions with the new rpc_create() API.
Test plan:
NFSv4 delegation functionality tests.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfsd/nfs4callback.c | 66 ++++++++++++++++++++----------------------------
1 files changed, 27 insertions(+), 39 deletions(-)
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 54b37b1..8583d99 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -375,16 +375,28 @@ nfsd4_probe_callback(struct nfs4_client
{
struct sockaddr_in addr;
struct nfs4_callback *cb = &clp->cl_callback;
- struct rpc_timeout timeparms;
- struct rpc_xprt * xprt;
+ struct rpc_timeout timeparms = {
+ .to_initval = (NFSD_LEASE_TIME/4) * HZ,
+ .to_retries = 5,
+ .to_maxval = (NFSD_LEASE_TIME/2) * HZ,
+ .to_exponential = 1,
+ };
struct rpc_program * program = &cb->cb_program;
- struct rpc_stat * stat = &cb->cb_stat;
- struct rpc_clnt * clnt;
+ struct rpc_create_args args = {
+ .protocol = IPPROTO_TCP,
+ .address = (struct sockaddr *)&addr,
+ .addrsize = sizeof(addr),
+ .timeout = &timeparms,
+ .servername = clp->cl_name.data,
+ .program = program,
+ .version = nfs_cb_version[1]->number,
+ .authflavor = RPC_AUTH_UNIX, /* XXX: need AUTH_GSS... */
+ .flags = (RPC_CLNT_CREATE_NOPING),
+ };
struct rpc_message msg = {
.rpc_proc = &nfs4_cb_procedures[NFSPROC4_CLNT_CB_NULL],
.rpc_argp = clp,
};
- char hostname[32];
int status;
if (atomic_read(&cb->cb_set))
@@ -396,51 +408,27 @@ nfsd4_probe_callback(struct nfs4_client
addr.sin_port = htons(cb->cb_port);
addr.sin_addr.s_addr = htonl(cb->cb_addr);
- /* Initialize timeout */
- timeparms.to_initval = (NFSD_LEASE_TIME/4) * HZ;
- timeparms.to_retries = 0;
- timeparms.to_maxval = (NFSD_LEASE_TIME/2) * HZ;
- timeparms.to_exponential = 1;
-
- /* Create RPC transport */
- xprt = xprt_create_proto(IPPROTO_TCP, &addr, &timeparms);
- if (IS_ERR(xprt)) {
- dprintk("NFSD: couldn't create callback transport!\n");
- goto out_err;
- }
-
/* Initialize rpc_program */
program->name = "nfs4_cb";
program->number = cb->cb_prog;
program->nrvers = ARRAY_SIZE(nfs_cb_version);
program->version = nfs_cb_version;
- program->stats = stat;
+ program->stats = &cb->cb_stat;
/* Initialize rpc_stat */
- memset(stat, 0, sizeof(struct rpc_stat));
- stat->program = program;
-
- /* Create RPC client
- *
- * XXX AUTH_UNIX only - need AUTH_GSS....
- */
- sprintf(hostname, "%u.%u.%u.%u", NIPQUAD(addr.sin_addr.s_addr));
- clnt = rpc_new_client(xprt, hostname, program, 1, RPC_AUTH_UNIX);
- if (IS_ERR(clnt)) {
+ memset(program->stats, 0, sizeof(cb->cb_stat));
+ program->stats->program = program;
+
+ /* Create RPC client */
+ cb->cb_client = rpc_create(&args);
+ if (IS_ERR(cb->cb_client)) {
+ cb->cb_client = NULL;
dprintk("NFSD: couldn't create callback client\n");
goto out_err;
}
- clnt->cl_intr = 0;
- clnt->cl_softrtry = 1;
/* Kick rpciod, put the call on the wire. */
-
- if (rpciod_up() != 0) {
- dprintk("nfsd: couldn't start rpciod for callbacks!\n");
+ if (rpciod_up() != 0)
goto out_clnt;
- }
-
- cb->cb_client = clnt;
/* the task holds a reference to the nfs4_client struct */
atomic_inc(&clp->cl_count);
@@ -448,7 +436,7 @@ nfsd4_probe_callback(struct nfs4_client
msg.rpc_cred = nfsd4_lookupcred(clp,0);
if (IS_ERR(msg.rpc_cred))
goto out_rpciod;
- status = rpc_call_async(clnt, &msg, RPC_TASK_ASYNC, &nfs4_cb_null_ops, NULL);
+ status = rpc_call_async(cb->cb_client, &msg, RPC_TASK_ASYNC, &nfs4_cb_null_ops, NULL);
put_rpccred(msg.rpc_cred);
if (status != 0) {
@@ -462,7 +450,7 @@ out_rpciod:
rpciod_down();
cb->cb_client = NULL;
out_clnt:
- rpc_shutdown_client(clnt);
+ rpc_shutdown_client(cb->cb_client);
out_err:
dprintk("NFSD: warning: no callback path to client %.*s\n",
(int)clp->cl_name.len, clp->cl_name.data);
-------------------------------------------------------------------------
Replace the xprt_create_proto/rpc_new_client calls in pmap_clnt.c with the
new rpc_create() API.
Test plan:
Repeated runs of Connectathon locking suite. Check network trace for
proper PMAP calls and replies.
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/pmap_clnt.c | 30 ++++++++++++++----------------
1 files changed, 14 insertions(+), 16 deletions(-)
diff --git a/net/sunrpc/pmap_clnt.c b/net/sunrpc/pmap_clnt.c
index 689333f..0fc45eb 100644
--- a/net/sunrpc/pmap_clnt.c
+++ b/net/sunrpc/pmap_clnt.c
@@ -279,24 +279,22 @@ int rpc_register(u32 prog, u32 vers, int
static struct rpc_clnt *pmap_create(char *hostname, struct sockaddr_in *srvaddr, int proto, int privileged)
{
- struct rpc_xprt *xprt;
- struct rpc_clnt *clnt;
+ struct rpc_create_args args = {
+ .protocol = proto,
+ .address = (struct sockaddr *)srvaddr,
+ .addrsize = sizeof(*srvaddr),
+ .servername = hostname,
+ .program = &pmap_program,
+ .version = RPC_PMAP_VERSION,
+ .authflavor = RPC_AUTH_UNIX,
+ .flags = (RPC_CLNT_CREATE_ONESHOT |
+ RPC_CLNT_CREATE_NOPING),
+ };
- xprt = xprt_create_proto(proto, srvaddr, NULL);
- if (IS_ERR(xprt))
- return (struct rpc_clnt *)xprt;
- xprt->ops->set_port(xprt, RPC_PMAP_PORT);
+ srvaddr->sin_port = htons(RPC_PMAP_PORT);
if (!privileged)
- xprt->resvport = 0;
-
- clnt = rpc_new_client(xprt, hostname,
- &pmap_program, RPC_PMAP_VERSION,
- RPC_AUTH_UNIX);
- if (!IS_ERR(clnt)) {
- clnt->cl_softrtry = 1;
- clnt->cl_oneshot = 1;
- }
- return clnt;
+ args.flags |= RPC_CLNT_CREATE_NONPRIVPORT;
+ return rpc_create(&args);
}
/*
-------------------------------------------------------------------------
Provide an API for formatting the remote peer address for printing without
exposing its internal structure. The address could be dynamic, so we
support a function call to get the address rather than reading it straight
out of a structure.
Test-plan:
Destructive testing (unplugging the network temporarily). Probably need
to rig a server where certain services aren't running, or that returns an
error for some typical operation.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/clnt.h | 1 +
net/sunrpc/clnt.c | 13 +++++++++++++
2 files changed, 14 insertions(+), 0 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index 65196b0..b7d47f0 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -124,6 +124,7 @@ size_t rpc_max_payload(struct rpc_clnt
void rpc_force_rebind(struct rpc_clnt *);
int rpc_ping(struct rpc_clnt *clnt, int flags);
size_t rpc_peeraddr(struct rpc_clnt *, struct sockaddr *, size_t);
+char * rpc_peeraddr2str(struct rpc_clnt *, enum rpc_display_format_t);
/*
* Helper function for NFSroot support
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index da377eb..8f21e47 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -557,6 +557,19 @@ size_t rpc_peeraddr(struct rpc_clnt *cln
}
EXPORT_SYMBOL(rpc_peeraddr);
+/**
+ * rpc_peeraddr2str - return remote peer address in printable format
+ * @clnt: RPC client structure
+ * @format: address format
+ *
+ */
+char *rpc_peeraddr2str(struct rpc_clnt *clnt, enum rpc_display_format_t format)
+{
+ struct rpc_xprt *xprt = clnt->cl_xprt;
+ return xprt->ops->print_addr(xprt, format);
+}
+EXPORT_SYMBOL(rpc_peeraddr2str);
+
void
rpc_setbufsize(struct rpc_clnt *clnt, unsigned int sndsize, unsigned int rcvsize)
{
-------------------------------------------------------------------------
Hide the details of how the RPC client stores remote peer addresses from
the RPC portmapper.
Test plan:
Destructive testing (unplugging the network temporarily). Connectathon
with UDP and TCP. NFSv2/3 and NFSv4 mounting should be carefully checked.
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/pmap_clnt.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/pmap_clnt.c b/net/sunrpc/pmap_clnt.c
index 66f18d0..ff0b92c 100644
--- a/net/sunrpc/pmap_clnt.c
+++ b/net/sunrpc/pmap_clnt.c
@@ -89,7 +89,7 @@ void rpc_getport(struct rpc_task *task)
{
struct rpc_clnt *clnt = task->tk_client;
struct rpc_xprt *xprt = task->tk_xprt;
- struct sockaddr_in *sap = &xprt->addr;
+ struct sockaddr_in addr;
struct portmap_args *map;
struct rpc_clnt *pmap_clnt;
struct rpc_task *child;
@@ -124,7 +124,8 @@ void rpc_getport(struct rpc_task *task)
map->pm_port = 0;
map->pm_task = task;
- pmap_clnt = pmap_create(clnt->cl_server, sap, map->pm_prot, 0);
+ rpc_peeraddr(clnt, (struct sockaddr *) &addr, sizeof(addr));
+ pmap_clnt = pmap_create(clnt->cl_server, &addr, map->pm_prot, 0);
if (IS_ERR(pmap_clnt)) {
task->tk_status = PTR_ERR(pmap_clnt);
goto bailout;
-------------------------------------------------------------------------
IPv6 addresses are big: 128 bits for the address itself, and the struct
sockaddr_storage that holds one is 128 bytes. Now that no RPC client consumer
treats the addr field in the rpc_xprt struct as anything but opaque, accessing
it only via the API calls, we can safely widen the field to accommodate
larger addresses.
Test plan:
Compile kernel with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/xprt.h | 3 ++-
net/sunrpc/clnt.c | 2 +-
net/sunrpc/xprt.c | 3 ++-
net/sunrpc/xprtsock.c | 15 ++++++++++-----
4 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 299613b..2cbd689 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -134,7 +134,8 @@ struct rpc_xprt {
struct sock * inet; /* INET layer */
struct rpc_timeout timeout; /* timeout parms */
- struct sockaddr_in addr; /* server address */
+ struct sockaddr_storage addr; /* server address */
+ size_t addrlen; /* size of server address */
int prot; /* IP protocol */
unsigned long cong; /* current congestion */
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 8f21e47..742cb1e 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -553,7 +553,7 @@ size_t rpc_peeraddr(struct rpc_clnt *cln
if (bytes > bufsize)
bytes = bufsize;
memcpy(buf, &clnt->cl_xprt->addr, bytes);
- return sizeof(xprt->addr);
+ return xprt->addrlen;
}
EXPORT_SYMBOL(rpc_peeraddr);
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index e35444e..dcc0bd7 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -900,7 +900,8 @@ static struct rpc_xprt *xprt_setup(int p
if ((xprt = kzalloc(sizeof(struct rpc_xprt), GFP_KERNEL)) == NULL)
return ERR_PTR(-ENOMEM);
- xprt->addr = *ap;
+ memcpy(&xprt->addr, ap, sizeof(*ap));
+ xprt->addrlen = sizeof(*ap);
switch (proto) {
case IPPROTO_UDP:
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 692be74..ababfe9 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -335,7 +335,7 @@ static int xs_udp_send_request(struct rp
req->rq_xtime = jiffies;
status = xs_sendpages(xprt->sock, (struct sockaddr *) &xprt->addr,
- sizeof(xprt->addr), xdr, req->rq_bytes_sent);
+ xprt->addrlen, xdr, req->rq_bytes_sent);
dprintk("RPC: xs_udp_send_request(%u) = %d\n",
xdr->len - req->rq_bytes_sent, status);
@@ -1020,8 +1020,11 @@ static char *xs_print_peer_address(struc
*/
static void xs_set_port(struct rpc_xprt *xprt, unsigned short port)
{
+ struct sockaddr_in *sap = (struct sockaddr_in *) &xprt->addr;
+
dprintk("RPC: setting port for xprt %p to %u\n", xprt, port);
- xprt->addr.sin_port = htons(port);
+
+ sap->sin_port = htons(port);
if (port != 0)
xprt_set_bound(xprt);
}
@@ -1204,7 +1207,7 @@ static void xs_tcp_connect_worker(void *
xprt->stat.connect_count++;
xprt->stat.connect_start = jiffies;
status = sock->ops->connect(sock, (struct sockaddr *) &xprt->addr,
- sizeof(xprt->addr), O_NONBLOCK);
+ xprt->addrlen, O_NONBLOCK);
dprintk("RPC: %p connect status %d connected %d sock state %d\n",
xprt, -status, xprt_connected(xprt), sock->sk->sk_state);
if (status < 0) {
@@ -1354,6 +1357,7 @@ static struct rpc_xprt_ops xs_tcp_ops =
int xs_setup_udp(struct rpc_xprt *xprt, struct rpc_timeout *to)
{
size_t slot_table_size;
+ struct sockaddr_in *addr = (struct sockaddr_in *) &xprt->addr;
xprt->max_reqs = xprt_udp_slot_table_entries;
slot_table_size = xprt->max_reqs * sizeof(xprt->slot[0]);
@@ -1361,7 +1365,7 @@ int xs_setup_udp(struct rpc_xprt *xprt,
if (xprt->slot == NULL)
return -ENOMEM;
- if (ntohs(xprt->addr.sin_port) != 0)
+ if (ntohs(addr->sin_port) != 0)
xprt_set_bound(xprt);
xprt->port = xs_get_random_port();
@@ -1400,6 +1404,7 @@ int xs_setup_udp(struct rpc_xprt *xprt,
int xs_setup_tcp(struct rpc_xprt *xprt, struct rpc_timeout *to)
{
size_t slot_table_size;
+ struct sockaddr_in *addr = (struct sockaddr_in *) &xprt->addr;
xprt->max_reqs = xprt_tcp_slot_table_entries;
slot_table_size = xprt->max_reqs * sizeof(xprt->slot[0]);
@@ -1407,7 +1412,7 @@ int xs_setup_tcp(struct rpc_xprt *xprt,
if (xprt->slot == NULL)
return -ENOMEM;
- if (ntohs(xprt->addr.sin_port) != 0)
+ if (ntohs(addr->sin_port) != 0)
xprt_set_bound(xprt);
xprt->port = xs_get_random_port();
-------------------------------------------------------------------------
In the early days of NFS, there was no duplicate reply cache on the server.
Thus a retransmitted non-idempotent request often found that the original
request had already completed on the server. To avoid passing an
unanticipated return code to unsuspecting applications, NFS clients would
suppress error codes that implied the request had been retried but had
already completed.
Modern NFS servers implement a duplicate reply cache, so it is now safe to
remove such checks from the client.
Test plan:
None.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfs/dir.c | 8 ++------
1 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index e7ffb4d..428d963 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1472,14 +1472,10 @@ #endif
error = NFS_PROTO(dir)->symlink(dir, &dentry->d_name, &qsymname,
&attr, &sym_fh, &sym_attr);
nfs_end_data_update(dir);
- if (!error) {
+ if (!error)
error = nfs_instantiate(dentry, &sym_fh, &sym_attr);
- } else {
- if (error == -EEXIST)
- printk("nfs_proc_symlink: %s/%s already exists??\n",
- dentry->d_parent->d_name.name, dentry->d_name.name);
+ else
d_drop(dentry);
- }
unlock_kernel();
return error;
}
-------------------------------------------------------------------------
include/linux/sunrpc/clnt.h already includes include/linux/sunrpc/xprt.h.
We can remove xprt.h from source files that already include clnt.h.
Likewise include/linux/sunrpc/timer.h.
Test plan:
Compile kernel with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfs/mount_clnt.c | 1 -
include/linux/nfs_xdr.h | 1 -
net/sunrpc/pmap_clnt.c | 1 -
net/sunrpc/sched.c | 1 -
net/sunrpc/timer.c | 2 --
5 files changed, 0 insertions(+), 6 deletions(-)
diff --git a/fs/nfs/mount_clnt.c b/fs/nfs/mount_clnt.c
index 445abb4..4127487 100644
--- a/fs/nfs/mount_clnt.c
+++ b/fs/nfs/mount_clnt.c
@@ -14,7 +14,6 @@ #include <linux/uio.h>
#include <linux/net.h>
#include <linux/in.h>
#include <linux/sunrpc/clnt.h>
-#include <linux/sunrpc/xprt.h>
#include <linux/sunrpc/sched.h>
#include <linux/nfs_fs.h>
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index 2d3fb64..0c1093c 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -1,7 +1,6 @@
#ifndef _LINUX_NFS_XDR_H
#define _LINUX_NFS_XDR_H
-#include <linux/sunrpc/xprt.h>
#include <linux/nfsacl.h>
/*
diff --git a/net/sunrpc/pmap_clnt.c b/net/sunrpc/pmap_clnt.c
index ff0b92c..689333f 100644
--- a/net/sunrpc/pmap_clnt.c
+++ b/net/sunrpc/pmap_clnt.c
@@ -15,7 +15,6 @@ #include <linux/errno.h>
#include <linux/uio.h>
#include <linux/in.h>
#include <linux/sunrpc/clnt.h>
-#include <linux/sunrpc/xprt.h>
#include <linux/sunrpc/sched.h>
#ifdef RPC_DEBUG
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 015ffe4..ecf3663 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -21,7 +21,6 @@ #include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/sunrpc/clnt.h>
-#include <linux/sunrpc/xprt.h>
#ifdef RPC_DEBUG
#define RPCDBG_FACILITY RPCDBG_SCHED
diff --git a/net/sunrpc/timer.c b/net/sunrpc/timer.c
index bcbdf64..8142fdb 100644
--- a/net/sunrpc/timer.c
+++ b/net/sunrpc/timer.c
@@ -19,8 +19,6 @@ #include <linux/types.h>
#include <linux/unistd.h>
#include <linux/sunrpc/clnt.h>
-#include <linux/sunrpc/xprt.h>
-#include <linux/sunrpc/timer.h>
#define RPC_RTO_MAX (60*HZ)
#define RPC_RTO_INIT (HZ/5)
-------------------------------------------------------------------------
Now that we have a copy of the symlink path in the page cache, we can pass
a struct page down to the XDR routines instead of a string buffer.
Test plan:
Connectathon, all NFS versions.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfs/dir.c | 8 +-------
fs/nfs/nfs2xdr.c | 21 ++++++++++++++++++---
fs/nfs/nfs3proc.c | 14 +++++++-------
fs/nfs/nfs3xdr.c | 7 +++++--
fs/nfs/nfs4proc.c | 12 +++++++-----
fs/nfs/nfs4xdr.c | 8 ++++----
fs/nfs/proc.c | 14 +++++++-------
include/linux/nfs_xdr.h | 17 ++++++++++-------
8 files changed, 59 insertions(+), 42 deletions(-)
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 815ec2b..64306e5 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1460,10 +1460,6 @@ static int nfs_symlink(struct inode *dir
char *kaddr;
struct iattr attr;
unsigned int pathlen = strlen(symname);
- struct qstr qsymname = {
- .name = symname,
- .len = pathlen,
- };
int error;
dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s)\n", dir->i_sb->s_id,
@@ -1489,10 +1485,8 @@ static int nfs_symlink(struct inode *dir
memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
kunmap_atomic(kaddr, KM_USER0);
- /* XXX: eventually this will pass in {page, pathlen},
- * instead of qsymname; need XDR changes for that */
nfs_begin_data_update(dir);
- error = NFS_PROTO(dir)->symlink(dir, dentry, &qsymname, &attr);
+ error = NFS_PROTO(dir)->symlink(dir, dentry, page, pathlen, &attr);
nfs_end_data_update(dir);
if (error != 0) {
dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s) error %d\n",
diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
index 67391ee..b49501f 100644
--- a/fs/nfs/nfs2xdr.c
+++ b/fs/nfs/nfs2xdr.c
@@ -51,7 +51,7 @@ #define NFS_writeargs_sz (NFS_fhandle_sz
#define NFS_createargs_sz (NFS_diropargs_sz+NFS_sattr_sz)
#define NFS_renameargs_sz (NFS_diropargs_sz+NFS_diropargs_sz)
#define NFS_linkargs_sz (NFS_fhandle_sz+NFS_diropargs_sz)
-#define NFS_symlinkargs_sz (NFS_diropargs_sz+NFS_path_sz+NFS_sattr_sz)
+#define NFS_symlinkargs_sz (NFS_diropargs_sz+1+NFS_sattr_sz)
#define NFS_readdirargs_sz (NFS_fhandle_sz+2)
#define NFS_attrstat_sz (1+NFS_fattr_sz)
@@ -351,11 +351,26 @@ nfs_xdr_linkargs(struct rpc_rqst *req, u
static int
nfs_xdr_symlinkargs(struct rpc_rqst *req, u32 *p, struct nfs_symlinkargs *args)
{
+ struct xdr_buf *sndbuf = &req->rq_snd_buf;
+ size_t pad;
+
p = xdr_encode_fhandle(p, args->fromfh);
p = xdr_encode_array(p, args->fromname, args->fromlen);
- p = xdr_encode_array(p, args->topath, args->tolen);
+ *p++ = htonl(args->pathlen);
+ sndbuf->len = xdr_adjust_iovec(sndbuf->head, p);
+
+ xdr_encode_pages(sndbuf, args->pages, 0, args->pathlen);
+
+ /*
+ * xdr_encode_pages may have added a few bytes to ensure the
+ * pathname ends on a 4-byte boundary. Start encoding the
+ * attributes after the pad bytes.
+ */
+ pad = sndbuf->tail->iov_len;
+ if (pad > 0)
+ p++;
p = xdr_encode_sattr(p, args->sattr);
- req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+ sndbuf->len += xdr_adjust_iovec(sndbuf->tail, p) - pad;
return 0;
}
diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
index 15eac8d..44cf0a9 100644
--- a/fs/nfs/nfs3proc.c
+++ b/fs/nfs/nfs3proc.c
@@ -544,8 +544,8 @@ nfs3_proc_link(struct inode *inode, stru
}
static int
-nfs3_proc_symlink(struct inode *dir, struct dentry *dentry, struct qstr *path,
- struct iattr *sattr)
+nfs3_proc_symlink(struct inode *dir, struct dentry *dentry, struct page *page,
+ unsigned int len, struct iattr *sattr)
{
struct nfs_fh fhandle;
struct nfs_fattr fattr, dir_attr;
@@ -553,8 +553,8 @@ nfs3_proc_symlink(struct inode *dir, str
.fromfh = NFS_FH(dir),
.fromname = dentry->d_name.name,
.fromlen = dentry->d_name.len,
- .topath = path->name,
- .tolen = path->len,
+ .pages = &page,
+ .pathlen = len,
.sattr = sattr
};
struct nfs3_diropres res = {
@@ -569,11 +569,11 @@ nfs3_proc_symlink(struct inode *dir, str
};
int status;
- if (path->len > NFS3_MAXPATHLEN)
+ if (len > NFS3_MAXPATHLEN)
return -ENAMETOOLONG;
- dprintk("NFS call symlink %s -> %s\n", dentry->d_name.name,
- path->name);
+ dprintk("NFS call symlink %s\n", dentry->d_name.name);
+
nfs_fattr_init(&dir_attr);
nfs_fattr_init(&fattr);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
index 0250269..16556fa 100644
--- a/fs/nfs/nfs3xdr.c
+++ b/fs/nfs/nfs3xdr.c
@@ -56,7 +56,7 @@ #define NFS3_readargs_sz (NFS3_fh_sz+3)
#define NFS3_writeargs_sz (NFS3_fh_sz+5)
#define NFS3_createargs_sz (NFS3_diropargs_sz+NFS3_sattr_sz)
#define NFS3_mkdirargs_sz (NFS3_diropargs_sz+NFS3_sattr_sz)
-#define NFS3_symlinkargs_sz (NFS3_diropargs_sz+NFS3_path_sz+NFS3_sattr_sz)
+#define NFS3_symlinkargs_sz (NFS3_diropargs_sz+1+NFS3_sattr_sz)
#define NFS3_mknodargs_sz (NFS3_diropargs_sz+2+NFS3_sattr_sz)
#define NFS3_renameargs_sz (NFS3_diropargs_sz+NFS3_diropargs_sz)
#define NFS3_linkargs_sz (NFS3_fh_sz+NFS3_diropargs_sz)
@@ -398,8 +398,11 @@ nfs3_xdr_symlinkargs(struct rpc_rqst *re
p = xdr_encode_fhandle(p, args->fromfh);
p = xdr_encode_array(p, args->fromname, args->fromlen);
p = xdr_encode_sattr(p, args->sattr);
- p = xdr_encode_array(p, args->topath, args->tolen);
+ *p++ = htonl(args->pathlen);
req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
+
+ /* Copy the page */
+ xdr_encode_pages(&req->rq_snd_buf, args->pages, 0, args->pathlen);
return 0;
}
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 370b5ab..05775e2 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2090,7 +2090,7 @@ static int nfs4_proc_link(struct inode *
}
static int _nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
- struct qstr *path, struct iattr *sattr)
+ struct page *page, unsigned int len, struct iattr *sattr)
{
struct nfs_server *server = NFS_SERVER(dir);
struct nfs_fh fhandle;
@@ -2116,10 +2116,11 @@ static int _nfs4_proc_symlink(struct ino
};
int status;
- if (path->len > NFS4_MAXPATHLEN)
+ if (len > NFS4_MAXPATHLEN)
return -ENAMETOOLONG;
- arg.u.symlink = path;
+ arg.u.symlink.pages = &page;
+ arg.u.symlink.len = len;
nfs_fattr_init(&fattr);
nfs_fattr_init(&dir_fattr);
@@ -2133,13 +2134,14 @@ static int _nfs4_proc_symlink(struct ino
}
static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
- struct qstr *path, struct iattr *sattr)
+ struct page *page, unsigned int len, struct iattr *sattr)
{
struct nfs4_exception exception = { };
int err;
do {
err = nfs4_handle_exception(NFS_SERVER(dir),
- _nfs4_proc_symlink(dir, dentry, path, sattr),
+ _nfs4_proc_symlink(dir, dentry, page,
+ len, sattr),
&exception);
} while (exception.retry);
return err;
diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
index 1750d99..26dd22c 100644
--- a/fs/nfs/nfs4xdr.c
+++ b/fs/nfs/nfs4xdr.c
@@ -128,7 +128,7 @@ #define encode_link_maxsz (op_encode_hdr
#define decode_link_maxsz (op_decode_hdr_maxsz + 5)
#define encode_symlink_maxsz (op_encode_hdr_maxsz + \
1 + nfs4_name_maxsz + \
- nfs4_path_maxsz + \
+ 1 + \
nfs4_fattr_maxsz)
#define decode_symlink_maxsz (op_decode_hdr_maxsz + 8)
#define encode_create_maxsz (op_encode_hdr_maxsz + \
@@ -673,9 +673,9 @@ static int encode_create(struct xdr_stre
switch (create->ftype) {
case NF4LNK:
- RESERVE_SPACE(4 + create->u.symlink->len);
- WRITE32(create->u.symlink->len);
- WRITEMEM(create->u.symlink->name, create->u.symlink->len);
+ RESERVE_SPACE(4);
+ WRITE32(create->u.symlink.len);
+ xdr_write_pages(xdr, create->u.symlink.pages, 0, create->u.symlink.len);
break;
case NF4BLK: case NF4CHR:
diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
index 7512f71..a6ee598 100644
--- a/fs/nfs/proc.c
+++ b/fs/nfs/proc.c
@@ -425,8 +425,8 @@ nfs_proc_link(struct inode *inode, struc
}
static int
-nfs_proc_symlink(struct inode *dir, struct dentry *dentry, struct qstr *path,
- struct iattr *sattr)
+nfs_proc_symlink(struct inode *dir, struct dentry *dentry, struct page *page,
+ unsigned int len, struct iattr *sattr)
{
struct nfs_fh fhandle;
struct nfs_fattr fattr;
@@ -434,8 +434,8 @@ nfs_proc_symlink(struct inode *dir, stru
.fromfh = NFS_FH(dir),
.fromname = dentry->d_name.name,
.fromlen = dentry->d_name.len,
- .topath = path->name,
- .tolen = path->len,
+ .pages = &page,
+ .pathlen = len,
.sattr = sattr
};
struct rpc_message msg = {
@@ -444,11 +444,11 @@ nfs_proc_symlink(struct inode *dir, stru
};
int status;
- if (path->len > NFS2_MAXPATHLEN)
+ if (len > NFS2_MAXPATHLEN)
return -ENAMETOOLONG;
- dprintk("NFS call symlink %s -> %s\n", dentry->d_name.name,
- path->name);
+ dprintk("NFS call symlink %s\n", dentry->d_name.name);
+
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
nfs_mark_for_revalidate(dir);
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index cfabcd1..0ed4104 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -358,8 +358,8 @@ struct nfs_symlinkargs {
struct nfs_fh * fromfh;
const char * fromname;
unsigned int fromlen;
- const char * topath;
- unsigned int tolen;
+ struct page ** pages;
+ unsigned int pathlen;
struct iattr * sattr;
};
@@ -434,8 +434,8 @@ struct nfs3_symlinkargs {
struct nfs_fh * fromfh;
const char * fromname;
unsigned int fromlen;
- const char * topath;
- unsigned int tolen;
+ struct page ** pages;
+ unsigned int pathlen;
struct iattr * sattr;
};
@@ -533,7 +533,10 @@ struct nfs4_accessres {
struct nfs4_create_arg {
u32 ftype;
union {
- struct qstr * symlink; /* NF4LNK */
+ struct {
+ struct page ** pages;
+ unsigned int len;
+ } symlink; /* NF4LNK */
struct {
u32 specdata1;
u32 specdata2;
@@ -790,8 +793,8 @@ struct nfs_rpc_ops {
int (*rename) (struct inode *, struct qstr *,
struct inode *, struct qstr *);
int (*link) (struct inode *, struct inode *, struct qstr *);
- int (*symlink) (struct inode *, struct dentry *, struct qstr *,
- struct iattr *);
+ int (*symlink) (struct inode *, struct dentry *, struct page *,
+ unsigned int, struct iattr *);
int (*mkdir) (struct inode *, struct dentry *, struct iattr *);
int (*rmdir) (struct inode *, struct qstr *);
int (*readdir) (struct dentry *, struct rpc_cred *,
Replace xprt_create_proto/rpc_create_client with new rpc_create()
interface in the Network Lock Manager.
Note that the semantics of NLM transports are now "hard" instead of "soft"
to provide a stronger guarantee that lock requests will reach the server.
Test plan:
Repeated runs of Connectathon locking suite. Check network trace to ensure
NLM requests are working correctly.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/lockd/host.c | 50 +++++++++++++++++++++++++++-----------------------
fs/lockd/mon.c | 41 +++++++++++++++++------------------------
2 files changed, 44 insertions(+), 47 deletions(-)
diff --git a/fs/lockd/host.c b/fs/lockd/host.c
index a516a01..703fb03 100644
--- a/fs/lockd/host.c
+++ b/fs/lockd/host.c
@@ -166,7 +166,6 @@ struct rpc_clnt *
nlm_bind_host(struct nlm_host *host)
{
struct rpc_clnt *clnt;
- struct rpc_xprt *xprt;
dprintk("lockd: nlm_bind_host(%08x)\n",
(unsigned)ntohl(host->h_addr.sin_addr.s_addr));
@@ -178,7 +177,6 @@ nlm_bind_host(struct nlm_host *host)
* RPC rebind is required
*/
if ((clnt = host->h_rpcclnt) != NULL) {
- xprt = clnt->cl_xprt;
if (time_after_eq(jiffies, host->h_nextrebind)) {
rpc_force_rebind(clnt);
host->h_nextrebind = jiffies + NLM_HOST_REBIND;
@@ -186,31 +184,37 @@ nlm_bind_host(struct nlm_host *host)
host->h_nextrebind - jiffies);
}
} else {
- xprt = xprt_create_proto(host->h_proto, &host->h_addr, NULL);
- if (IS_ERR(xprt))
- goto forgetit;
-
- xprt_set_timeout(&xprt->timeout, 5, nlmsvc_timeout);
- xprt->resvport = 1; /* NLM requires a reserved port */
-
- /* Existing NLM servers accept AUTH_UNIX only */
- clnt = rpc_new_client(xprt, host->h_name, &nlm_program,
- host->h_version, RPC_AUTH_UNIX);
- if (IS_ERR(clnt))
- goto forgetit;
- clnt->cl_autobind = 1; /* turn on pmap queries */
- clnt->cl_softrtry = 1; /* All queries are soft */
-
- host->h_rpcclnt = clnt;
+ unsigned long increment = nlmsvc_timeout * HZ;
+ struct rpc_timeout timeparms = {
+ .to_initval = increment,
+ .to_increment = increment,
+ .to_maxval = increment * 6UL,
+ .to_retries = 5U,
+ };
+ struct rpc_create_args args = {
+ .protocol = host->h_proto,
+ .address = (struct sockaddr *)&host->h_addr,
+ .addrsize = sizeof(host->h_addr),
+ .timeout = &timeparms,
+ .servername = host->h_name,
+ .program = &nlm_program,
+ .version = host->h_version,
+ .authflavor = RPC_AUTH_UNIX,
+ .flags = (RPC_CLNT_CREATE_HARDRTRY |
+ RPC_CLNT_CREATE_AUTOBIND),
+ };
+
+ clnt = rpc_create(&args);
+ if (!IS_ERR(clnt))
+ host->h_rpcclnt = clnt;
+ else {
+ printk("lockd: couldn't create RPC handle for %s\n", host->h_name);
+ clnt = NULL;
+ }
}
mutex_unlock(&host->h_mutex);
return clnt;
-
-forgetit:
- printk("lockd: couldn't create RPC handle for %s\n", host->h_name);
- mutex_unlock(&host->h_mutex);
- return NULL;
}
/*
diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index 3fc683f..5954dcb 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -109,30 +109,23 @@ nsm_unmonitor(struct nlm_host *host)
static struct rpc_clnt *
nsm_create(void)
{
- struct rpc_xprt *xprt;
- struct rpc_clnt *clnt;
- struct sockaddr_in sin;
-
- sin.sin_family = AF_INET;
- sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
- sin.sin_port = 0;
-
- xprt = xprt_create_proto(IPPROTO_UDP, &sin, NULL);
- if (IS_ERR(xprt))
- return (struct rpc_clnt *)xprt;
- xprt->resvport = 1; /* NSM requires a reserved port */
-
- clnt = rpc_create_client(xprt, "localhost",
- &nsm_program, SM_VERSION,
- RPC_AUTH_NULL);
- if (IS_ERR(clnt))
- goto out_err;
- clnt->cl_softrtry = 1;
- clnt->cl_oneshot = 1;
- return clnt;
-
-out_err:
- return clnt;
+ struct sockaddr_in sin = {
+ .sin_family = AF_INET,
+ .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
+ .sin_port = 0,
+ };
+ struct rpc_create_args args = {
+ .protocol = IPPROTO_UDP,
+ .address = (struct sockaddr *)&sin,
+ .addrsize = sizeof(sin),
+ .servername = "localhost",
+ .program = &nsm_program,
+ .version = SM_VERSION,
+ .authflavor = RPC_AUTH_NULL,
+ .flags = (RPC_CLNT_CREATE_ONESHOT),
+ };
+
+ return rpc_create(&args);
}
/*
Remove some unused macros related to accessing an RPC peer address.
Test plan:
Compile kernel with CONFIG_NFS option enabled.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/lockd/host.c | 1 -
include/linux/nfs_fs.h | 1 -
include/linux/sunrpc/clnt.h | 3 ---
3 files changed, 0 insertions(+), 5 deletions(-)
diff --git a/fs/lockd/host.c b/fs/lockd/host.c
index 38b0e8a..a516a01 100644
--- a/fs/lockd/host.c
+++ b/fs/lockd/host.c
@@ -26,7 +26,6 @@ #define NLM_ADDRHASH(addr) (ntohl(addr)
#define NLM_HOST_REBIND (60 * HZ)
#define NLM_HOST_EXPIRE ((nrhosts > NLM_HOST_MAX)? 300 * HZ : 120 * HZ)
#define NLM_HOST_COLLECT ((nrhosts > NLM_HOST_MAX)? 120 * HZ : 60 * HZ)
-#define NLM_HOST_ADDR(sv) (&(sv)->s_nlmclnt->cl_xprt->addr)
static struct nlm_host * nlm_hosts[NLM_HOST_NRHASH];
static unsigned long next_gc;
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 2474345..dd61af2 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -210,7 +210,6 @@ #define NFS_FH(inode) (&NFS_I(inode)->
#define NFS_SERVER(inode) (NFS_SB(inode->i_sb))
#define NFS_CLIENT(inode) (NFS_SERVER(inode)->client)
#define NFS_PROTO(inode) (NFS_SERVER(inode)->rpc_ops)
-#define NFS_ADDR(inode) (RPC_PEERADDR(NFS_CLIENT(inode)))
#define NFS_COOKIEVERF(inode) (NFS_I(inode)->cookieverf)
#define NFS_READTIME(inode) (NFS_I(inode)->read_cache_jiffies)
#define NFS_CHANGE_ATTR(inode) (NFS_I(inode)->change_attr)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index b7d47f0..a26d695 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -89,9 +89,6 @@ struct rpc_procinfo {
char * p_name; /* name of procedure */
};
-#define RPC_CONGESTED(clnt) (RPCXPRT_CONGESTED((clnt)->cl_xprt))
-#define RPC_PEERADDR(clnt) (&(clnt)->cl_xprt->addr)
-
#ifdef __KERNEL__
struct rpc_clnt *rpc_create_client(struct rpc_xprt *xprt, char *servname,
Currently the NFS client does not cache symlinks it creates. They get
cached only when the NFS client reads them back from the server.
Copy the symlink into the page cache before sending it.
Test plan:
Connectathon, all NFS versions.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfs/dir.c | 86 ++++++++++++++++++++++++++++++++++++++++++++++------------
1 files changed, 68 insertions(+), 18 deletions(-)
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index ff4e852..815ec2b 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -30,6 +30,7 @@ #include <linux/nfs_fs.h>
#include <linux/nfs_mount.h>
#include <linux/pagemap.h>
#include <linux/smp_lock.h>
+#include <linux/pagevec.h>
#include <linux/namei.h>
#include "nfs4_fs.h"
@@ -1437,39 +1438,88 @@ static int nfs_unlink(struct inode *dir,
return error;
}
-static int
-nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
+/*
+ * To create a symbolic link, most file systems instantiate a new inode,
+ * add a page to it containing the path, then write it out to the disk
+ * using prepare_write/commit_write.
+ *
+ * Unfortunately the NFS client can't create the in-core inode first
+ * because it needs a file handle to create an in-core inode (see
+ * fs/nfs/inode.c:nfs_fhget). We only have a file handle *after* the
+ * symlink request has completed on the server.
+ *
+ * So instead we allocate a raw page, copy the symname into it, then do
+ * the SYMLINK request with the page as the buffer. If it succeeds, we
+ * now have a new file handle and can instantiate an in-core NFS inode
+ * and move the raw page into its mapping.
+ */
+static int nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
{
+ struct pagevec lru_pvec;
+ struct page *page;
+ char *kaddr;
struct iattr attr;
- struct qstr qsymname;
+ unsigned int pathlen = strlen(symname);
+ struct qstr qsymname = {
+ .name = symname,
+ .len = pathlen,
+ };
int error;
dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s)\n", dir->i_sb->s_id,
dir->i_ino, dentry->d_name.name, symname);
-#ifdef NFS_PARANOIA
-if (dentry->d_inode)
-printk("nfs_proc_symlink: %s/%s not negative!\n",
-dentry->d_parent->d_name.name, dentry->d_name.name);
-#endif
- /*
- * Fill in the sattr for the call.
- * Note: SunOS 4.1.2 crashes if the mode isn't initialized!
- */
- attr.ia_valid = ATTR_MODE;
- attr.ia_mode = S_IFLNK | S_IRWXUGO;
+ if (pathlen > PAGE_SIZE)
+ return -ENAMETOOLONG;
- qsymname.name = symname;
- qsymname.len = strlen(symname);
+ attr.ia_mode = S_IFLNK | S_IRWXUGO;
+ attr.ia_valid = ATTR_MODE;
lock_kernel();
+
+ page = alloc_page(GFP_KERNEL);
+ if (!page) {
+ unlock_kernel();
+ return -ENOMEM;
+ }
+
+ kaddr = kmap_atomic(page, KM_USER0);
+ memcpy(kaddr, symname, pathlen);
+ if (pathlen < PAGE_SIZE)
+ memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
+ kunmap_atomic(kaddr, KM_USER0);
+
+ /* XXX: eventually this will pass in {page, pathlen},
+ * instead of qsymname; need XDR changes for that */
nfs_begin_data_update(dir);
error = NFS_PROTO(dir)->symlink(dir, dentry, &qsymname, &attr);
nfs_end_data_update(dir);
- if (!error)
+ if (error != 0) {
+ dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s) error %d\n",
+ dir->i_sb->s_id, dir->i_ino,
+ dentry->d_name.name, symname, error);
d_drop(dentry);
+ __free_page(page);
+ unlock_kernel();
+ return error;
+ }
+
+ /*
+ * No big deal if we can't add this page to the page cache here.
+ * READLINK will get the missing page from the server if needed.
+ */
+ pagevec_init(&lru_pvec, 0);
+ if (!add_to_page_cache(page, dentry->d_inode->i_mapping, 0,
+ GFP_KERNEL)) {
+ if (!pagevec_add(&lru_pvec, page))
+ __pagevec_lru_add(&lru_pvec);
+ SetPageUptodate(page);
+ unlock_page(page);
+ } else
+ __free_page(page);
+
unlock_kernel();
- return error;
+ return 0;
}
static int
If the LOOKUP or GETATTR in nfs_instantiate fails, nfs_instantiate will do a
d_drop before returning. But some callers already do a d_drop in the case
of an error return. Make certain we do only one d_drop in all error paths.
This bug was introduced because over time, the symlink proc API diverged
slightly from the create/mkdir/mknod proc API. To prevent other bugs of
this type, change the symlink proc API to be more like create/mkdir/mknod
and move the nfs_instantiate call into the symlink proc routines so it is
used in exactly the same way for create, mkdir, mknod, and symlink.
Test plan:
Connectathon, all versions of NFS.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfs/dir.c | 16 ++++------------
fs/nfs/nfs3proc.c | 26 ++++++++++++++++----------
fs/nfs/nfs4proc.c | 31 ++++++++++++++++---------------
fs/nfs/proc.c | 29 +++++++++++++++++++++--------
include/linux/nfs_xdr.h | 5 ++---
5 files changed, 59 insertions(+), 48 deletions(-)
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 428d963..ff4e852 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1143,23 +1143,20 @@ int nfs_instantiate(struct dentry *dentr
struct inode *dir = dentry->d_parent->d_inode;
error = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, fhandle, fattr);
if (error)
- goto out_err;
+ return error;
}
if (!(fattr->valid & NFS_ATTR_FATTR)) {
struct nfs_server *server = NFS_SB(dentry->d_sb);
error = server->rpc_ops->getattr(server, fhandle, fattr);
if (error < 0)
- goto out_err;
+ return error;
}
inode = nfs_fhget(dentry->d_sb, fhandle, fattr);
error = PTR_ERR(inode);
if (IS_ERR(inode))
- goto out_err;
+ return error;
d_instantiate(dentry, inode);
return 0;
-out_err:
- d_drop(dentry);
- return error;
}
/*
@@ -1444,8 +1441,6 @@ static int
nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
{
struct iattr attr;
- struct nfs_fattr sym_attr;
- struct nfs_fh sym_fh;
struct qstr qsymname;
int error;
@@ -1469,12 +1464,9 @@ #endif
lock_kernel();
nfs_begin_data_update(dir);
- error = NFS_PROTO(dir)->symlink(dir, &dentry->d_name, &qsymname,
- &attr, &sym_fh, &sym_attr);
+ error = NFS_PROTO(dir)->symlink(dir, dentry, &qsymname, &attr);
nfs_end_data_update(dir);
if (!error)
- error = nfs_instantiate(dentry, &sym_fh, &sym_attr);
- else
d_drop(dentry);
unlock_kernel();
return error;
diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
index 7143b1f..15eac8d 100644
--- a/fs/nfs/nfs3proc.c
+++ b/fs/nfs/nfs3proc.c
@@ -544,23 +544,23 @@ nfs3_proc_link(struct inode *inode, stru
}
static int
-nfs3_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
- struct iattr *sattr, struct nfs_fh *fhandle,
- struct nfs_fattr *fattr)
+nfs3_proc_symlink(struct inode *dir, struct dentry *dentry, struct qstr *path,
+ struct iattr *sattr)
{
- struct nfs_fattr dir_attr;
+ struct nfs_fh fhandle;
+ struct nfs_fattr fattr, dir_attr;
struct nfs3_symlinkargs arg = {
.fromfh = NFS_FH(dir),
- .fromname = name->name,
- .fromlen = name->len,
+ .fromname = dentry->d_name.name,
+ .fromlen = dentry->d_name.len,
.topath = path->name,
.tolen = path->len,
.sattr = sattr
};
struct nfs3_diropres res = {
.dir_attr = &dir_attr,
- .fh = fhandle,
- .fattr = fattr
+ .fh = &fhandle,
+ .fattr = &fattr
};
struct rpc_message msg = {
.rpc_proc = &nfs3_procedures[NFS3PROC_SYMLINK],
@@ -571,11 +571,17 @@ nfs3_proc_symlink(struct inode *dir, str
if (path->len > NFS3_MAXPATHLEN)
return -ENAMETOOLONG;
- dprintk("NFS call symlink %s -> %s\n", name->name, path->name);
+
+ dprintk("NFS call symlink %s -> %s\n", dentry->d_name.name,
+ path->name);
nfs_fattr_init(&dir_attr);
- nfs_fattr_init(fattr);
+ nfs_fattr_init(&fattr);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
nfs_post_op_update_inode(dir, &dir_attr);
+ if (status != 0)
+ goto out;
+ status = nfs_instantiate(dentry, &fhandle, &fattr);
+out:
dprintk("NFS reply symlink: %d\n", status);
return status;
}
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index e6ee97f..370b5ab 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2089,24 +2089,24 @@ static int nfs4_proc_link(struct inode *
return err;
}
-static int _nfs4_proc_symlink(struct inode *dir, struct qstr *name,
- struct qstr *path, struct iattr *sattr, struct nfs_fh *fhandle,
- struct nfs_fattr *fattr)
+static int _nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
+ struct qstr *path, struct iattr *sattr)
{
struct nfs_server *server = NFS_SERVER(dir);
- struct nfs_fattr dir_fattr;
+ struct nfs_fh fhandle;
+ struct nfs_fattr fattr, dir_fattr;
struct nfs4_create_arg arg = {
.dir_fh = NFS_FH(dir),
.server = server,
- .name = name,
+ .name = &dentry->d_name,
.attrs = sattr,
.ftype = NF4LNK,
.bitmask = server->attr_bitmask,
};
struct nfs4_create_res res = {
.server = server,
- .fh = fhandle,
- .fattr = fattr,
+ .fh = &fhandle,
+ .fattr = &fattr,
.dir_fattr = &dir_fattr,
};
struct rpc_message msg = {
@@ -2118,27 +2118,28 @@ static int _nfs4_proc_symlink(struct ino
if (path->len > NFS4_MAXPATHLEN)
return -ENAMETOOLONG;
+
arg.u.symlink = path;
- nfs_fattr_init(fattr);
+ nfs_fattr_init(&fattr);
nfs_fattr_init(&dir_fattr);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
- if (!status)
+ if (!status) {
update_changeattr(dir, &res.dir_cinfo);
- nfs_post_op_update_inode(dir, res.dir_fattr);
+ nfs_post_op_update_inode(dir, res.dir_fattr);
+ status = nfs_instantiate(dentry, &fhandle, &fattr);
+ }
return status;
}
-static int nfs4_proc_symlink(struct inode *dir, struct qstr *name,
- struct qstr *path, struct iattr *sattr, struct nfs_fh *fhandle,
- struct nfs_fattr *fattr)
+static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
+ struct qstr *path, struct iattr *sattr)
{
struct nfs4_exception exception = { };
int err;
do {
err = nfs4_handle_exception(NFS_SERVER(dir),
- _nfs4_proc_symlink(dir, name, path, sattr,
- fhandle, fattr),
+ _nfs4_proc_symlink(dir, dentry, path, sattr),
&exception);
} while (exception.retry);
return err;
diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
index b3899ea..7512f71 100644
--- a/fs/nfs/proc.c
+++ b/fs/nfs/proc.c
@@ -425,14 +425,15 @@ nfs_proc_link(struct inode *inode, struc
}
static int
-nfs_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
- struct iattr *sattr, struct nfs_fh *fhandle,
- struct nfs_fattr *fattr)
+nfs_proc_symlink(struct inode *dir, struct dentry *dentry, struct qstr *path,
+ struct iattr *sattr)
{
+ struct nfs_fh fhandle;
+ struct nfs_fattr fattr;
struct nfs_symlinkargs arg = {
.fromfh = NFS_FH(dir),
- .fromname = name->name,
- .fromlen = name->len,
+ .fromname = dentry->d_name.name,
+ .fromlen = dentry->d_name.len,
.topath = path->name,
.tolen = path->len,
.sattr = sattr
@@ -445,11 +446,23 @@ nfs_proc_symlink(struct inode *dir, stru
if (path->len > NFS2_MAXPATHLEN)
return -ENAMETOOLONG;
- dprintk("NFS call symlink %s -> %s\n", name->name, path->name);
- nfs_fattr_init(fattr);
- fhandle->size = 0;
+
+ dprintk("NFS call symlink %s -> %s\n", dentry->d_name.name,
+ path->name);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
nfs_mark_for_revalidate(dir);
+
+ /*
+ * V2 SYMLINK requests don't return any attributes. Setting the
+ * filehandle size to zero indicates to nfs_instantiate that it
+ * should fill in the data with a LOOKUP call on the wire.
+ */
+ if (status == 0) {
+ nfs_fattr_init(&fattr);
+ fhandle.size = 0;
+ status = nfs_instantiate(dentry, &fhandle, &fattr);
+ }
+
dprintk("NFS reply symlink: %d\n", status);
return status;
}
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index 0c1093c..cfabcd1 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -790,9 +790,8 @@ struct nfs_rpc_ops {
int (*rename) (struct inode *, struct qstr *,
struct inode *, struct qstr *);
int (*link) (struct inode *, struct inode *, struct qstr *);
- int (*symlink) (struct inode *, struct qstr *, struct qstr *,
- struct iattr *, struct nfs_fh *,
- struct nfs_fattr *);
+ int (*symlink) (struct inode *, struct dentry *, struct qstr *,
+ struct iattr *);
int (*mkdir) (struct inode *, struct dentry *, struct iattr *);
int (*rmdir) (struct inode *, struct qstr *);
int (*readdir) (struct dentry *, struct rpc_cred *,
Prepare for more generic transport endpoint handling needed by transports
that might use different forms of addressing, such as IPv6.
Introduce a single function call to replace the two-call
xprt_create_proto/rpc_create_client API. Define a new rpc_create_args
structure that allows callers to pass in remote endpoint addresses of
varying length.
Test plan:
Compile kernel with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/clnt.h | 22 +++++++++++++
include/linux/sunrpc/xprt.h | 1 +
net/sunrpc/clnt.c | 61 +++++++++++++++++++++++++++++++++++
net/sunrpc/xprt.c | 75 +++++++++++++++++++++++++++++++++++++++++++
4 files changed, 159 insertions(+), 0 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index a26d695..7817ba8 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -97,6 +97,28 @@ struct rpc_clnt *rpc_create_client(struc
struct rpc_clnt *rpc_new_client(struct rpc_xprt *xprt, char *servname,
struct rpc_program *info,
u32 version, rpc_authflavor_t authflavor);
+
+struct rpc_create_args {
+ int protocol;
+ struct sockaddr *address;
+ size_t addrsize;
+ struct rpc_timeout *timeout;
+ char *servername;
+ struct rpc_program *program;
+ u32 version;
+ rpc_authflavor_t authflavor;
+ unsigned long flags;
+};
+
+/* Values for "flags" field */
+#define RPC_CLNT_CREATE_HARDRTRY (1UL << 0)
+#define RPC_CLNT_CREATE_INTR (1UL << 1)
+#define RPC_CLNT_CREATE_AUTOBIND (1UL << 2)
+#define RPC_CLNT_CREATE_ONESHOT (1UL << 3)
+#define RPC_CLNT_CREATE_NONPRIVPORT (1UL << 4)
+#define RPC_CLNT_CREATE_NOPING (1UL << 5)
+
+struct rpc_clnt *rpc_create(struct rpc_create_args *args);
struct rpc_clnt *rpc_bind_new_program(struct rpc_clnt *,
struct rpc_program *, int);
struct rpc_clnt *rpc_clone_client(struct rpc_clnt *);
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 2cbd689..ceaaaa0 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -237,6 +237,7 @@ void xprt_set_timeout(struct rpc_timeo
/*
* Generic internal transport functions
*/
+struct rpc_xprt * xprt_create_transport(int proto, struct sockaddr *addr, size_t size, struct rpc_timeout *toparms);
void xprt_connect(struct rpc_task *task);
void xprt_reserve(struct rpc_task *task);
int xprt_reserve_xprt(struct rpc_task *task);
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 742cb1e..9a1f63c 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -193,6 +193,67 @@ out_no_xprt:
return ERR_PTR(err);
}
+/*
+ * rpc_create - create an RPC client and transport with one call
+ * @args: rpc_clnt create argument structure
+ *
+ * Creates and initializes an RPC transport and an RPC client.
+ *
+ * It can ping the server in order to determine if it is up, and to see if
+ * it supports this program and version. RPC_CLNT_CREATE_NOPING disables
+ * this behavior so asynchronous tasks can also use rpc_create.
+ */
+struct rpc_clnt *rpc_create(struct rpc_create_args *args)
+{
+ struct rpc_xprt *xprt;
+ struct rpc_clnt *clnt;
+
+ xprt = xprt_create_transport(args->protocol, args->address,
+ args->addrsize, args->timeout);
+ if (IS_ERR(xprt))
+ return (struct rpc_clnt *)xprt;
+
+ /*
+ * By default, kernel RPC client connects from a reserved port.
+ * CAP_NET_BIND_SERVICE will not be set for unprivileged requesters,
+ * but it is always enabled for rpciod, which handles the connect
+ * operation.
+ */
+ xprt->resvport = 1;
+ if (args->flags & RPC_CLNT_CREATE_NONPRIVPORT)
+ xprt->resvport = 0;
+
+ dprintk("RPC: creating %s client for %s (xprt %p)\n",
+ args->program->name, args->servername, xprt);
+
+ clnt = rpc_new_client(xprt, args->servername, args->program,
+ args->version, args->authflavor);
+ if (IS_ERR(clnt))
+ return clnt;
+
+ if (!(args->flags & RPC_CLNT_CREATE_NOPING)) {
+ int err = rpc_ping(clnt, RPC_TASK_SOFT|RPC_TASK_NOINTR);
+ if (err != 0) {
+ rpc_shutdown_client(clnt);
+ return ERR_PTR(err);
+ }
+ }
+
+ clnt->cl_softrtry = 1;
+ if (args->flags & RPC_CLNT_CREATE_HARDRTRY)
+ clnt->cl_softrtry = 0;
+
+ if (args->flags & RPC_CLNT_CREATE_INTR)
+ clnt->cl_intr = 1;
+ if (args->flags & RPC_CLNT_CREATE_AUTOBIND)
+ clnt->cl_autobind = 1;
+ if (args->flags & RPC_CLNT_CREATE_ONESHOT)
+ clnt->cl_oneshot = 1;
+
+ return clnt;
+}
+EXPORT_SYMBOL(rpc_create);
+
/**
* Create an RPC client
* @xprt - pointer to xprt struct
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index dcc0bd7..a8eb2fb 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -891,6 +891,81 @@ void xprt_set_timeout(struct rpc_timeout
to->to_exponential = 0;
}
+/**
+ * xprt_create_transport - create an RPC transport
+ * @proto: requested transport protocol
+ * @ap: remote peer address
+ * @size: length of address
+ * @to: timeout parameters
+ *
+ */
+struct rpc_xprt *xprt_create_transport(int proto, struct sockaddr *ap, size_t size, struct rpc_timeout *to)
+{
+ int result;
+ struct rpc_xprt *xprt;
+ struct rpc_rqst *req;
+
+ if ((xprt = kzalloc(sizeof(struct rpc_xprt), GFP_KERNEL)) == NULL) {
+ dprintk("RPC: xprt_create_transport: no memory\n");
+ return ERR_PTR(-ENOMEM);
+ }
+ if (size <= sizeof(xprt->addr)) {
+ memcpy(&xprt->addr, ap, size);
+ xprt->addrlen = size;
+ } else {
+ kfree(xprt);
+ dprintk("RPC: xprt_create_transport: address too large\n");
+ return ERR_PTR(-EBADF);
+ }
+
+ switch (proto) {
+ case IPPROTO_UDP:
+ result = xs_setup_udp(xprt, to);
+ break;
+ case IPPROTO_TCP:
+ result = xs_setup_tcp(xprt, to);
+ break;
+ default:
+ printk(KERN_ERR "RPC: unrecognized transport protocol: %d\n",
+ proto);
+ kfree(xprt);
+ return ERR_PTR(-EIO);
+ }
+ if (result) {
+ kfree(xprt);
+ dprintk("RPC: xprt_create_transport: failed, %d\n", result);
+ return ERR_PTR(result);
+ }
+
+ spin_lock_init(&xprt->transport_lock);
+ spin_lock_init(&xprt->reserve_lock);
+
+ INIT_LIST_HEAD(&xprt->free);
+ INIT_LIST_HEAD(&xprt->recv);
+ INIT_WORK(&xprt->task_cleanup, xprt_autoclose, xprt);
+ init_timer(&xprt->timer);
+ xprt->timer.function = xprt_init_autodisconnect;
+ xprt->timer.data = (unsigned long) xprt;
+ xprt->last_used = jiffies;
+ xprt->cwnd = RPC_INITCWND;
+
+ rpc_init_wait_queue(&xprt->binding, "xprt_binding");
+ rpc_init_wait_queue(&xprt->pending, "xprt_pending");
+ rpc_init_wait_queue(&xprt->sending, "xprt_sending");
+ rpc_init_wait_queue(&xprt->resend, "xprt_resend");
+ rpc_init_priority_wait_queue(&xprt->backlog, "xprt_backlog");
+
+ /* initialize free list */
+ for (req = &xprt->slot[xprt->max_reqs-1]; req >= &xprt->slot[0]; req--)
+ list_add(&req->rq_list, &xprt->free);
+
+ xprt_init_xid(xprt);
+
+ dprintk("RPC: created transport %p with %u slots\n", xprt,
+ xprt->max_reqs);
+
+ return xprt;
+}
+
static struct rpc_xprt *xprt_setup(int proto, struct sockaddr_in *ap, struct rpc_timeout *to)
{
int result;
Hide the details of how the RPC client stores remote peer addresses from
the RPC pipefs implementation.
Test plan:
Connectathon with Kerberos 5 authentication.
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/rpc_pipe.c | 6 ++----
1 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c
index a3bd2db..ff6d9c2 100644
--- a/net/sunrpc/rpc_pipe.c
+++ b/net/sunrpc/rpc_pipe.c
@@ -327,10 +327,8 @@ rpc_show_info(struct seq_file *m, void *
seq_printf(m, "RPC server: %s\n", clnt->cl_server);
seq_printf(m, "service: %s (%d) version %d\n", clnt->cl_protname,
clnt->cl_prog, clnt->cl_vers);
- seq_printf(m, "address: %u.%u.%u.%u\n",
- NIPQUAD(clnt->cl_xprt->addr.sin_addr.s_addr));
- seq_printf(m, "protocol: %s\n",
- clnt->cl_xprt->prot == IPPROTO_UDP ? "udp" : "tcp");
+ seq_printf(m, "address: %s\n", rpc_peeraddr2str(clnt, RPC_DISPLAY_ADDR));
+ seq_printf(m, "protocol: %s\n", rpc_peeraddr2str(clnt, RPC_DISPLAY_PROTO));
return 0;
}
Add a new method to the transport switch API to provide a way to convert
the opaque contents of xprt->addr to a human-readable string.
Test plan:
Compile kernel with CONFIG_NFS enabled.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/xprt.h | 11 +++++++
net/sunrpc/xprtsock.c | 72 ++++++++++++++++++++++++++++++++++++++-----
2 files changed, 75 insertions(+), 8 deletions(-)
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 2c4d6c8..299613b 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -51,6 +51,14 @@ struct rpc_timeout {
unsigned char to_exponential;
};
+enum rpc_display_format_t {
+ RPC_DISPLAY_ADDR = 0,
+ RPC_DISPLAY_PORT,
+ RPC_DISPLAY_PROTO,
+ RPC_DISPLAY_ALL,
+ RPC_DISPLAY_MAX,
+};
+
struct rpc_task;
struct rpc_xprt;
struct seq_file;
@@ -103,6 +111,7 @@ #define rq_slen rq_snd_buf.len
struct rpc_xprt_ops {
void (*set_buffer_size)(struct rpc_xprt *xprt, size_t sndsize, size_t rcvsize);
+ char * (*print_addr)(struct rpc_xprt *xprt, enum rpc_display_format_t format);
int (*reserve_xprt)(struct rpc_task *task);
void (*release_xprt)(struct rpc_xprt *xprt, struct rpc_task *task);
void (*rpcbind)(struct rpc_task *task);
@@ -207,6 +216,8 @@ struct rpc_xprt {
void (*old_data_ready)(struct sock *, int);
void (*old_state_change)(struct sock *);
void (*old_write_space)(struct sock *);
+
+ char * address_strings[RPC_DISPLAY_MAX];
};
#define XPRT_LAST_FRAG (1 << 0)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 159d591..692be74 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -125,6 +125,41 @@ static inline void xs_pktdump(char *msg,
}
#endif
+static void xs_format_peer_addresses(struct rpc_xprt *xprt)
+{
+ struct sockaddr_in *addr = (struct sockaddr_in *) &xprt->addr;
+ char *buf;
+
+ buf = kzalloc(20, GFP_KERNEL);
+ if (buf) {
+ snprintf(buf, 20, "%u.%u.%u.%u",
+ NIPQUAD(addr->sin_addr.s_addr));
+ }
+ xprt->address_strings[RPC_DISPLAY_ADDR] = buf;
+
+
+ buf = kzalloc(8, GFP_KERNEL);
+ if (buf) {
+ snprintf(buf, 8, "%u",
+ ntohs(addr->sin_port));
+ }
+ xprt->address_strings[RPC_DISPLAY_PORT] = buf;
+
+ if (xprt->prot == IPPROTO_UDP)
+ xprt->address_strings[RPC_DISPLAY_PROTO] = "udp";
+ else
+ xprt->address_strings[RPC_DISPLAY_PROTO] = "tcp";
+
+ buf = kzalloc(48, GFP_KERNEL);
+ if (buf) {
+ snprintf(buf, 48, "addr=%u.%u.%u.%u port=%u proto=%s",
+ NIPQUAD(addr->sin_addr.s_addr),
+ ntohs(addr->sin_port),
+ xprt->prot == IPPROTO_UDP ? "udp" : "tcp");
+ }
+ xprt->address_strings[RPC_DISPLAY_ALL] = buf;
+}
+
#define XS_SENDMSG_FLAGS (MSG_DONTWAIT | MSG_NOSIGNAL)
static inline int xs_send_head(struct socket *sock, struct sockaddr *addr, int addrlen, struct xdr_buf *xdr, unsigned int base, unsigned int len)
@@ -965,6 +1000,19 @@ static unsigned short xs_get_random_port
}
/**
+ * xs_print_peer_address - format an IPv4 address for printing
+ * @xprt: generic transport
+ * @format: flags field indicating which parts of the address to render
+ */
+static char *xs_print_peer_address(struct rpc_xprt *xprt, enum rpc_display_format_t format)
+{
+ if (xprt->address_strings[format] != NULL)
+ return xprt->address_strings[format];
+ else
+ return "unprintable";
+}
+
+/**
* xs_set_port - reset the port number in the remote endpoint address
* @xprt: generic transport
* @port: new port number
@@ -1021,8 +1069,6 @@ static void xs_udp_connect_worker(void *
if (xprt->shutdown || !xprt_bound(xprt))
goto out;
- dprintk("RPC: xs_udp_connect_worker for xprt %p\n", xprt);
-
/* Start by resetting any existing state */
xs_close(xprt);
@@ -1036,6 +1082,9 @@ static void xs_udp_connect_worker(void *
goto out;
}
+ dprintk("RPC: worker connecting xprt %p to address: %s\n",
+ xprt, xs_print_peer_address(xprt, RPC_DISPLAY_ALL));
+
if (!xprt->inet) {
struct sock *sk = sock->sk;
@@ -1104,8 +1153,6 @@ static void xs_tcp_connect_worker(void *
if (xprt->shutdown || !xprt_bound(xprt))
goto out;
- dprintk("RPC: xs_tcp_connect_worker for xprt %p\n", xprt);
-
if (!xprt->sock) {
/* start from scratch */
if ((err = sock_create_kern(PF_INET, SOCK_STREAM, IPPROTO_TCP, &sock)) < 0) {
@@ -1121,6 +1168,9 @@ static void xs_tcp_connect_worker(void *
/* "close" the socket, preserving the local port */
xs_tcp_reuse_connection(xprt);
+ dprintk("RPC: worker connecting xprt %p to address: %s\n",
+ xprt, xs_print_peer_address(xprt, RPC_DISPLAY_ALL));
+
if (!xprt->inet) {
struct sock *sk = sock->sk;
@@ -1262,6 +1312,7 @@ static void xs_tcp_print_stats(struct rp
static struct rpc_xprt_ops xs_udp_ops = {
.set_buffer_size = xs_udp_set_buffer_size,
+ .print_addr = xs_print_peer_address,
.reserve_xprt = xprt_reserve_xprt_cong,
.release_xprt = xprt_release_xprt_cong,
.rpcbind = rpc_getport,
@@ -1279,6 +1330,7 @@ static struct rpc_xprt_ops xs_udp_ops =
};
static struct rpc_xprt_ops xs_tcp_ops = {
+ .print_addr = xs_print_peer_address,
.reserve_xprt = xprt_reserve_xprt,
.release_xprt = xs_tcp_release_xprt,
.rpcbind = rpc_getport,
@@ -1303,8 +1355,6 @@ int xs_setup_udp(struct rpc_xprt *xprt,
{
size_t slot_table_size;
- dprintk("RPC: setting up udp-ipv4 transport...\n");
-
xprt->max_reqs = xprt_udp_slot_table_entries;
slot_table_size = xprt->max_reqs * sizeof(xprt->slot[0]);
xprt->slot = kzalloc(slot_table_size, GFP_KERNEL);
@@ -1334,6 +1384,10 @@ int xs_setup_udp(struct rpc_xprt *xprt,
else
xprt_set_timeout(&xprt->timeout, 5, 5 * HZ);
+ xs_format_peer_addresses(xprt);
+ dprintk("RPC: set up transport to address %s\n",
+ xs_print_peer_address(xprt, RPC_DISPLAY_ALL));
+
return 0;
}
@@ -1347,8 +1401,6 @@ int xs_setup_tcp(struct rpc_xprt *xprt,
{
size_t slot_table_size;
- dprintk("RPC: setting up tcp-ipv4 transport...\n");
-
xprt->max_reqs = xprt_tcp_slot_table_entries;
slot_table_size = xprt->max_reqs * sizeof(xprt->slot[0]);
xprt->slot = kzalloc(slot_table_size, GFP_KERNEL);
@@ -1377,5 +1429,9 @@ int xs_setup_tcp(struct rpc_xprt *xprt,
else
xprt_set_timeout(&xprt->timeout, 2, 60 * HZ);
+ xs_format_peer_addresses(xprt);
+ dprintk("RPC: set up transport to address %s\n",
+ xs_print_peer_address(xprt, RPC_DISPLAY_ALL));
+
return 0;
}
The two-call API for creating a new RPC client is now obsolete.
Remove it.
Also, remove an unnecessary check to see whether the caller is capable of
using privileged network services. The kernel RPC client always uses a
privileged ephemeral port by default; callers are responsible for checking
the authority of users to make use of any RPC service, or for specifying
that a nonprivileged port is acceptable.
Test plan:
Repeated runs of Connectathon locking suite. Check network trace to ensure
correctness of NLM requests and replies.
Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/clnt.h | 7 ----
include/linux/sunrpc/xprt.h | 1 -
net/sunrpc/clnt.c | 42 +----------------------
net/sunrpc/sunrpc_syms.c | 3 --
net/sunrpc/xprt.c | 79 -------------------------------------------
net/sunrpc/xprtsock.c | 2 -
6 files changed, 1 insertions(+), 133 deletions(-)
diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
index 7817ba8..f6d1d64 100644
--- a/include/linux/sunrpc/clnt.h
+++ b/include/linux/sunrpc/clnt.h
@@ -91,13 +91,6 @@ struct rpc_procinfo {
#ifdef __KERNEL__
-struct rpc_clnt *rpc_create_client(struct rpc_xprt *xprt, char *servname,
- struct rpc_program *info,
- u32 version, rpc_authflavor_t authflavor);
-struct rpc_clnt *rpc_new_client(struct rpc_xprt *xprt, char *servname,
- struct rpc_program *info,
- u32 version, rpc_authflavor_t authflavor);
-
struct rpc_create_args {
int protocol;
struct sockaddr *address;
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index ceaaaa0..68e93ad 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -231,7 +231,6 @@ #ifdef __KERNEL__
/*
* Transport operations used by ULPs
*/
-struct rpc_xprt * xprt_create_proto(int proto, struct sockaddr_in *addr, struct rpc_timeout *to);
void xprt_set_timeout(struct rpc_timeout *to, unsigned int retr, unsigned long incr);
/*
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 9a1f63c..0a8a2c7 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -97,17 +97,7 @@ rpc_setup_pipedir(struct rpc_clnt *clnt,
}
}
-/*
- * Create an RPC client
- * FIXME: This should also take a flags argument (as in task->tk_flags).
- * It's called (among others) from pmap_create_client, which may in
- * turn be called by an async task. In this case, rpciod should not be
- * made to sleep too long.
- */
-struct rpc_clnt *
-rpc_new_client(struct rpc_xprt *xprt, char *servname,
- struct rpc_program *program, u32 vers,
- rpc_authflavor_t flavor)
+static struct rpc_clnt * rpc_new_client(struct rpc_xprt *xprt, char *servname, struct rpc_program *program, u32 vers, rpc_authflavor_t flavor)
{
struct rpc_version *version;
struct rpc_clnt *clnt = NULL;
@@ -254,36 +244,6 @@ struct rpc_clnt *rpc_create(struct rpc_c
}
EXPORT_SYMBOL(rpc_create);
-/**
- * Create an RPC client
- * @xprt - pointer to xprt struct
- * @servname - name of server
- * @info - rpc_program
- * @version - rpc_program version
- * @authflavor - rpc_auth flavour to use
- *
- * Creates an RPC client structure, then pings the server in order to
- * determine if it is up, and if it supports this program and version.
- *
- * This function should never be called by asynchronous tasks such as
- * the portmapper.
- */
-struct rpc_clnt *rpc_create_client(struct rpc_xprt *xprt, char *servname,
- struct rpc_program *info, u32 version, rpc_authflavor_t authflavor)
-{
- struct rpc_clnt *clnt;
- int err;
-
- clnt = rpc_new_client(xprt, servname, info, version, authflavor);
- if (IS_ERR(clnt))
- return clnt;
- err = rpc_ping(clnt, RPC_TASK_SOFT|RPC_TASK_NOINTR);
- if (err == 0)
- return clnt;
- rpc_shutdown_client(clnt);
- return ERR_PTR(err);
-}
-
/*
* This function clones the RPC client structure. It allows us to share the
* same transport while varying parameters such as the authentication
diff --git a/net/sunrpc/sunrpc_syms.c b/net/sunrpc/sunrpc_syms.c
index f38f939..26c0531 100644
--- a/net/sunrpc/sunrpc_syms.c
+++ b/net/sunrpc/sunrpc_syms.c
@@ -36,8 +36,6 @@ EXPORT_SYMBOL(rpc_wake_up_status);
EXPORT_SYMBOL(rpc_release_task);
/* RPC client functions */
-EXPORT_SYMBOL(rpc_create_client);
-EXPORT_SYMBOL(rpc_new_client);
EXPORT_SYMBOL(rpc_clone_client);
EXPORT_SYMBOL(rpc_bind_new_program);
EXPORT_SYMBOL(rpc_destroy_client);
@@ -57,7 +55,6 @@ EXPORT_SYMBOL(rpc_queue_upcall);
EXPORT_SYMBOL(rpc_mkpipe);
/* Client transport */
-EXPORT_SYMBOL(xprt_create_proto);
EXPORT_SYMBOL(xprt_set_timeout);
/* Client credential cache */
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index a8eb2fb..507a96a 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -966,85 +966,6 @@ struct rpc_xprt *xprt_create_transport(i
return xprt;
}
-static struct rpc_xprt *xprt_setup(int proto, struct sockaddr_in *ap, struct rpc_timeout *to)
-{
- int result;
- struct rpc_xprt *xprt;
- struct rpc_rqst *req;
-
- if ((xprt = kzalloc(sizeof(struct rpc_xprt), GFP_KERNEL)) == NULL)
- return ERR_PTR(-ENOMEM);
-
- memcpy(&xprt->addr, ap, sizeof(*ap));
- xprt->addrlen = sizeof(*ap);
-
- switch (proto) {
- case IPPROTO_UDP:
- result = xs_setup_udp(xprt, to);
- break;
- case IPPROTO_TCP:
- result = xs_setup_tcp(xprt, to);
- break;
- default:
- printk(KERN_ERR "RPC: unrecognized transport protocol: %d\n",
- proto);
- result = -EIO;
- break;
- }
- if (result) {
- kfree(xprt);
- return ERR_PTR(result);
- }
-
- spin_lock_init(&xprt->transport_lock);
- spin_lock_init(&xprt->reserve_lock);
-
- INIT_LIST_HEAD(&xprt->free);
- INIT_LIST_HEAD(&xprt->recv);
- INIT_WORK(&xprt->task_cleanup, xprt_autoclose, xprt);
- init_timer(&xprt->timer);
- xprt->timer.function = xprt_init_autodisconnect;
- xprt->timer.data = (unsigned long) xprt;
- xprt->last_used = jiffies;
- xprt->cwnd = RPC_INITCWND;
-
- rpc_init_wait_queue(&xprt->binding, "xprt_binding");
- rpc_init_wait_queue(&xprt->pending, "xprt_pending");
- rpc_init_wait_queue(&xprt->sending, "xprt_sending");
- rpc_init_wait_queue(&xprt->resend, "xprt_resend");
- rpc_init_priority_wait_queue(&xprt->backlog, "xprt_backlog");
-
- /* initialize free list */
- for (req = &xprt->slot[xprt->max_reqs-1]; req >= &xprt->slot[0]; req--)
- list_add(&req->rq_list, &xprt->free);
-
- xprt_init_xid(xprt);
-
- dprintk("RPC: created transport %p with %u slots\n", xprt,
- xprt->max_reqs);
-
- return xprt;
-}
-
-/**
- * xprt_create_proto - create an RPC client transport
- * @proto: requested transport protocol
- * @sap: remote peer's address
- * @to: timeout parameters for new transport
- *
- */
-struct rpc_xprt *xprt_create_proto(int proto, struct sockaddr_in *sap, struct rpc_timeout *to)
-{
- struct rpc_xprt *xprt;
-
- xprt = xprt_setup(proto, sap, to);
- if (IS_ERR(xprt))
- dprintk("RPC: xprt_create_proto failed\n");
- else
- dprintk("RPC: xprt_create_proto created xprt %p\n", xprt);
- return xprt;
-}
-
/**
* xprt_destroy - destroy an RPC transport, killing off all requests.
* @xprt: transport to destroy
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ababfe9..795e959 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1371,7 +1371,6 @@ int xs_setup_udp(struct rpc_xprt *xprt,
xprt->prot = IPPROTO_UDP;
xprt->tsh_size = 0;
- xprt->resvport = capable(CAP_NET_BIND_SERVICE) ? 1 : 0;
/* XXX: header size can vary due to auth type, IPv6, etc. */
xprt->max_payload = (1U << 16) - (MAX_HEADER << 3);
@@ -1418,7 +1417,6 @@ int xs_setup_tcp(struct rpc_xprt *xprt,
xprt->prot = IPPROTO_TCP;
xprt->tsh_size = sizeof(rpc_fraghdr) / sizeof(u32);
- xprt->resvport = capable(CAP_NET_BIND_SERVICE) ? 1 : 0;
xprt->max_payload = RPC_MAX_FRAGMENT_SIZE;
INIT_WORK(&xprt->connect_worker, xs_tcp_connect_worker, xprt);
On Wed, Aug 09, 2006 at 10:59:06AM -0400, Chuck Lever wrote:
> Provide an API for retrieving the remote peer address without allowing
> direct access to the rpc_xprt struct.
>
> Test-plan:
> Compile kernel with CONFIG_NFS enabled.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> include/linux/sunrpc/clnt.h | 1 +
> net/sunrpc/clnt.c | 21 +++++++++++++++++++++
> 2 files changed, 22 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
> index 2e68ac0..65196b0 100644
> --- a/include/linux/sunrpc/clnt.h
> +++ b/include/linux/sunrpc/clnt.h
> @@ -123,6 +123,7 @@ void rpc_setbufsize(struct rpc_clnt *,
> size_t rpc_max_payload(struct rpc_clnt *);
> void rpc_force_rebind(struct rpc_clnt *);
> int rpc_ping(struct rpc_clnt *clnt, int flags);
> +size_t rpc_peeraddr(struct rpc_clnt *, struct sockaddr *, size_t);
>
> /*
> * Helper function for NFSroot support
> diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
> index bff350e..da377eb 100644
> --- a/net/sunrpc/clnt.c
> +++ b/net/sunrpc/clnt.c
> @@ -536,6 +536,27 @@ rpc_call_setup(struct rpc_task *task, st
> task->tk_action = rpc_exit_task;
> }
>
> +/**
> + * rpc_peeraddr - extract remote peer address from clnt's xprt
> + * @clnt: RPC client structure
> + * @buf: target buffer
> + * @bufsize: length of target buffer
> + *
> + * Returns the number of bytes that are actually in the stored address.
> + */
> +size_t rpc_peeraddr(struct rpc_clnt *clnt, struct sockaddr *buf, size_t bufsize)
> +{
> + size_t bytes;
> + struct rpc_xprt *xprt = clnt->cl_xprt;
> +
> + bytes = sizeof(xprt->addr);
> + if (bytes > bufsize)
> + bytes = bufsize;
> + memcpy(buf, &clnt->cl_xprt->addr, bytes);
> + return sizeof(xprt->addr);
> +}
> +EXPORT_SYMBOL(rpc_peeraddr);
Shouldn't all these be _GPL exports? The transport switch is something
internal to the NFS client that shouldn't be seen as public at all and
could be changed at any time.
Convert NFS client mount logic to use rpc_create() instead of the old
xprt_create_proto/rpc_create_client API.
Test plan:
Mount stress tests.
Signed-off-by: Chuck Lever <[email protected]>
---
fs/nfs/mount_clnt.c | 29 +++++++++-----------
fs/nfs/super.c | 74 +++++++++++++++++++++++----------------------------
2 files changed, 46 insertions(+), 57 deletions(-)
diff --git a/fs/nfs/mount_clnt.c b/fs/nfs/mount_clnt.c
index 4127487..ef8a3a0 100644
--- a/fs/nfs/mount_clnt.c
+++ b/fs/nfs/mount_clnt.c
@@ -76,22 +76,19 @@ static struct rpc_clnt *
mnt_create(char *hostname, struct sockaddr_in *srvaddr, int version,
int protocol)
{
- struct rpc_xprt *xprt;
- struct rpc_clnt *clnt;
-
- xprt = xprt_create_proto(protocol, srvaddr, NULL);
- if (IS_ERR(xprt))
- return (struct rpc_clnt *)xprt;
-
- clnt = rpc_create_client(xprt, hostname,
- &mnt_program, version,
- RPC_AUTH_UNIX);
- if (!IS_ERR(clnt)) {
- clnt->cl_softrtry = 1;
- clnt->cl_oneshot = 1;
- clnt->cl_intr = 1;
- }
- return clnt;
+ struct rpc_create_args args = {
+ .protocol = protocol,
+ .address = (struct sockaddr *)srvaddr,
+ .addrsize = sizeof(*srvaddr),
+ .servername = hostname,
+ .program = &mnt_program,
+ .version = version,
+ .authflavor = RPC_AUTH_UNIX,
+ .flags = (RPC_CLNT_CREATE_ONESHOT |
+ RPC_CLNT_CREATE_INTR),
+ };
+
+ return rpc_create(&args);
}
/*
diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index e8a9bee..8daccf6 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -685,37 +685,29 @@ static void nfs_init_timeout_values(stru
static struct rpc_clnt *
nfs_create_client(struct nfs_server *server, const struct nfs_mount_data *data)
{
+ struct rpc_clnt *clnt;
struct rpc_timeout timeparms;
- struct rpc_xprt *xprt = NULL;
- struct rpc_clnt *clnt = NULL;
- int proto = (data->flags & NFS_MOUNT_TCP) ? IPPROTO_TCP : IPPROTO_UDP;
-
- nfs_init_timeout_values(&timeparms, proto, data->timeo, data->retrans);
+ struct rpc_create_args args = {
+ .protocol = ((data->flags & NFS_MOUNT_TCP) ?
+ IPPROTO_TCP : IPPROTO_UDP),
+ .address = (struct sockaddr *)&server->addr,
+ .addrsize = sizeof(server->addr),
+ .timeout = &timeparms,
+ .servername = server->hostname,
+ .program = &nfs_program,
+ .version = server->rpc_ops->version,
+ .authflavor = data->pseudoflavor,
+ };
+ nfs_init_timeout_values(&timeparms, args.protocol,
+ data->timeo, data->retrans);
server->retrans_timeo = timeparms.to_initval;
server->retrans_count = timeparms.to_retries;
- /* create transport and client */
- xprt = xprt_create_proto(proto, &server->addr, &timeparms);
- if (IS_ERR(xprt)) {
- dprintk("%s: cannot create RPC transport. Error = %ld\n",
- __FUNCTION__, PTR_ERR(xprt));
- return (struct rpc_clnt *)xprt;
- }
- clnt = rpc_create_client(xprt, server->hostname, &nfs_program,
- server->rpc_ops->version, data->pseudoflavor);
- if (IS_ERR(clnt)) {
+ clnt = rpc_create(&args);
+ if (IS_ERR(clnt))
dprintk("%s: cannot create RPC client. Error = %ld\n",
- __FUNCTION__, PTR_ERR(xprt));
- goto out_fail;
- }
-
- clnt->cl_intr = 1;
- clnt->cl_softrtry = 1;
-
- return clnt;
-
-out_fail:
+ __FUNCTION__, PTR_ERR(clnt));
return clnt;
}
@@ -1122,11 +1114,14 @@ static int nfs_clone_nfs_sb(struct file_
}
#ifdef CONFIG_NFS_V4
+/*
+ * NB: nfs4_kill_super takes care of reaping the rpc_clnt if something
+ * here fails.
+ */
static struct rpc_clnt *nfs4_create_client(struct nfs_server *server,
struct rpc_timeout *timeparms, int proto, rpc_authflavor_t flavor)
{
struct nfs4_client *clp;
- struct rpc_xprt *xprt = NULL;
struct rpc_clnt *clnt = NULL;
int err = -EIO;
@@ -1136,21 +1131,20 @@ static struct rpc_clnt *nfs4_create_clie
return ERR_PTR(err);
}
- /* Now create transport and client */
down_write(&clp->cl_sem);
if (IS_ERR(clp->cl_rpcclient)) {
- xprt = xprt_create_proto(proto, &server->addr, timeparms);
- if (IS_ERR(xprt)) {
- up_write(&clp->cl_sem);
- err = PTR_ERR(xprt);
- dprintk("%s: cannot create RPC transport. Error = %d\n",
- __FUNCTION__, err);
- goto out_fail;
- }
- /* Bind to a reserved port! */
- xprt->resvport = 1;
- clnt = rpc_create_client(xprt, server->hostname, &nfs_program,
- server->rpc_ops->version, flavor);
+ struct rpc_create_args args = {
+ .protocol = proto,
+ .address = (struct sockaddr *)&server->addr,
+ .addrsize = sizeof(server->addr),
+ .timeout = timeparms,
+ .servername = server->hostname,
+ .program = &nfs_program,
+ .version = server->rpc_ops->version,
+ .authflavor = flavor,
+ };
+
+ clnt = rpc_create(&args);
if (IS_ERR(clnt)) {
up_write(&clp->cl_sem);
err = PTR_ERR(clnt);
@@ -1158,8 +1152,6 @@ static struct rpc_clnt *nfs4_create_clie
__FUNCTION__, err);
goto out_fail;
}
- clnt->cl_intr = 1;
- clnt->cl_softrtry = 1;
clp->cl_rpcclient = clnt;
memcpy(clp->cl_ipaddr, server->ip_addr, sizeof(clp->cl_ipaddr));
nfs_idmap_new(clp);
On Wed, 2006-08-09 at 10:58 -0400, Chuck Lever wrote:
> Hide the contents and format of xprt->addr by eliminating direct uses
> of the xprt->addr.sin_port field. This change is required to support
> alternate RPC host address formats (eg IPv6).
>
> Test-plan:
> Destructive testing (unplugging the network temporarily). Repeated runs of
> Connectathon locking suite with UDP and TCP.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> include/linux/sunrpc/xprt.h | 16 ++++++++++++++++
> net/sunrpc/clnt.c | 10 +++++-----
> net/sunrpc/xprt.c | 6 +++++-
> net/sunrpc/xprtsock.c | 16 ++++++++++++----
> 4 files changed, 38 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
> index 3a0cca2..e65474f 100644
> --- a/include/linux/sunrpc/xprt.h
> +++ b/include/linux/sunrpc/xprt.h
> @@ -269,6 +269,7 @@ #define XPRT_LOCKED (0)
> #define XPRT_CONNECTED (1)
> #define XPRT_CONNECTING (2)
> #define XPRT_CLOSE_WAIT (3)
> +#define XPRT_BOUND (4)
>
> static inline void xprt_set_connected(struct rpc_xprt *xprt)
> {
> @@ -312,6 +313,21 @@ static inline int xprt_test_and_set_conn
> return test_and_set_bit(XPRT_CONNECTING, &xprt->state);
> }
>
> +static inline void xprt_set_bound(struct rpc_xprt *xprt)
> +{
> + set_bit(XPRT_BOUND, &xprt->state);
> +}
> +
> +static inline int xprt_bound(struct rpc_xprt *xprt)
> +{
> + return test_bit(XPRT_BOUND, &xprt->state);
> +}
> +
> +static inline void xprt_clear_bound(struct rpc_xprt *xprt)
> +{
> + clear_bit(XPRT_BOUND, &xprt->state);
> +}
> +
> #endif /* __KERNEL__*/
>
> #endif /* _LINUX_SUNRPC_XPRT_H */
> diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
> index d6409e7..4f353dd 100644
> --- a/net/sunrpc/clnt.c
> +++ b/net/sunrpc/clnt.c
> @@ -148,7 +148,6 @@ rpc_new_client(struct rpc_xprt *xprt, ch
> clnt->cl_maxproc = version->nrprocs;
> clnt->cl_protname = program->name;
> clnt->cl_pmap = &clnt->cl_pmap_default;
> - clnt->cl_port = xprt->addr.sin_port;
> clnt->cl_prog = program->number;
> clnt->cl_vers = version->number;
> clnt->cl_prot = xprt->prot;
> @@ -156,7 +155,7 @@ rpc_new_client(struct rpc_xprt *xprt, ch
> clnt->cl_metrics = rpc_alloc_iostats(clnt);
> rpc_init_wait_queue(&clnt->cl_pmap_default.pm_bindwait, "bindwait");
>
> - if (!clnt->cl_port)
> + if (!xprt_bound(clnt->cl_xprt))
> clnt->cl_autobind = 1;
>
> clnt->cl_rtt = &clnt->cl_rtt_default;
> @@ -573,7 +572,7 @@ EXPORT_SYMBOL(rpc_max_payload);
> void rpc_force_rebind(struct rpc_clnt *clnt)
> {
> if (clnt->cl_autobind)
> - clnt->cl_port = 0;
> + xprt_clear_bound(clnt->cl_xprt);
> }
> EXPORT_SYMBOL(rpc_force_rebind);
>
> @@ -785,14 +784,15 @@ static void
> call_bind(struct rpc_task *task)
> {
> struct rpc_clnt *clnt = task->tk_client;
> + struct rpc_xprt *xprt = task->tk_xprt;
>
> dprintk("RPC: %4d call_bind (status %d)\n",
> task->tk_pid, task->tk_status);
>
> task->tk_action = call_connect;
> - if (!clnt->cl_port) {
> + if (!xprt_bound(xprt)) {
> task->tk_action = call_bind_status;
> - task->tk_timeout = task->tk_xprt->bind_timeout;
> + task->tk_timeout = xprt->bind_timeout;
> rpc_getport(task, clnt);
> }
> }
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index e8c2bc4..10ba1f6 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -534,7 +534,11 @@ void xprt_connect(struct rpc_task *task)
> dprintk("RPC: %4d xprt_connect xprt %p %s connected\n", task->tk_pid,
> xprt, (xprt_connected(xprt) ? "is" : "is not"));
>
> - if (!xprt->addr.sin_port) {
> + if (xprt->shutdown) {
> + task->tk_status = -EIO;
> + return;
> + }
Why are you reinstating the test for xprt->shutdown? It was removed
because it is pretty much useless there. Any task should already have
been signalled to exit by rpc_shutdown_client()...
> + if (!xprt_bound(xprt)) {
> task->tk_status = -EIO;
> return;
> }
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index 441bd53..43b59c2 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -974,6 +974,8 @@ static void xs_set_port(struct rpc_xprt
> {
> dprintk("RPC: setting port for xprt %p to %u\n", xprt, port);
> xprt->addr.sin_port = htons(port);
> + if (port != 0)
> + xprt_set_bound(xprt);
Hmm... This looks odd. If port == 0, why not exit immediately?
Furthermore, what if the port is already bound: is it correct to set it
again? IOW: should it be conditional on test_and_set_bit()?
Cheers,
Trond
> }
>
> static int xs_bindresvport(struct rpc_xprt *xprt, struct socket *sock)
> @@ -1016,7 +1018,7 @@ static void xs_udp_connect_worker(void *
> struct socket *sock = xprt->sock;
> int err, status = -EIO;
>
> - if (xprt->shutdown || xprt->addr.sin_port == 0)
> + if (xprt->shutdown || !xprt_bound(xprt))
> goto out;
>
> dprintk("RPC: xs_udp_connect_worker for xprt %p\n", xprt);
> @@ -1099,7 +1101,7 @@ static void xs_tcp_connect_worker(void *
> struct socket *sock = xprt->sock;
> int err, status = -EIO;
>
> - if (xprt->shutdown || xprt->addr.sin_port == 0)
> + if (xprt->shutdown || !xprt_bound(xprt))
> goto out;
>
> dprintk("RPC: xs_tcp_connect_worker for xprt %p\n", xprt);
> @@ -1307,8 +1309,11 @@ int xs_setup_udp(struct rpc_xprt *xprt,
> if (xprt->slot == NULL)
> return -ENOMEM;
>
> - xprt->prot = IPPROTO_UDP;
> + if (ntohs(xprt->addr.sin_port) != 0)
> + xprt_set_bound(xprt);
> xprt->port = xs_get_random_port();
> +
> + xprt->prot = IPPROTO_UDP;
> xprt->tsh_size = 0;
> xprt->resvport = capable(CAP_NET_BIND_SERVICE) ? 1 : 0;
> /* XXX: header size can vary due to auth type, IPv6, etc. */
> @@ -1348,8 +1353,11 @@ int xs_setup_tcp(struct rpc_xprt *xprt,
> if (xprt->slot == NULL)
> return -ENOMEM;
>
> - xprt->prot = IPPROTO_TCP;
> + if (ntohs(xprt->addr.sin_port) != 0)
> + xprt_set_bound(xprt);
> xprt->port = xs_get_random_port();
> +
> + xprt->prot = IPPROTO_TCP;
> xprt->tsh_size = sizeof(rpc_fraghdr) / sizeof(u32);
> xprt->resvport = capable(CAP_NET_BIND_SERVICE) ? 1 : 0;
> xprt->max_payload = RPC_MAX_FRAGMENT_SIZE;
-------------------------------------------------------------------------
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs
On 8/9/06, Trond Myklebust <[email protected]> wrote:
> On Wed, 2006-08-09 at 10:58 -0400, Chuck Lever wrote:
> > diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> > index e8c2bc4..10ba1f6 100644
> > --- a/net/sunrpc/xprt.c
> > +++ b/net/sunrpc/xprt.c
> > @@ -534,7 +534,11 @@ void xprt_connect(struct rpc_task *task)
> > dprintk("RPC: %4d xprt_connect xprt %p %s connected\n", task->tk_pid,
> > xprt, (xprt_connected(xprt) ? "is" : "is not"));
> >
> > - if (!xprt->addr.sin_port) {
> > + if (xprt->shutdown) {
> > + task->tk_status = -EIO;
> > + return;
> > + }
>
> Why are you reinstating the test for xprt->shutdown? It was removed
> because it is pretty much useless there. Any task should already have
> been signalled to exit by rpc_shutdown_client()...
That's probably a silent git merge error. I'll remove that.
> > + if (!xprt_bound(xprt)) {
> > task->tk_status = -EIO;
> > return;
> > }
> > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > index 441bd53..43b59c2 100644
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -974,6 +974,8 @@ static void xs_set_port(struct rpc_xprt
> > {
> > dprintk("RPC: setting port for xprt %p to %u\n", xprt, port);
> > xprt->addr.sin_port = htons(port);
> > + if (port != 0)
> > + xprt_set_bound(xprt);
>
> Hmm... This looks odd. If port == 0, why not exit immediately?
>
> Furthermore, what if the port is already bound: is it correct to set it
> again? IOW: should it be conditional on test_and_set_bit()?
The portmapper can set the port number to zero in the case of an
rpcbind error. In that case, the transport should remain unbound.
--
"We who cut mere stones must always be envisioning cathedrals"
-- Quarry worker's creed
Has no chance of applying 'cos it is based on the 2.6.18-rc tree. Please
pull from the NFS git tree.
Cheers,
Trond
On Wed, 2006-08-09 at 10:59 -0400, Chuck Lever wrote:
> Convert NFS client mount logic to use rpc_create() instead of the old
> xprt_create_proto/rpc_create_client API.
>
> Test plan:
> Mount stress tests.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> fs/nfs/mount_clnt.c | 29 +++++++++-----------
> fs/nfs/super.c | 74 +++++++++++++++++++++++----------------------------
> 2 files changed, 46 insertions(+), 57 deletions(-)
>
> diff --git a/fs/nfs/mount_clnt.c b/fs/nfs/mount_clnt.c
> index 4127487..ef8a3a0 100644
> --- a/fs/nfs/mount_clnt.c
> +++ b/fs/nfs/mount_clnt.c
> @@ -76,22 +76,19 @@ static struct rpc_clnt *
> mnt_create(char *hostname, struct sockaddr_in *srvaddr, int version,
> int protocol)
> {
> - struct rpc_xprt *xprt;
> - struct rpc_clnt *clnt;
> -
> - xprt = xprt_create_proto(protocol, srvaddr, NULL);
> - if (IS_ERR(xprt))
> - return (struct rpc_clnt *)xprt;
> -
> - clnt = rpc_create_client(xprt, hostname,
> - &mnt_program, version,
> - RPC_AUTH_UNIX);
> - if (!IS_ERR(clnt)) {
> - clnt->cl_softrtry = 1;
> - clnt->cl_oneshot = 1;
> - clnt->cl_intr = 1;
> - }
> - return clnt;
> + struct rpc_create_args args = {
> + .protocol = protocol,
> + .address = (struct sockaddr *)srvaddr,
> + .addrsize = sizeof(*srvaddr),
> + .servername = hostname,
> + .program = &mnt_program,
> + .version = version,
> + .authflavor = RPC_AUTH_UNIX,
> + .flags = (RPC_CLNT_CREATE_ONESHOT |
> + RPC_CLNT_INTR),
> + };
> +
> + return rpc_create(&args);
> }
>
> /*
> diff --git a/fs/nfs/super.c b/fs/nfs/super.c
> index e8a9bee..8daccf6 100644
> --- a/fs/nfs/super.c
> +++ b/fs/nfs/super.c
> @@ -685,37 +685,29 @@ static void nfs_init_timeout_values(stru
> static struct rpc_clnt *
> nfs_create_client(struct nfs_server *server, const struct nfs_mount_data *data)
> {
> + struct rpc_clnt *clnt;
> struct rpc_timeout timeparms;
> - struct rpc_xprt *xprt = NULL;
> - struct rpc_clnt *clnt = NULL;
> - int proto = (data->flags & NFS_MOUNT_TCP) ? IPPROTO_TCP : IPPROTO_UDP;
> -
> - nfs_init_timeout_values(&timeparms, proto, data->timeo, data->retrans);
> + struct rpc_create_args args = {
> + .protocol = ((data->flags & NFS_MOUNT_TCP) ?
> + IPPROTO_TCP : IPPROTO_UDP),
> + .address = (struct sockaddr *)&server->addr,
> + .addrsize = sizeof(server->addr),
> + .timeout = &timeparms,
> + .servername = server->hostname,
> + .program = &nfs_program,
> + .version = server->rpc_ops->version,
> + .authflavor = data->pseudoflavor,
> + };
>
> + nfs_init_timeout_values(&timeparms, args.protocol,
> + data->timeo, data->retrans);
> server->retrans_timeo = timeparms.to_initval;
> server->retrans_count = timeparms.to_retries;
>
> - /* create transport and client */
> - xprt = xprt_create_proto(proto, &server->addr, &timeparms);
> - if (IS_ERR(xprt)) {
> - dprintk("%s: cannot create RPC transport. Error = %ld\n",
> - __FUNCTION__, PTR_ERR(xprt));
> - return (struct rpc_clnt *)xprt;
> - }
> - clnt = rpc_create_client(xprt, server->hostname, &nfs_program,
> - server->rpc_ops->version, data->pseudoflavor);
> - if (IS_ERR(clnt)) {
> + clnt = rpc_create(&args);
> + if (IS_ERR(clnt))
> dprintk("%s: cannot create RPC client. Error = %ld\n",
> - __FUNCTION__, PTR_ERR(xprt));
> - goto out_fail;
> - }
> -
> - clnt->cl_intr = 1;
> - clnt->cl_softrtry = 1;
> -
> - return clnt;
> -
> -out_fail:
> + __FUNCTION__, PTR_ERR(clnt));
> return clnt;
> }
>
> @@ -1122,11 +1114,14 @@ static int nfs_clone_nfs_sb(struct file_
> }
>
> #ifdef CONFIG_NFS_V4
> +/*
> + * NB: nfs4_kill_super takes care of reaping the rpc_clnt if something
> + * here fails.
> + */
> static struct rpc_clnt *nfs4_create_client(struct nfs_server *server,
> struct rpc_timeout *timeparms, int proto, rpc_authflavor_t flavor)
> {
> struct nfs4_client *clp;
> - struct rpc_xprt *xprt = NULL;
> struct rpc_clnt *clnt = NULL;
> int err = -EIO;
>
> @@ -1136,21 +1131,20 @@ static struct rpc_clnt *nfs4_create_clie
> return ERR_PTR(err);
> }
>
> - /* Now create transport and client */
> down_write(&clp->cl_sem);
> if (IS_ERR(clp->cl_rpcclient)) {
> - xprt = xprt_create_proto(proto, &server->addr, timeparms);
> - if (IS_ERR(xprt)) {
> - up_write(&clp->cl_sem);
> - err = PTR_ERR(xprt);
> - dprintk("%s: cannot create RPC transport. Error = %d\n",
> - __FUNCTION__, err);
> - goto out_fail;
> - }
> - /* Bind to a reserved port! */
> - xprt->resvport = 1;
> - clnt = rpc_create_client(xprt, server->hostname, &nfs_program,
> - server->rpc_ops->version, flavor);
> + struct rpc_create_args args = {
> + .protocol = proto,
> + .address = (struct sockaddr *)&server->addr,
> + .addrsize = sizeof(server->addr),
> + .timeout = timeparms,
> + .servername = server->hostname,
> + .program = &nfs_program,
> + .version = server->rpc_ops->version,
> + .authflavor = flavor,
> + };
> +
> + clnt = rpc_create(&args);
> if (IS_ERR(clnt)) {
> up_write(&clp->cl_sem);
> err = PTR_ERR(clnt);
> @@ -1158,8 +1152,6 @@ static struct rpc_clnt *nfs4_create_clie
> __FUNCTION__, err);
> goto out_fail;
> }
> - clnt->cl_intr = 1;
> - clnt->cl_softrtry = 1;
> clp->cl_rpcclient = clnt;
> memcpy(clp->cl_ipaddr, server->ip_addr, sizeof(clp->cl_ipaddr));
> nfs_idmap_new(clp);
On Wed, 2006-08-09 at 10:59 -0400, Chuck Lever wrote:
> If the LOOKUP or GETATTR in nfs_instantiate fail, nfs_instantiate will do a
> d_drop before returning. But some callers already do a d_drop in the case
> of an error return. Make certain we do only one d_drop in all error paths.
Hmm... Calling d_drop() twice is at worst an inefficiency. It is not
strictly speaking a bug.
> This bug was introduced because over time, the symlink proc API diverged
> slightly from the create/mkdir/mknod proc API. To prevent other bugs of
> this type, change the symlink proc API to be more like create/mkdir/mknod
> and move the nfs_instantiate call into the symlink proc routines so it is
> used in exactly the same way for create, mkdir, mknod, and symlink.
>
> Test plan:
> Connectathon, all versions of NFS.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> fs/nfs/dir.c | 16 ++++------------
> fs/nfs/nfs3proc.c | 26 ++++++++++++++++----------
> fs/nfs/nfs4proc.c | 31 ++++++++++++++++---------------
> fs/nfs/proc.c | 29 +++++++++++++++++++++--------
> include/linux/nfs_xdr.h | 5 ++---
> 5 files changed, 59 insertions(+), 48 deletions(-)
>
> diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
> index 428d963..ff4e852 100644
> --- a/fs/nfs/dir.c
> +++ b/fs/nfs/dir.c
> @@ -1143,23 +1143,20 @@ int nfs_instantiate(struct dentry *dentr
> struct inode *dir = dentry->d_parent->d_inode;
> error = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, fhandle, fattr);
> if (error)
> - goto out_err;
> + return error;
> }
> if (!(fattr->valid & NFS_ATTR_FATTR)) {
> struct nfs_server *server = NFS_SB(dentry->d_sb);
> error = server->rpc_ops->getattr(server, fhandle, fattr);
> if (error < 0)
> - goto out_err;
> + return error;
> }
> inode = nfs_fhget(dentry->d_sb, fhandle, fattr);
> error = PTR_ERR(inode);
> if (IS_ERR(inode))
> - goto out_err;
> + return error;
> d_instantiate(dentry, inode);
> return 0;
> -out_err:
> - d_drop(dentry);
> - return error;
> }
>
> /*
> @@ -1444,8 +1441,6 @@ static int
> nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
> {
> struct iattr attr;
> - struct nfs_fattr sym_attr;
> - struct nfs_fh sym_fh;
> struct qstr qsymname;
> int error;
>
> @@ -1469,12 +1464,9 @@ #endif
>
> lock_kernel();
> nfs_begin_data_update(dir);
> - error = NFS_PROTO(dir)->symlink(dir, &dentry->d_name, &qsymname,
> - &attr, &sym_fh, &sym_attr);
> + error = NFS_PROTO(dir)->symlink(dir, dentry, &qsymname, &attr);
> nfs_end_data_update(dir);
> if (!error)
> - error = nfs_instantiate(dentry, &sym_fh, &sym_attr);
> - else
> d_drop(dentry);
> unlock_kernel();
> return error;
> diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
> index 7143b1f..15eac8d 100644
> --- a/fs/nfs/nfs3proc.c
> +++ b/fs/nfs/nfs3proc.c
> @@ -544,23 +544,23 @@ nfs3_proc_link(struct inode *inode, stru
> }
>
> static int
> -nfs3_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
> - struct iattr *sattr, struct nfs_fh *fhandle,
> - struct nfs_fattr *fattr)
> +nfs3_proc_symlink(struct inode *dir, struct dentry *dentry, struct qstr *path,
> + struct iattr *sattr)
> {
> - struct nfs_fattr dir_attr;
> + struct nfs_fh fhandle;
> + struct nfs_fattr fattr, dir_attr;
> struct nfs3_symlinkargs arg = {
> .fromfh = NFS_FH(dir),
> - .fromname = name->name,
> - .fromlen = name->len,
> + .fromname = dentry->d_name.name,
> + .fromlen = dentry->d_name.len,
> .topath = path->name,
> .tolen = path->len,
> .sattr = sattr
> };
> struct nfs3_diropres res = {
> .dir_attr = &dir_attr,
> - .fh = fhandle,
> - .fattr = fattr
> + .fh = &fhandle,
> + .fattr = &fattr
> };
> struct rpc_message msg = {
> .rpc_proc = &nfs3_procedures[NFS3PROC_SYMLINK],
> @@ -571,11 +571,17 @@ nfs3_proc_symlink(struct inode *dir, str
>
> if (path->len > NFS3_MAXPATHLEN)
> return -ENAMETOOLONG;
> - dprintk("NFS call symlink %s -> %s\n", name->name, path->name);
> +
> + dprintk("NFS call symlink %s -> %s\n", dentry->d_name.name,
> + path->name);
> nfs_fattr_init(&dir_attr);
> - nfs_fattr_init(fattr);
> + nfs_fattr_init(&fattr);
> status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
> nfs_post_op_update_inode(dir, &dir_attr);
> + if (status != 0)
> + goto out;
> + status = nfs_instantiate(dentry, &fhandle, &fattr);
> +out:
> dprintk("NFS reply symlink: %d\n", status);
> return status;
> }
> diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
> index e6ee97f..370b5ab 100644
> --- a/fs/nfs/nfs4proc.c
> +++ b/fs/nfs/nfs4proc.c
> @@ -2089,24 +2089,24 @@ static int nfs4_proc_link(struct inode *
> return err;
> }
>
> -static int _nfs4_proc_symlink(struct inode *dir, struct qstr *name,
> - struct qstr *path, struct iattr *sattr, struct nfs_fh *fhandle,
> - struct nfs_fattr *fattr)
> +static int _nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
> + struct qstr *path, struct iattr *sattr)
> {
> struct nfs_server *server = NFS_SERVER(dir);
> - struct nfs_fattr dir_fattr;
> + struct nfs_fh fhandle;
> + struct nfs_fattr fattr, dir_fattr;
> struct nfs4_create_arg arg = {
> .dir_fh = NFS_FH(dir),
> .server = server,
> - .name = name,
> + .name = &dentry->d_name,
> .attrs = sattr,
> .ftype = NF4LNK,
> .bitmask = server->attr_bitmask,
> };
> struct nfs4_create_res res = {
> .server = server,
> - .fh = fhandle,
> - .fattr = fattr,
> + .fh = &fhandle,
> + .fattr = &fattr,
> .dir_fattr = &dir_fattr,
> };
> struct rpc_message msg = {
> @@ -2118,27 +2118,28 @@ static int _nfs4_proc_symlink(struct ino
>
> if (path->len > NFS4_MAXPATHLEN)
> return -ENAMETOOLONG;
> +
> arg.u.symlink = path;
> - nfs_fattr_init(fattr);
> + nfs_fattr_init(&fattr);
> nfs_fattr_init(&dir_fattr);
>
> status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
> - if (!status)
> + if (!status) {
> update_changeattr(dir, &res.dir_cinfo);
> - nfs_post_op_update_inode(dir, res.dir_fattr);
> + nfs_post_op_update_inode(dir, res.dir_fattr);
> + status = nfs_instantiate(dentry, &fhandle, &fattr);
> + }
> return status;
> }
>
> -static int nfs4_proc_symlink(struct inode *dir, struct qstr *name,
> - struct qstr *path, struct iattr *sattr, struct nfs_fh *fhandle,
> - struct nfs_fattr *fattr)
> +static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
> + struct qstr *path, struct iattr *sattr)
> {
> struct nfs4_exception exception = { };
> int err;
> do {
> err = nfs4_handle_exception(NFS_SERVER(dir),
> - _nfs4_proc_symlink(dir, name, path, sattr,
> - fhandle, fattr),
> + _nfs4_proc_symlink(dir, dentry, path, sattr),
> &exception);
> } while (exception.retry);
> return err;
> diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c
> index b3899ea..7512f71 100644
> --- a/fs/nfs/proc.c
> +++ b/fs/nfs/proc.c
> @@ -425,14 +425,15 @@ nfs_proc_link(struct inode *inode, struc
> }
>
> static int
> -nfs_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
> - struct iattr *sattr, struct nfs_fh *fhandle,
> - struct nfs_fattr *fattr)
> +nfs_proc_symlink(struct inode *dir, struct dentry *dentry, struct qstr *path,
> + struct iattr *sattr)
> {
> + struct nfs_fh fhandle;
> + struct nfs_fattr fattr;
> struct nfs_symlinkargs arg = {
> .fromfh = NFS_FH(dir),
> - .fromname = name->name,
> - .fromlen = name->len,
> + .fromname = dentry->d_name.name,
> + .fromlen = dentry->d_name.len,
> .topath = path->name,
> .tolen = path->len,
> .sattr = sattr
> @@ -445,11 +446,23 @@ nfs_proc_symlink(struct inode *dir, stru
>
> if (path->len > NFS2_MAXPATHLEN)
> return -ENAMETOOLONG;
> - dprintk("NFS call symlink %s -> %s\n", name->name, path->name);
> - nfs_fattr_init(fattr);
> - fhandle->size = 0;
> +
> + dprintk("NFS call symlink %s -> %s\n", dentry->d_name.name,
> + path->name);
> status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
> nfs_mark_for_revalidate(dir);
> +
> + /*
> + * V2 SYMLINK requests don't return any attributes. Setting the
> + * filehandle size to zero indicates to nfs_instantiate that it
> + * should fill in the data with a LOOKUP call on the wire.
> + */
> + if (status == 0) {
> + nfs_fattr_init(&fattr);
> + fhandle.size = 0;
> + status = nfs_instantiate(dentry, &fhandle, &fattr);
> + }
> +
> dprintk("NFS reply symlink: %d\n", status);
> return status;
> }
> diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
> index 0c1093c..cfabcd1 100644
> --- a/include/linux/nfs_xdr.h
> +++ b/include/linux/nfs_xdr.h
> @@ -790,9 +790,8 @@ struct nfs_rpc_ops {
> int (*rename) (struct inode *, struct qstr *,
> struct inode *, struct qstr *);
> int (*link) (struct inode *, struct inode *, struct qstr *);
> - int (*symlink) (struct inode *, struct qstr *, struct qstr *,
> - struct iattr *, struct nfs_fh *,
> - struct nfs_fattr *);
> + int (*symlink) (struct inode *, struct dentry *, struct qstr *,
> + struct iattr *);
> int (*mkdir) (struct inode *, struct dentry *, struct iattr *);
> int (*rmdir) (struct inode *, struct qstr *);
> int (*readdir) (struct dentry *, struct rpc_cred *,
On Wed, 2006-08-09 at 11:27 -0400, Chuck Lever wrote:
> The portmapper can set the port number to zero in the case of an
> rpcbind error. In that case, the transport should remain unbound.
Validating the results of the rpcbind operation is a different issue.
You can test for port == 0 first, then do a test_and_set_bit()...
Cheers,
Trond
On Thu, 2006-08-10 at 00:59, Chuck Lever wrote:
> In the early days of NFS, there was no duplicate reply cache on the server.
> Thus retransmitted non-idempotent requests often found that the request had
> already completed on the server. To avoid passing an unanticipated return
> code to unsuspecting applications, NFS clients would often shunt error
> codes that implied the request had been retried but already completed.
>
> On modern NFS clients, it is safe to remove such checks.
I'm not sure why you have such faith in servers' repcaches. The Linux
knfsd repcache as currently coded is fundamentally useless at modern
call rates. If this error case isn't needed anymore it's probably
because we have fewer lost calls thanks to using TCP instead of UDP.
Greg.
--
Greg Banks, R&D Software Engineer, SGI Australian Software Group.
I don't speak for SGI.
Greg Banks wrote:
> On Thu, 2006-08-10 at 00:59, Chuck Lever wrote:
>
>> In the early days of NFS, there was no duplicate reply cache on the server.
>> Thus retransmitted non-idempotent requests often found that the request had
>> already completed on the server. To avoid passing an unanticipated return
>> code to unsuspecting applications, NFS clients would often shunt error
>> codes that implied the request had been retried but already completed.
>>
>> On modern NFS clients, it is safe to remove such checks.
>>
>
> I'm not sure why you have such faith in servers' repcaches. The Linux
> knfsd repcache as currently coded is fundamentally useless at modern
> call rates. If this error case isn't needed anymore it's probably
> because we have fewer lost calls thanks to using TCP instead of UDP.
>
It is true that the duplicate request cache in Linux, and probably most
other systems, definitely including Solaris, is undersized. However,
most clients, definitely including Solaris, do not contain special
code. They simply depend upon the server to work correctly.
I suspect that it is the use of TCP which has minimized the number of
complaints about failing non-idempotent requests. Given that TCP is
becoming more and more widespread, and that TCP is the only valid
transport for NFSv4, despite the Linux implementation, it seems to
make sense to minimize the overhead on the client.
ps
ps
-------------------------------------------------------------------------
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs