2015-05-26 17:48:31

by Chuck Lever III

Subject: [PATCH v2 00/10] NFS/RDMA server patches for 4.2

Hi Bruce-

I'd like to see these merged into 4.2. This patch series includes:

- Don't use a non-byte-swapped value as an array index
- Fix use of copy_to_user() in a kernel/user space API
- Annotate memory allocations
- A server-side pre-requisite for increasing r/wsize on the client
- A server-side pre-requisite for client RPC/RDMA bi-direction
- Merged-together xprtrdma and svcrdma kernel modules
- Miscellaneous clean-ups

You can find these in my git repo in the "nfsd-rdma-for-4.2" topic
branch. See:

git://git.linux-nfs.org/projects/cel/cel-2.6.git


Changes since v1:
- Rebased on v4.1-rc5
- Add patch to merge xprtrdma.ko and svcrdma.ko into one module

---

Chuck Lever (10):
rpcrdma: Merge svcrdma and xprtrdma modules into one
SUNRPC: Clean up bc_send()
svcrdma: Add backward direction service for RPC/RDMA transport
svcrdma: Add a separate "max data segs" macro for svcrdma
svcrdma: Replace GFP_KERNEL in a loop with GFP_NOFAIL
svcrdma: Keep rpcrdma_msg fields in network byte-order
svcrdma: Remove svc_rdma_xdr_decode_deferred_req()
SUNRPC: Move EXPORT_SYMBOL for svc_process
svcrdma: Add missing access_ok() call in svc_rdma.c
svcrdma: Fix byte-swapping in svc_rdma_sendto.c


include/linux/sunrpc/bc_xprt.h | 1
include/linux/sunrpc/svc_rdma.h | 17 +++-
include/linux/sunrpc/xprt.h | 1
net/sunrpc/Kconfig | 28 ++----
net/sunrpc/Makefile | 5 -
net/sunrpc/bc_svc.c | 63 --------------
net/sunrpc/svc.c | 35 ++++++--
net/sunrpc/xprtrdma/Makefile | 14 +--
net/sunrpc/xprtrdma/module.c | 46 ++++++++++
net/sunrpc/xprtrdma/svc_rdma.c | 22 +++--
net/sunrpc/xprtrdma/svc_rdma_marshal.c | 140 ++++++++----------------------
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 2
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 16 ++-
net/sunrpc/xprtrdma/svc_rdma_transport.c | 95 ++++++++++++++------
net/sunrpc/xprtrdma/transport.c | 13 ---
net/sunrpc/xprtrdma/xprt_rdma.h | 11 +-
16 files changed, 238 insertions(+), 271 deletions(-)
delete mode 100644 net/sunrpc/bc_svc.c
create mode 100644 net/sunrpc/xprtrdma/module.c

--
Chuck Lever


2015-05-26 17:48:51

by Chuck Lever III

Subject: [PATCH v2 02/10] svcrdma: Add missing access_ok() call in svc_rdma.c

Ensure a proper memory access check is done by read_reset_stat(),
and fix the following compiler warning:

In file included from linux-2.6/include/net/checksum.h:25,
from linux-2.6/include/linux/skbuff.h:31,
from linux-2.6/include/linux/icmpv6.h:4,
from linux-2.6/include/linux/ipv6.h:64,
from linux-2.6/include/net/ipv6.h:16,
from linux-2.6/include/linux/sunrpc/clnt.h:27,
from linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:47:
In function ‘copy_to_user’,
inlined from ‘read_reset_stat’ at
linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:113:
linux-2.6/arch/x86/include/asm/uaccess.h:735: warning:
call to ‘__copy_to_user_overflow’ declared with attribute warning:
copy_to_user() buffer size is not provably correct

Signed-off-by: Chuck Lever <[email protected]>
---

net/sunrpc/xprtrdma/svc_rdma.c | 8 ++++++--
1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
index c1b6270..8eedb60 100644
--- a/net/sunrpc/xprtrdma/svc_rdma.c
+++ b/net/sunrpc/xprtrdma/svc_rdma.c
@@ -98,7 +98,11 @@ static int read_reset_stat(struct ctl_table *table, int write,
else {
char str_buf[32];
char *data;
- int len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
+ int len;
+
+ if (!access_ok(VERIFY_WRITE, buffer, *lenp))
+ return -EFAULT;
+ len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
if (len >= 32)
return -EFAULT;
len = strlen(str_buf);
@@ -110,7 +114,7 @@ static int read_reset_stat(struct ctl_table *table, int write,
len -= *ppos;
if (len > *lenp)
len = *lenp;
- if (len && copy_to_user(buffer, str_buf, len))
+ if (len && __copy_to_user(buffer, str_buf, len))
return -EFAULT;
*lenp = len;
*ppos += len;


2015-05-26 17:48:40

by Chuck Lever III

Subject: [PATCH v2 01/10] svcrdma: Fix byte-swapping in svc_rdma_sendto.c

In send_write_chunks(), we have:

for (xdr_off = rqstp->rq_res.head[0].iov_len, chunk_no = 0;
xfer_len && chunk_no < arg_ary->wc_nchunks;
chunk_no++) {
. . .
}

Note that arg_ary->wc_nchunks is in network byte-order. For the
comparison to work correctly, both values have to be in native
byte-order.

In send_reply_chunks(), we have:

write_len = min(xfer_len, htonl(ch->rs_length));

xfer_len is in native byte-order, and ch->rs_length is in
network byte-order. be32_to_cpu() is the correct byte swap
for ch->rs_length.

As an additional clean up, replace ntohl() with be32_to_cpu() in
a few other places.
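
To see why the raw comparison misbehaves, here is a user-space sketch
(not part of the patch), with htonl/ntohl standing in for the kernel
byte-order helpers:

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
	unsigned int wc_nchunks = htonl(2);	/* chunk count as it arrives off the wire */

	/* Wrong: on a little-endian host this prints 33554432, so a loop
	 * bounded by the raw value would run far past the chunk array. */
	printf("raw bound:    %u\n", wc_nchunks);

	/* Right: convert to native byte-order before comparing. */
	printf("native bound: %u\n", ntohl(wc_nchunks));
	return 0;
}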

This appears to address a problem where large rsize workloads hang
while using PHYSICAL memory registration. I suspect that is the only
registration mode that uses more than one chunk element.

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=248
Signed-off-by: Chuck Lever <[email protected]>
---

net/sunrpc/xprtrdma/svc_rdma_sendto.c | 14 ++++++++------
1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 7de33d1..109e967 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -240,6 +240,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
u32 xdr_off;
int chunk_off;
int chunk_no;
+ int nchunks;
struct rpcrdma_write_array *arg_ary;
struct rpcrdma_write_array *res_ary;
int ret;
@@ -251,14 +252,15 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
&rdma_resp->rm_body.rm_chunks[1];

/* Write chunks start at the pagelist */
+ nchunks = be32_to_cpu(arg_ary->wc_nchunks);
for (xdr_off = rqstp->rq_res.head[0].iov_len, chunk_no = 0;
- xfer_len && chunk_no < arg_ary->wc_nchunks;
+ xfer_len && chunk_no < nchunks;
chunk_no++) {
struct rpcrdma_segment *arg_ch;
u64 rs_offset;

arg_ch = &arg_ary->wc_array[chunk_no].wc_target;
- write_len = min(xfer_len, ntohl(arg_ch->rs_length));
+ write_len = min(xfer_len, be32_to_cpu(arg_ch->rs_length));

/* Prepare the response chunk given the length actually
* written */
@@ -270,7 +272,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
chunk_off = 0;
while (write_len) {
ret = send_write(xprt, rqstp,
- ntohl(arg_ch->rs_handle),
+ be32_to_cpu(arg_ch->rs_handle),
rs_offset + chunk_off,
xdr_off,
write_len,
@@ -318,13 +320,13 @@ static int send_reply_chunks(struct svcxprt_rdma *xprt,
&rdma_resp->rm_body.rm_chunks[2];

/* xdr offset starts at RPC message */
- nchunks = ntohl(arg_ary->wc_nchunks);
+ nchunks = be32_to_cpu(arg_ary->wc_nchunks);
for (xdr_off = 0, chunk_no = 0;
xfer_len && chunk_no < nchunks;
chunk_no++) {
u64 rs_offset;
ch = &arg_ary->wc_array[chunk_no].wc_target;
- write_len = min(xfer_len, htonl(ch->rs_length));
+ write_len = min(xfer_len, be32_to_cpu(ch->rs_length));

/* Prepare the reply chunk given the length actually
* written */
@@ -335,7 +337,7 @@ static int send_reply_chunks(struct svcxprt_rdma *xprt,
chunk_off = 0;
while (write_len) {
ret = send_write(xprt, rqstp,
- ntohl(ch->rs_handle),
+ be32_to_cpu(ch->rs_handle),
rs_offset + chunk_off,
xdr_off,
write_len,


2015-05-26 17:49:01

by Chuck Lever III

Subject: [PATCH v2 03/10] SUNRPC: Move EXPORT_SYMBOL for svc_process

Clean up.

Signed-off-by: Chuck Lever <[email protected]>
---

net/sunrpc/svc.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 78974e4..852ae60 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -1290,7 +1290,6 @@ err_bad:
svc_putnl(resv, ntohl(rpc_stat));
goto sendit;
}
-EXPORT_SYMBOL_GPL(svc_process);

/*
* Process the RPC request.
@@ -1338,6 +1337,7 @@ out_drop:
svc_drop(rqstp);
return 0;
}
+EXPORT_SYMBOL_GPL(svc_process);

#if defined(CONFIG_SUNRPC_BACKCHANNEL)
/*


2015-05-26 17:49:10

by Chuck Lever III

Subject: [PATCH v2 04/10] svcrdma: Remove svc_rdma_xdr_decode_deferred_req()

svc_rdma_xdr_decode_deferred_req() indexes an array with an
un-byte-swapped value off the wire. Fortunately this function
isn't used anywhere, so simply remove it.

Signed-off-by: Chuck Lever <[email protected]>
---

include/linux/sunrpc/svc_rdma.h | 1 -
net/sunrpc/xprtrdma/svc_rdma_marshal.c | 56 --------------------------------
2 files changed, 0 insertions(+), 57 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index df8edf8..8ad9b6d 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -182,7 +182,6 @@ struct svcxprt_rdma {

/* svc_rdma_marshal.c */
extern int svc_rdma_xdr_decode_req(struct rpcrdma_msg **, struct svc_rqst *);
-extern int svc_rdma_xdr_decode_deferred_req(struct svc_rqst *);
extern int svc_rdma_xdr_encode_error(struct svcxprt_rdma *,
struct rpcrdma_msg *,
enum rpcrdma_errcode, u32 *);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_marshal.c b/net/sunrpc/xprtrdma/svc_rdma_marshal.c
index b681855..45e22c9 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_marshal.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_marshal.c
@@ -211,62 +211,6 @@ int svc_rdma_xdr_decode_req(struct rpcrdma_msg **rdma_req,
return hdr_len;
}

-int svc_rdma_xdr_decode_deferred_req(struct svc_rqst *rqstp)
-{
- struct rpcrdma_msg *rmsgp = NULL;
- struct rpcrdma_read_chunk *ch;
- struct rpcrdma_write_array *ary;
- u32 *va;
- u32 hdrlen;
-
- dprintk("svcrdma: processing deferred RDMA header on rqstp=%p\n",
- rqstp);
- rmsgp = (struct rpcrdma_msg *)rqstp->rq_arg.head[0].iov_base;
-
- /* Pull in the extra for the padded case and bump our pointer */
- if (rmsgp->rm_type == RDMA_MSGP) {
- va = &rmsgp->rm_body.rm_padded.rm_pempty[4];
- rqstp->rq_arg.head[0].iov_base = va;
- hdrlen = (u32)((unsigned long)va - (unsigned long)rmsgp);
- rqstp->rq_arg.head[0].iov_len -= hdrlen;
- return hdrlen;
- }
-
- /*
- * Skip all chunks to find RPC msg. These were previously processed
- */
- va = &rmsgp->rm_body.rm_chunks[0];
-
- /* Skip read-list */
- for (ch = (struct rpcrdma_read_chunk *)va;
- ch->rc_discrim != xdr_zero; ch++);
- va = (u32 *)&ch->rc_position;
-
- /* Skip write-list */
- ary = (struct rpcrdma_write_array *)va;
- if (ary->wc_discrim == xdr_zero)
- va = (u32 *)&ary->wc_nchunks;
- else
- /*
- * rs_length is the 2nd 4B field in wc_target and taking its
- * address skips the list terminator
- */
- va = (u32 *)&ary->wc_array[ary->wc_nchunks].wc_target.rs_length;
-
- /* Skip reply-array */
- ary = (struct rpcrdma_write_array *)va;
- if (ary->wc_discrim == xdr_zero)
- va = (u32 *)&ary->wc_nchunks;
- else
- va = (u32 *)&ary->wc_array[ary->wc_nchunks];
-
- rqstp->rq_arg.head[0].iov_base = va;
- hdrlen = (unsigned long)va - (unsigned long)rmsgp;
- rqstp->rq_arg.head[0].iov_len -= hdrlen;
-
- return hdrlen;
-}
-
int svc_rdma_xdr_encode_error(struct svcxprt_rdma *xprt,
struct rpcrdma_msg *rmsgp,
enum rpcrdma_errcode err, u32 *va)


2015-05-26 17:49:19

by Chuck Lever III

Subject: [PATCH v2 05/10] svcrdma: Keep rpcrdma_msg fields in network byte-order

Fields in struct rpcrdma_msg are __be32. Don't byte-swap these
fields when decoding RPC calls and then swap them back for the
reply. For the most part, they can be left alone.
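
As an aside (sketch only, not part of the patch): leaving the fields
typed __be32 means sparse ("make C=1") will flag any use that forgets
the conversion, assuming the usual <linux/types.h> annotations. For
example:

#include <linux/types.h>	/* __be32, bool */
#include <asm/byteorder.h>	/* cpu_to_be32 */

struct example_hdr {			/* stand-in for struct rpcrdma_msg */
	__be32 rm_vers;
};

static bool example_version_ok(const struct example_hdr *p)
{
	/* OK: both sides of the comparison are __be32 */
	return p->rm_vers == cpu_to_be32(RPCRDMA_VERSION);
	/* By contrast, "p->rm_vers == RPCRDMA_VERSION" would draw a
	 * "restricted __be32 degrades to integer" warning from sparse. */
}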

Signed-off-by: Chuck Lever <[email protected]>
---

include/linux/sunrpc/svc_rdma.h | 2 -
net/sunrpc/xprtrdma/svc_rdma_marshal.c | 84 ++++++++++++++----------------
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 2 -
net/sunrpc/xprtrdma/svc_rdma_transport.c | 2 -
4 files changed, 42 insertions(+), 48 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 8ad9b6d..c03ca0a 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -184,7 +184,7 @@ struct svcxprt_rdma {
extern int svc_rdma_xdr_decode_req(struct rpcrdma_msg **, struct svc_rqst *);
extern int svc_rdma_xdr_encode_error(struct svcxprt_rdma *,
struct rpcrdma_msg *,
- enum rpcrdma_errcode, u32 *);
+ enum rpcrdma_errcode, __be32 *);
extern void svc_rdma_xdr_encode_write_list(struct rpcrdma_msg *, int);
extern void svc_rdma_xdr_encode_reply_array(struct rpcrdma_write_array *, int);
extern void svc_rdma_xdr_encode_array_chunk(struct rpcrdma_write_array *, int,
diff --git a/net/sunrpc/xprtrdma/svc_rdma_marshal.c b/net/sunrpc/xprtrdma/svc_rdma_marshal.c
index 45e22c9..e2fca76 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_marshal.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_marshal.c
@@ -50,12 +50,12 @@
/*
* Decodes a read chunk list. The expected format is as follows:
* descrim : xdr_one
- * position : u32 offset into XDR stream
- * handle : u32 RKEY
+ * position : __be32 offset into XDR stream
+ * handle : __be32 RKEY
* . . .
* end-of-list: xdr_zero
*/
-static u32 *decode_read_list(u32 *va, u32 *vaend)
+static __be32 *decode_read_list(__be32 *va, __be32 *vaend)
{
struct rpcrdma_read_chunk *ch = (struct rpcrdma_read_chunk *)va;

@@ -67,20 +67,20 @@ static u32 *decode_read_list(u32 *va, u32 *vaend)
}
ch++;
}
- return (u32 *)&ch->rc_position;
+ return &ch->rc_position;
}

/*
* Decodes a write chunk list. The expected format is as follows:
* descrim : xdr_one
* nchunks : <count>
- * handle : u32 RKEY ---+
- * length : u32 <len of segment> |
+ * handle : __be32 RKEY ---+
+ * length : __be32 <len of segment> |
* offset : remove va + <count>
* . . . |
* ---+
*/
-static u32 *decode_write_list(u32 *va, u32 *vaend)
+static __be32 *decode_write_list(__be32 *va, __be32 *vaend)
{
unsigned long start, end;
int nchunks;
@@ -90,14 +90,14 @@ static u32 *decode_write_list(u32 *va, u32 *vaend)

/* Check for not write-array */
if (ary->wc_discrim == xdr_zero)
- return (u32 *)&ary->wc_nchunks;
+ return &ary->wc_nchunks;

if ((unsigned long)ary + sizeof(struct rpcrdma_write_array) >
(unsigned long)vaend) {
dprintk("svcrdma: ary=%p, vaend=%p\n", ary, vaend);
return NULL;
}
- nchunks = ntohl(ary->wc_nchunks);
+ nchunks = be32_to_cpu(ary->wc_nchunks);

start = (unsigned long)&ary->wc_array[0];
end = (unsigned long)vaend;
@@ -112,10 +112,10 @@ static u32 *decode_write_list(u32 *va, u32 *vaend)
* rs_length is the 2nd 4B field in wc_target and taking its
* address skips the list terminator
*/
- return (u32 *)&ary->wc_array[nchunks].wc_target.rs_length;
+ return &ary->wc_array[nchunks].wc_target.rs_length;
}

-static u32 *decode_reply_array(u32 *va, u32 *vaend)
+static __be32 *decode_reply_array(__be32 *va, __be32 *vaend)
{
unsigned long start, end;
int nchunks;
@@ -124,14 +124,14 @@ static u32 *decode_reply_array(u32 *va, u32 *vaend)

/* Check for no reply-array */
if (ary->wc_discrim == xdr_zero)
- return (u32 *)&ary->wc_nchunks;
+ return &ary->wc_nchunks;

if ((unsigned long)ary + sizeof(struct rpcrdma_write_array) >
(unsigned long)vaend) {
dprintk("svcrdma: ary=%p, vaend=%p\n", ary, vaend);
return NULL;
}
- nchunks = ntohl(ary->wc_nchunks);
+ nchunks = be32_to_cpu(ary->wc_nchunks);

start = (unsigned long)&ary->wc_array[0];
end = (unsigned long)vaend;
@@ -142,15 +142,14 @@ static u32 *decode_reply_array(u32 *va, u32 *vaend)
ary, nchunks, vaend);
return NULL;
}
- return (u32 *)&ary->wc_array[nchunks];
+ return (__be32 *)&ary->wc_array[nchunks];
}

int svc_rdma_xdr_decode_req(struct rpcrdma_msg **rdma_req,
struct svc_rqst *rqstp)
{
struct rpcrdma_msg *rmsgp = NULL;
- u32 *va;
- u32 *vaend;
+ __be32 *va, *vaend;
u32 hdr_len;

rmsgp = (struct rpcrdma_msg *)rqstp->rq_arg.head[0].iov_base;
@@ -162,22 +161,17 @@ int svc_rdma_xdr_decode_req(struct rpcrdma_msg **rdma_req,
return -EINVAL;
}

- /* Decode the header */
- rmsgp->rm_xid = ntohl(rmsgp->rm_xid);
- rmsgp->rm_vers = ntohl(rmsgp->rm_vers);
- rmsgp->rm_credit = ntohl(rmsgp->rm_credit);
- rmsgp->rm_type = ntohl(rmsgp->rm_type);
-
- if (rmsgp->rm_vers != RPCRDMA_VERSION)
+ if (rmsgp->rm_vers != rpcrdma_version)
return -ENOSYS;

/* Pull in the extra for the padded case and bump our pointer */
- if (rmsgp->rm_type == RDMA_MSGP) {
+ if (rmsgp->rm_type == rdma_msgp) {
int hdrlen;
+
rmsgp->rm_body.rm_padded.rm_align =
- ntohl(rmsgp->rm_body.rm_padded.rm_align);
+ be32_to_cpu(rmsgp->rm_body.rm_padded.rm_align);
rmsgp->rm_body.rm_padded.rm_thresh =
- ntohl(rmsgp->rm_body.rm_padded.rm_thresh);
+ be32_to_cpu(rmsgp->rm_body.rm_padded.rm_thresh);

va = &rmsgp->rm_body.rm_padded.rm_pempty[4];
rqstp->rq_arg.head[0].iov_base = va;
@@ -192,7 +186,7 @@ int svc_rdma_xdr_decode_req(struct rpcrdma_msg **rdma_req,
* chunk list and a reply chunk list.
*/
va = &rmsgp->rm_body.rm_chunks[0];
- vaend = (u32 *)((unsigned long)rmsgp + rqstp->rq_arg.len);
+ vaend = (__be32 *)((unsigned long)rmsgp + rqstp->rq_arg.len);
va = decode_read_list(va, vaend);
if (!va)
return -EINVAL;
@@ -213,18 +207,18 @@ int svc_rdma_xdr_decode_req(struct rpcrdma_msg **rdma_req,

int svc_rdma_xdr_encode_error(struct svcxprt_rdma *xprt,
struct rpcrdma_msg *rmsgp,
- enum rpcrdma_errcode err, u32 *va)
+ enum rpcrdma_errcode err, __be32 *va)
{
- u32 *startp = va;
+ __be32 *startp = va;

- *va++ = htonl(rmsgp->rm_xid);
- *va++ = htonl(rmsgp->rm_vers);
- *va++ = htonl(xprt->sc_max_requests);
- *va++ = htonl(RDMA_ERROR);
- *va++ = htonl(err);
+ *va++ = rmsgp->rm_xid;
+ *va++ = rmsgp->rm_vers;
+ *va++ = cpu_to_be32(xprt->sc_max_requests);
+ *va++ = rdma_error;
+ *va++ = cpu_to_be32(err);
if (err == ERR_VERS) {
- *va++ = htonl(RPCRDMA_VERSION);
- *va++ = htonl(RPCRDMA_VERSION);
+ *va++ = rpcrdma_version;
+ *va++ = rpcrdma_version;
}

return (int)((unsigned long)va - (unsigned long)startp);
@@ -241,7 +235,7 @@ int svc_rdma_xdr_get_reply_hdr_len(struct rpcrdma_msg *rmsgp)
&rmsgp->rm_body.rm_chunks[1];
if (wr_ary->wc_discrim)
wr_ary = (struct rpcrdma_write_array *)
- &wr_ary->wc_array[ntohl(wr_ary->wc_nchunks)].
+ &wr_ary->wc_array[be32_to_cpu(wr_ary->wc_nchunks)].
wc_target.rs_length;
else
wr_ary = (struct rpcrdma_write_array *)
@@ -250,7 +244,7 @@ int svc_rdma_xdr_get_reply_hdr_len(struct rpcrdma_msg *rmsgp)
/* skip reply array */
if (wr_ary->wc_discrim)
wr_ary = (struct rpcrdma_write_array *)
- &wr_ary->wc_array[ntohl(wr_ary->wc_nchunks)];
+ &wr_ary->wc_array[be32_to_cpu(wr_ary->wc_nchunks)];
else
wr_ary = (struct rpcrdma_write_array *)
&wr_ary->wc_nchunks;
@@ -269,7 +263,7 @@ void svc_rdma_xdr_encode_write_list(struct rpcrdma_msg *rmsgp, int chunks)
ary = (struct rpcrdma_write_array *)
&rmsgp->rm_body.rm_chunks[1];
ary->wc_discrim = xdr_one;
- ary->wc_nchunks = htonl(chunks);
+ ary->wc_nchunks = cpu_to_be32(chunks);

/* write-list terminator */
ary->wc_array[chunks].wc_target.rs_handle = xdr_zero;
@@ -282,7 +276,7 @@ void svc_rdma_xdr_encode_reply_array(struct rpcrdma_write_array *ary,
int chunks)
{
ary->wc_discrim = xdr_one;
- ary->wc_nchunks = htonl(chunks);
+ ary->wc_nchunks = cpu_to_be32(chunks);
}

void svc_rdma_xdr_encode_array_chunk(struct rpcrdma_write_array *ary,
@@ -294,7 +288,7 @@ void svc_rdma_xdr_encode_array_chunk(struct rpcrdma_write_array *ary,
struct rpcrdma_segment *seg = &ary->wc_array[chunk_no].wc_target;
seg->rs_handle = rs_handle;
seg->rs_offset = rs_offset;
- seg->rs_length = htonl(write_len);
+ seg->rs_length = cpu_to_be32(write_len);
}

void svc_rdma_xdr_encode_reply_header(struct svcxprt_rdma *xprt,
@@ -302,10 +296,10 @@ void svc_rdma_xdr_encode_reply_header(struct svcxprt_rdma *xprt,
struct rpcrdma_msg *rdma_resp,
enum rpcrdma_proc rdma_type)
{
- rdma_resp->rm_xid = htonl(rdma_argp->rm_xid);
- rdma_resp->rm_vers = htonl(rdma_argp->rm_vers);
- rdma_resp->rm_credit = htonl(xprt->sc_max_requests);
- rdma_resp->rm_type = htonl(rdma_type);
+ rdma_resp->rm_xid = rdma_argp->rm_xid;
+ rdma_resp->rm_vers = rdma_argp->rm_vers;
+ rdma_resp->rm_credit = cpu_to_be32(xprt->sc_max_requests);
+ rdma_resp->rm_type = cpu_to_be32(rdma_type);

/* Encode <nul> chunks lists */
rdma_resp->rm_body.rm_chunks[0] = xdr_zero;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index f9f13a3..ac93ce0 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -85,7 +85,7 @@ static void rdma_build_arg_xdr(struct svc_rqst *rqstp,

/* RDMA_NOMSG: RDMA READ data should land just after RDMA RECV data */
rmsgp = (struct rpcrdma_msg *)rqstp->rq_arg.head[0].iov_base;
- if (be32_to_cpu(rmsgp->rm_type) == RDMA_NOMSG)
+ if (rmsgp->rm_type == rdma_nomsg)
rqstp->rq_arg.pages = &rqstp->rq_pages[0];
else
rqstp->rq_arg.pages = &rqstp->rq_pages[1];
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index f609c1c..be08493 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -1319,7 +1319,7 @@ void svc_rdma_send_error(struct svcxprt_rdma *xprt, struct rpcrdma_msg *rmsgp,
struct ib_send_wr err_wr;
struct page *p;
struct svc_rdma_op_ctxt *ctxt;
- u32 *va;
+ __be32 *va;
int length;
int ret;



2015-05-26 17:49:29

by Chuck Lever III

Subject: [PATCH v2 06/10] svcrdma: Replace GFP_KERNEL in a loop with GFP_NOFAIL

At LSF/MM 2015, it was requested that call sites which retry
GFP_KERNEL allocations in a loop should instead be annotated with
__GFP_NOFAIL.
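
In sketch form, the conversion applied by the hunks below looks like
this (names are illustrative):

/* before: open-coded retry loop */
while ((p = kmem_cache_alloc(cachep, GFP_KERNEL)) == NULL)
	schedule_timeout_uninterruptible(msecs_to_jiffies(500));

/* after: let the slab and page allocators retry internally */
p = kmem_cache_alloc(cachep, GFP_KERNEL | __GFP_NOFAIL);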

Signed-off-by: Chuck Lever <[email protected]>
---

include/linux/sunrpc/svc_rdma.h | 1 -
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 2 +-
net/sunrpc/xprtrdma/svc_rdma_transport.c | 32 ++++++------------------------
3 files changed, 7 insertions(+), 28 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index c03ca0a..d26384b 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -211,7 +211,6 @@ extern int svc_rdma_sendto(struct svc_rqst *);
extern int svc_rdma_send(struct svcxprt_rdma *, struct ib_send_wr *);
extern void svc_rdma_send_error(struct svcxprt_rdma *, struct rpcrdma_msg *,
enum rpcrdma_errcode);
-struct page *svc_rdma_get_page(void);
extern int svc_rdma_post_recv(struct svcxprt_rdma *);
extern int svc_rdma_create_listen(struct svc_serv *, int, struct sockaddr *);
extern struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 109e967..d25cd43 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -517,7 +517,7 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
inline_bytes = rqstp->rq_res.len;

/* Create the RDMA response header */
- res_page = svc_rdma_get_page();
+ res_page = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
rdma_resp = page_address(res_page);
reply_ary = svc_rdma_get_reply_array(rdma_argp);
if (reply_ary)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index be08493..1ed4740 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -99,12 +99,8 @@ struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
{
struct svc_rdma_op_ctxt *ctxt;

- while (1) {
- ctxt = kmem_cache_alloc(svc_rdma_ctxt_cachep, GFP_KERNEL);
- if (ctxt)
- break;
- schedule_timeout_uninterruptible(msecs_to_jiffies(500));
- }
+ ctxt = kmem_cache_alloc(svc_rdma_ctxt_cachep,
+ GFP_KERNEL | __GFP_NOFAIL);
ctxt->xprt = xprt;
INIT_LIST_HEAD(&ctxt->dto_q);
ctxt->count = 0;
@@ -156,12 +152,8 @@ void svc_rdma_put_context(struct svc_rdma_op_ctxt *ctxt, int free_pages)
struct svc_rdma_req_map *svc_rdma_get_req_map(void)
{
struct svc_rdma_req_map *map;
- while (1) {
- map = kmem_cache_alloc(svc_rdma_map_cachep, GFP_KERNEL);
- if (map)
- break;
- schedule_timeout_uninterruptible(msecs_to_jiffies(500));
- }
+ map = kmem_cache_alloc(svc_rdma_map_cachep,
+ GFP_KERNEL | __GFP_NOFAIL);
map->count = 0;
return map;
}
@@ -490,18 +482,6 @@ static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *serv,
return cma_xprt;
}

-struct page *svc_rdma_get_page(void)
-{
- struct page *page;
-
- while ((page = alloc_page(GFP_KERNEL)) == NULL) {
- /* If we can't get memory, wait a bit and try again */
- printk(KERN_INFO "svcrdma: out of memory...retrying in 1s\n");
- schedule_timeout_uninterruptible(msecs_to_jiffies(1000));
- }
- return page;
-}
-
int svc_rdma_post_recv(struct svcxprt_rdma *xprt)
{
struct ib_recv_wr recv_wr, *bad_recv_wr;
@@ -520,7 +500,7 @@ int svc_rdma_post_recv(struct svcxprt_rdma *xprt)
pr_err("svcrdma: Too many sges (%d)\n", sge_no);
goto err_put_ctxt;
}
- page = svc_rdma_get_page();
+ page = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
ctxt->pages[sge_no] = page;
pa = ib_dma_map_page(xprt->sc_cm_id->device,
page, 0, PAGE_SIZE,
@@ -1323,7 +1303,7 @@ void svc_rdma_send_error(struct svcxprt_rdma *xprt, struct rpcrdma_msg *rmsgp,
int length;
int ret;

- p = svc_rdma_get_page();
+ p = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
va = page_address(p);

/* XDR encode error */


2015-05-26 17:49:39

by Chuck Lever III

Subject: [PATCH v2 07/10] svcrdma: Add a separate "max data segs" macro for svcrdma

The server and client maximums are architecturally independent.
Allow changing one without affecting the other.
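For example, with a hypothetical 4KB PAGE_SIZE, the new server-side
macro permits 64 << 12 = 256KB per RPC, clamped to RPCSVC_MAXPAYLOAD
when that value is smaller.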

Signed-off-by: Chuck Lever <[email protected]>
---

include/linux/sunrpc/svc_rdma.h | 7 +++++++
net/sunrpc/xprtrdma/svc_rdma_transport.c | 2 +-
net/sunrpc/xprtrdma/xprt_rdma.h | 6 ------
3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index d26384b..cb94ee4 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -172,6 +172,13 @@ struct svcxprt_rdma {
#define RDMAXPRT_SQ_PENDING 2
#define RDMAXPRT_CONN_PENDING 3

+#define RPCRDMA_MAX_SVC_SEGS (64) /* server max scatter/gather */
+#if RPCSVC_MAXPAYLOAD < (RPCRDMA_MAX_SVC_SEGS << PAGE_SHIFT)
+#define RPCRDMA_MAXPAYLOAD RPCSVC_MAXPAYLOAD
+#else
+#define RPCRDMA_MAXPAYLOAD (RPCRDMA_MAX_SVC_SEGS << PAGE_SHIFT)
+#endif
+
#define RPCRDMA_LISTEN_BACKLOG 10
/* The default ORD value is based on two outstanding full-size writes with a
* page size of 4k, or 32k * 2 ops / 4k = 16 outstanding RDMA_READ. */
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 1ed4740..3b4c2ff 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -91,7 +91,7 @@ struct svc_xprt_class svc_rdma_class = {
.xcl_name = "rdma",
.xcl_owner = THIS_MODULE,
.xcl_ops = &svc_rdma_ops,
- .xcl_max_payload = RPCSVC_MAXPAYLOAD_RDMA,
+ .xcl_max_payload = RPCRDMA_MAXPAYLOAD,
.xcl_ident = XPRT_TRANSPORT_RDMA,
};

diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 78e0b8b..e60907b 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -487,10 +487,4 @@ extern struct kmem_cache *svc_rdma_ctxt_cachep;
/* Workqueue created in svc_rdma.c */
extern struct workqueue_struct *svc_rdma_wq;

-#if RPCSVC_MAXPAYLOAD < (RPCRDMA_MAX_DATA_SEGS << PAGE_SHIFT)
-#define RPCSVC_MAXPAYLOAD_RDMA RPCSVC_MAXPAYLOAD
-#else
-#define RPCSVC_MAXPAYLOAD_RDMA (RPCRDMA_MAX_DATA_SEGS << PAGE_SHIFT)
-#endif
-
#endif /* _LINUX_SUNRPC_XPRT_RDMA_H */


2015-05-26 17:49:48

by Chuck Lever III

Subject: [PATCH v2 08/10] svcrdma: Add backward direction service for RPC/RDMA transport

Introduce prerequisite infrastructure needed to handle bi-directional
RPC/RDMA on the client side.

On NFSv4.1 mount points, the client uses this transport endpoint to
receive backward direction calls and route replies back to the
NFS server.

Signed-off-by: Chuck Lever <[email protected]>
---

include/linux/sunrpc/svc_rdma.h | 6 +++
include/linux/sunrpc/xprt.h | 1 +
net/sunrpc/xprtrdma/svc_rdma.c | 6 +++
net/sunrpc/xprtrdma/svc_rdma_transport.c | 59 ++++++++++++++++++++++++++++++
4 files changed, 71 insertions(+), 1 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index cb94ee4..0e7234d 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -231,9 +231,13 @@ extern void svc_rdma_put_frmr(struct svcxprt_rdma *,
struct svc_rdma_fastreg_mr *);
extern void svc_sq_reap(struct svcxprt_rdma *);
extern void svc_rq_reap(struct svcxprt_rdma *);
-extern struct svc_xprt_class svc_rdma_class;
extern void svc_rdma_prep_reply_hdr(struct svc_rqst *);

+extern struct svc_xprt_class svc_rdma_class;
+#ifdef CONFIG_SUNRPC_BACKCHANNEL
+extern struct svc_xprt_class svc_rdma_bc_class;
+#endif
+
/* svc_rdma.c */
extern int svc_rdma_init(void);
extern void svc_rdma_cleanup(void);
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index 8b93ef5..693f9f1 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -150,6 +150,7 @@ enum xprt_transports {
XPRT_TRANSPORT_TCP = IPPROTO_TCP,
XPRT_TRANSPORT_BC_TCP = IPPROTO_TCP | XPRT_TRANSPORT_BC,
XPRT_TRANSPORT_RDMA = 256,
+ XPRT_TRANSPORT_BC_RDMA = XPRT_TRANSPORT_RDMA | XPRT_TRANSPORT_BC,
XPRT_TRANSPORT_LOCAL = 257,
};

diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
index 8eedb60..7a18ae4 100644
--- a/net/sunrpc/xprtrdma/svc_rdma.c
+++ b/net/sunrpc/xprtrdma/svc_rdma.c
@@ -244,6 +244,9 @@ void svc_rdma_cleanup(void)
unregister_sysctl_table(svcrdma_table_header);
svcrdma_table_header = NULL;
}
+#ifdef CONFIG_SUNRPC_BACKCHANNEL
+ svc_unreg_xprt_class(&svc_rdma_bc_class);
+#endif
svc_unreg_xprt_class(&svc_rdma_class);
kmem_cache_destroy(svc_rdma_map_cachep);
kmem_cache_destroy(svc_rdma_ctxt_cachep);
@@ -291,6 +294,9 @@ int svc_rdma_init(void)

/* Register RDMA with the SVC transport switch */
svc_reg_xprt_class(&svc_rdma_class);
+#ifdef CONFIG_SUNRPC_BACKCHANNEL
+ svc_reg_xprt_class(&svc_rdma_bc_class);
+#endif
return 0;
err1:
kmem_cache_destroy(svc_rdma_map_cachep);
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 3b4c2ff..9b8bccd 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -56,6 +56,7 @@

#define RPCDBG_FACILITY RPCDBG_SVCXPRT

+static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *, int);
static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
struct net *net,
struct sockaddr *sa, int salen,
@@ -95,6 +96,64 @@ struct svc_xprt_class svc_rdma_class = {
.xcl_ident = XPRT_TRANSPORT_RDMA,
};

+#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *, struct net *,
+ struct sockaddr *, int, int);
+static void svc_rdma_bc_detach(struct svc_xprt *);
+static void svc_rdma_bc_free(struct svc_xprt *);
+
+static struct svc_xprt_ops svc_rdma_bc_ops = {
+ .xpo_create = svc_rdma_bc_create,
+ .xpo_detach = svc_rdma_bc_detach,
+ .xpo_free = svc_rdma_bc_free,
+ .xpo_prep_reply_hdr = svc_rdma_prep_reply_hdr,
+ .xpo_secure_port = svc_rdma_secure_port,
+};
+
+struct svc_xprt_class svc_rdma_bc_class = {
+ .xcl_name = "rdma-bc",
+ .xcl_owner = THIS_MODULE,
+ .xcl_ops = &svc_rdma_bc_ops,
+ .xcl_max_payload = (1024 - RPCRDMA_HDRLEN_MIN),
+ .xcl_ident = XPRT_TRANSPORT_BC_RDMA,
+};
+
+static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *serv,
+ struct net *net,
+ struct sockaddr *sa, int salen,
+ int flags)
+{
+ struct svcxprt_rdma *cma_xprt;
+ struct svc_xprt *xprt;
+
+ cma_xprt = rdma_create_xprt(serv, 0);
+ if (!cma_xprt)
+ return ERR_PTR(-ENOMEM);
+ xprt = &cma_xprt->sc_xprt;
+
+ svc_xprt_init(net, &svc_rdma_bc_class, xprt, serv);
+ serv->sv_bc_xprt = xprt;
+
+ dprintk("svcrdma: %s(%p)\n", __func__, xprt);
+ return xprt;
+}
+
+static void svc_rdma_bc_detach(struct svc_xprt *xprt)
+{
+ dprintk("svcrdma: %s(%p)\n", __func__, xprt);
+}
+
+static void svc_rdma_bc_free(struct svc_xprt *xprt)
+{
+ struct svcxprt_rdma *rdma =
+ container_of(xprt, struct svcxprt_rdma, sc_xprt);
+
+ dprintk("svcrdma: %s(%p)\n", __func__, xprt);
+ if (xprt)
+ kfree(rdma);
+}
+#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+
struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
{
struct svc_rdma_op_ctxt *ctxt;


2015-05-26 17:49:58

by Chuck Lever III

Subject: [PATCH v2 09/10] SUNRPC: Clean up bc_send()

Clean up: Merge bc_send() into bc_svc_process().

Note: even though this touches svc.c, it is a client-side change.

Signed-off-by: Chuck Lever <[email protected]>
---

include/linux/sunrpc/bc_xprt.h | 1 -
net/sunrpc/Makefile | 2 +
net/sunrpc/bc_svc.c | 63 ----------------------------------------
net/sunrpc/svc.c | 33 ++++++++++++++++-----
4 files changed, 26 insertions(+), 73 deletions(-)
delete mode 100644 net/sunrpc/bc_svc.c

diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
index 2ca67b5..8df43c9 100644
--- a/include/linux/sunrpc/bc_xprt.h
+++ b/include/linux/sunrpc/bc_xprt.h
@@ -37,7 +37,6 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied);
void xprt_free_bc_request(struct rpc_rqst *req);
int xprt_setup_backchannel(struct rpc_xprt *, unsigned int min_reqs);
void xprt_destroy_backchannel(struct rpc_xprt *, unsigned int max_reqs);
-int bc_send(struct rpc_rqst *req);

/*
* Determine if a shared backchannel is in use
diff --git a/net/sunrpc/Makefile b/net/sunrpc/Makefile
index 15e6f6c..1b8e68d 100644
--- a/net/sunrpc/Makefile
+++ b/net/sunrpc/Makefile
@@ -15,6 +15,6 @@ sunrpc-y := clnt.o xprt.o socklib.o xprtsock.o sched.o \
sunrpc_syms.o cache.o rpc_pipe.o \
svc_xprt.o
sunrpc-$(CONFIG_SUNRPC_DEBUG) += debugfs.o
-sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o bc_svc.o
+sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o
sunrpc-$(CONFIG_PROC_FS) += stats.o
sunrpc-$(CONFIG_SYSCTL) += sysctl.o
diff --git a/net/sunrpc/bc_svc.c b/net/sunrpc/bc_svc.c
deleted file mode 100644
index 15c7a8a..0000000
--- a/net/sunrpc/bc_svc.c
+++ /dev/null
@@ -1,63 +0,0 @@
-/******************************************************************************
-
-(c) 2007 Network Appliance, Inc. All Rights Reserved.
-(c) 2009 NetApp. All Rights Reserved.
-
-NetApp provides this source code under the GPL v2 License.
-The GPL v2 license is available at
-http://opensource.org/licenses/gpl-license.php.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
-CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
-EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
-PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
-PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
-LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
-NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-******************************************************************************/
-
-/*
- * The NFSv4.1 callback service helper routines.
- * They implement the transport level processing required to send the
- * reply over an existing open connection previously established by the client.
- */
-
-#include <linux/module.h>
-
-#include <linux/sunrpc/xprt.h>
-#include <linux/sunrpc/sched.h>
-#include <linux/sunrpc/bc_xprt.h>
-
-#define RPCDBG_FACILITY RPCDBG_SVCDSP
-
-/* Empty callback ops */
-static const struct rpc_call_ops nfs41_callback_ops = {
-};
-
-
-/*
- * Send the callback reply
- */
-int bc_send(struct rpc_rqst *req)
-{
- struct rpc_task *task;
- int ret;
-
- dprintk("RPC: bc_send req= %p\n", req);
- task = rpc_run_bc_task(req, &nfs41_callback_ops);
- if (IS_ERR(task))
- ret = PTR_ERR(task);
- else {
- WARN_ON_ONCE(atomic_read(&task->tk_count) != 1);
- ret = task->tk_status;
- rpc_put_task(task);
- }
- dprintk("RPC: bc_send ret= %d\n", ret);
- return ret;
-}
-
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 852ae60..f86b7be 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -1350,6 +1350,11 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
{
struct kvec *argv = &rqstp->rq_arg.head[0];
struct kvec *resv = &rqstp->rq_res.head[0];
+ static const struct rpc_call_ops reply_ops = { };
+ struct rpc_task *task;
+ int error;
+
+ dprintk("svc: %s(%p)\n", __func__, req);

/* Build the svc_rqst used by the common processing routine */
rqstp->rq_xprt = serv->sv_bc_xprt;
@@ -1372,21 +1377,33 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,

/*
* Skip the next two words because they've already been
- * processed in the trasport
+ * processed in the transport
*/
svc_getu32(argv); /* XID */
svc_getnl(argv); /* CALLDIR */

- /* Returns 1 for send, 0 for drop */
- if (svc_process_common(rqstp, argv, resv)) {
- memcpy(&req->rq_snd_buf, &rqstp->rq_res,
- sizeof(req->rq_snd_buf));
- return bc_send(req);
- } else {
- /* drop request */
+ /* Parse and execute the bc call */
+ if (!svc_process_common(rqstp, argv, resv)) {
+ /* Processing error: drop the request */
xprt_free_bc_request(req);
return 0;
}
+
+ /* Finally, send the reply synchronously */
+ memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
+ task = rpc_run_bc_task(req, &reply_ops);
+ if (IS_ERR(task)) {
+ error = PTR_ERR(task);
+ goto out;
+ }
+
+ WARN_ON_ONCE(atomic_read(&task->tk_count) != 1);
+ error = task->tk_status;
+ rpc_put_task(task);
+
+out:
+ dprintk("svc: %s(), error=%d\n", __func__, error);
+ return error;
}
EXPORT_SYMBOL_GPL(bc_svc_process);
#endif /* CONFIG_SUNRPC_BACKCHANNEL */


2015-05-26 17:50:07

by Chuck Lever III

Subject: [PATCH v2 10/10] rpcrdma: Merge svcrdma and xprtrdma modules into one

Bi-directional RPC support means code in svcrdma.ko invokes a bit of
code in xprtrdma.ko, and vice versa. To avoid loader/linker loops,
merge the server and client side modules together into a single
module.

When backchannel capabilities are added, the combined module will
register all needed transport capabilities so that Upper Layer
consumers automatically have everything needed to create a
bi-directional transport connection.

Module aliases are added for backwards compatibility with user
space, which still may expect svcrdma.ko or xprtrdma.ko to be
present.
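For example, a script that still runs "modprobe xprtrdma" or
"modprobe svcrdma" will load rpcrdma.ko via these aliases.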

This commit reverts commit 2e8c12e1b765 ("xprtrdma: add separate
Kconfig options for NFSoRDMA client and server support") and
provides a single CONFIG option for enabling the new module.

Signed-off-by: Chuck Lever <[email protected]>
---

net/sunrpc/Kconfig | 28 +++++++-----------------
net/sunrpc/Makefile | 3 +--
net/sunrpc/xprtrdma/Makefile | 14 +++++-------
net/sunrpc/xprtrdma/module.c | 46 +++++++++++++++++++++++++++++++++++++++
net/sunrpc/xprtrdma/svc_rdma.c | 8 +------
net/sunrpc/xprtrdma/transport.c | 13 ++---------
net/sunrpc/xprtrdma/xprt_rdma.h | 5 ++++
7 files changed, 69 insertions(+), 48 deletions(-)
create mode 100644 net/sunrpc/xprtrdma/module.c

diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig
index 9068e72..04ce2c0 100644
--- a/net/sunrpc/Kconfig
+++ b/net/sunrpc/Kconfig
@@ -48,28 +48,16 @@ config SUNRPC_DEBUG

If unsure, say Y.

-config SUNRPC_XPRT_RDMA_CLIENT
- tristate "RPC over RDMA Client Support"
+config SUNRPC_XPRT_RDMA
+ tristate "RPC-over-RDMA transport"
depends on SUNRPC && INFINIBAND && INFINIBAND_ADDR_TRANS
default SUNRPC && INFINIBAND
help
- This option allows the NFS client to support an RDMA-enabled
- transport.
+ This option allows the NFS client and server to use RDMA
+ transports (InfiniBand, iWARP, or RoCE).

- To compile RPC client RDMA transport support as a module,
- choose M here: the module will be called xprtrdma.
+ To compile this support as a module, choose M. The module
+ will be called rpcrdma.ko.

- If unsure, say N.
-
-config SUNRPC_XPRT_RDMA_SERVER
- tristate "RPC over RDMA Server Support"
- depends on SUNRPC && INFINIBAND && INFINIBAND_ADDR_TRANS
- default SUNRPC && INFINIBAND
- help
- This option allows the NFS server to support an RDMA-enabled
- transport.
-
- To compile RPC server RDMA transport support as a module,
- choose M here: the module will be called svcrdma.
-
- If unsure, say N.
+ If unsure, or you know there is no RDMA capability on your
+ hardware platform, say N.
diff --git a/net/sunrpc/Makefile b/net/sunrpc/Makefile
index 1b8e68d..b512fbd 100644
--- a/net/sunrpc/Makefile
+++ b/net/sunrpc/Makefile
@@ -5,8 +5,7 @@

obj-$(CONFIG_SUNRPC) += sunrpc.o
obj-$(CONFIG_SUNRPC_GSS) += auth_gss/
-
-obj-y += xprtrdma/
+obj-$(CONFIG_SUNRPC_XPRT_RDMA) += xprtrdma/

sunrpc-y := clnt.o xprt.o socklib.o xprtsock.o sched.o \
auth.o auth_null.o auth_unix.o auth_generic.o \
diff --git a/net/sunrpc/xprtrdma/Makefile b/net/sunrpc/xprtrdma/Makefile
index 579f72b..48913de 100644
--- a/net/sunrpc/xprtrdma/Makefile
+++ b/net/sunrpc/xprtrdma/Makefile
@@ -1,9 +1,7 @@
-obj-$(CONFIG_SUNRPC_XPRT_RDMA_CLIENT) += xprtrdma.o
+obj-$(CONFIG_SUNRPC_XPRT_RDMA) += rpcrdma.o

-xprtrdma-y := transport.o rpc_rdma.o verbs.o \
- fmr_ops.o frwr_ops.o physical_ops.o
-
-obj-$(CONFIG_SUNRPC_XPRT_RDMA_SERVER) += svcrdma.o
-
-svcrdma-y := svc_rdma.o svc_rdma_transport.o \
- svc_rdma_marshal.o svc_rdma_sendto.o svc_rdma_recvfrom.o
+rpcrdma-y := transport.o rpc_rdma.o verbs.o \
+ fmr_ops.o frwr_ops.o physical_ops.o \
+ svc_rdma.o svc_rdma_transport.o \
+ svc_rdma_marshal.o svc_rdma_sendto.o svc_rdma_recvfrom.o \
+ module.o
diff --git a/net/sunrpc/xprtrdma/module.c b/net/sunrpc/xprtrdma/module.c
new file mode 100644
index 0000000..560712b
--- /dev/null
+++ b/net/sunrpc/xprtrdma/module.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (c) 2015 Oracle. All rights reserved.
+ */
+
+/* rpcrdma.ko module initialization
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/sunrpc/svc_rdma.h>
+#include "xprt_rdma.h"
+
+#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+# define RPCDBG_FACILITY RPCDBG_TRANS
+#endif
+
+MODULE_AUTHOR("Open Grid Computing and Network Appliance, Inc.");
+MODULE_DESCRIPTION("RPC/RDMA Transport");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_ALIAS("svcrdma");
+MODULE_ALIAS("xprtrdma");
+
+static void __exit rpc_rdma_cleanup(void)
+{
+ xprt_rdma_cleanup();
+ svc_rdma_cleanup();
+}
+
+static int __init rpc_rdma_init(void)
+{
+ int rc;
+
+ rc = svc_rdma_init();
+ if (rc)
+ goto out;
+
+ rc = xprt_rdma_init();
+ if (rc)
+ svc_rdma_cleanup();
+
+out:
+ return rc;
+}
+
+module_init(rpc_rdma_init);
+module_exit(rpc_rdma_cleanup);
diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
index 7a18ae4..dc7d7a5 100644
--- a/net/sunrpc/xprtrdma/svc_rdma.c
+++ b/net/sunrpc/xprtrdma/svc_rdma.c
@@ -38,8 +38,7 @@
*
* Author: Tom Tucker <[email protected]>
*/
-#include <linux/module.h>
-#include <linux/init.h>
+
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/sysctl.h>
@@ -305,8 +304,3 @@ int svc_rdma_init(void)
destroy_workqueue(svc_rdma_wq);
return -ENOMEM;
}
-MODULE_AUTHOR("Tom Tucker <[email protected]>");
-MODULE_DESCRIPTION("SVC RDMA Transport");
-MODULE_LICENSE("Dual BSD/GPL");
-module_init(svc_rdma_init);
-module_exit(svc_rdma_cleanup);
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 54f23b1..436da2c 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -48,7 +48,6 @@
*/

#include <linux/module.h>
-#include <linux/init.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/sunrpc/addr.h>
@@ -59,11 +58,6 @@
# define RPCDBG_FACILITY RPCDBG_TRANS
#endif

-MODULE_LICENSE("Dual BSD/GPL");
-
-MODULE_DESCRIPTION("RPC/RDMA Transport for Linux kernel NFS");
-MODULE_AUTHOR("Network Appliance, Inc.");
-
/*
* tunables
*/
@@ -711,7 +705,7 @@ static struct xprt_class xprt_rdma = {
.setup = xprt_setup_rdma,
};

-static void __exit xprt_rdma_cleanup(void)
+void xprt_rdma_cleanup(void)
{
int rc;

@@ -728,7 +722,7 @@ static void __exit xprt_rdma_cleanup(void)
__func__, rc);
}

-static int __init xprt_rdma_init(void)
+int xprt_rdma_init(void)
{
int rc;

@@ -753,6 +747,3 @@ static int __init xprt_rdma_init(void)
#endif
return 0;
}
-
-module_init(xprt_rdma_init);
-module_exit(xprt_rdma_cleanup);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index e60907b..58163b8 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -480,6 +480,11 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *);
*/
int rpcrdma_marshal_req(struct rpc_rqst *);

+/* RPC/RDMA module init - xprtrdma/transport.c
+ */
+int xprt_rdma_init(void);
+void xprt_rdma_cleanup(void);
+
/* Temporary NFS request map cache. Created in svc_rdma.c */
extern struct kmem_cache *svc_rdma_map_cachep;
/* WR context cache. Created in svc_rdma.c */


2015-06-01 18:40:48

by J. Bruce Fields

Subject: Re: [PATCH v2 01/10] svcrdma: Fix byte-swapping in svc_rdma_sendto.c

On Tue, May 26, 2015 at 01:48:37PM -0400, Chuck Lever wrote:
> In send_write_chunks(), we have:
>
> for (xdr_off = rqstp->rq_res.head[0].iov_len, chunk_no = 0;
> xfer_len && chunk_no < arg_ary->wc_nchunks;
> chunk_no++) {
> . . .
> }
>
> Note that arg_ary->wc_nchunk is in network byte-order. For the
> comparison to work correctly, both have to be in native byte-order.
>
> In send_reply_chunks, we have:
>
> write_len = min(xfer_len, htonl(ch->rs_length));
>
> xfer_len is in native byte-order, and ch->rs_length is in
> network byte-order. be32_to_cpu() is the correct byte swap
> for ch->rs_length.
>
> As an additional clean up, replace ntohl() with be32_to_cpu() in
> a few other places.

Why? (Not arguing, really, just wondering.)

--b.

>
> This appears to address a problem with large rsize hangs while
> using PHYSICAL memory registration. I suspect that is the only
> registration mode that uses more than one chunk element.
>
> BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=248
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> net/sunrpc/xprtrdma/svc_rdma_sendto.c | 14 ++++++++------
> 1 files changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
> index 7de33d1..109e967 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
> @@ -240,6 +240,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
> u32 xdr_off;
> int chunk_off;
> int chunk_no;
> + int nchunks;
> struct rpcrdma_write_array *arg_ary;
> struct rpcrdma_write_array *res_ary;
> int ret;
> @@ -251,14 +252,15 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
> &rdma_resp->rm_body.rm_chunks[1];
>
> /* Write chunks start at the pagelist */
> + nchunks = be32_to_cpu(arg_ary->wc_nchunks);
> for (xdr_off = rqstp->rq_res.head[0].iov_len, chunk_no = 0;
> - xfer_len && chunk_no < arg_ary->wc_nchunks;
> + xfer_len && chunk_no < nchunks;
> chunk_no++) {
> struct rpcrdma_segment *arg_ch;
> u64 rs_offset;
>
> arg_ch = &arg_ary->wc_array[chunk_no].wc_target;
> - write_len = min(xfer_len, ntohl(arg_ch->rs_length));
> + write_len = min(xfer_len, be32_to_cpu(arg_ch->rs_length));
>
> /* Prepare the response chunk given the length actually
> * written */
> @@ -270,7 +272,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
> chunk_off = 0;
> while (write_len) {
> ret = send_write(xprt, rqstp,
> - ntohl(arg_ch->rs_handle),
> + be32_to_cpu(arg_ch->rs_handle),
> rs_offset + chunk_off,
> xdr_off,
> write_len,
> @@ -318,13 +320,13 @@ static int send_reply_chunks(struct svcxprt_rdma *xprt,
> &rdma_resp->rm_body.rm_chunks[2];
>
> /* xdr offset starts at RPC message */
> - nchunks = ntohl(arg_ary->wc_nchunks);
> + nchunks = be32_to_cpu(arg_ary->wc_nchunks);
> for (xdr_off = 0, chunk_no = 0;
> xfer_len && chunk_no < nchunks;
> chunk_no++) {
> u64 rs_offset;
> ch = &arg_ary->wc_array[chunk_no].wc_target;
> - write_len = min(xfer_len, htonl(ch->rs_length));
> + write_len = min(xfer_len, be32_to_cpu(ch->rs_length));
>
> /* Prepare the reply chunk given the length actually
> * written */
> @@ -335,7 +337,7 @@ static int send_reply_chunks(struct svcxprt_rdma *xprt,
> chunk_off = 0;
> while (write_len) {
> ret = send_write(xprt, rqstp,
> - ntohl(ch->rs_handle),
> + be32_to_cpu(ch->rs_handle),
> rs_offset + chunk_off,
> xdr_off,
> write_len,

2015-06-01 18:45:13

by Chuck Lever III

Subject: Re: [PATCH v2 01/10] svcrdma: Fix byte-swapping in svc_rdma_sendto.c


On Jun 1, 2015, at 2:40 PM, J. Bruce Fields <[email protected]> wrote:

> On Tue, May 26, 2015 at 01:48:37PM -0400, Chuck Lever wrote:
>> In send_write_chunks(), we have:
>>
>> for (xdr_off = rqstp->rq_res.head[0].iov_len, chunk_no = 0;
>> xfer_len && chunk_no < arg_ary->wc_nchunks;
>> chunk_no++) {
>> . . .
>> }
>>
>> Note that arg_ary->wc_nchunk is in network byte-order. For the
>> comparison to work correctly, both have to be in native byte-order.
>>
>> In send_reply_chunks, we have:
>>
>> write_len = min(xfer_len, htonl(ch->rs_length));
>>
>> xfer_len is in native byte-order, and ch->rs_length is in
>> network byte-order. be32_to_cpu() is the correct byte swap
>> for ch->rs_length.
>>
>> As an additional clean up, replace ntohl() with be32_to_cpu() in
>> a few other places.
>
> Why? (Not arguing, really, just wondering.)

It's just clean up to match the rest of the code. And it kind of marks
the places that have been reviewed.


> --b.
>
>>
>> This appears to address a problem with large rsize hangs while
>> using PHYSICAL memory registration. I suspect that is the only
>> registration mode that uses more than one chunk element.
>>
>> BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=248
>> Signed-off-by: Chuck Lever <[email protected]>
>> ---
>>
>> net/sunrpc/xprtrdma/svc_rdma_sendto.c | 14 ++++++++------
>> 1 files changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
>> index 7de33d1..109e967 100644
>> --- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
>> +++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
>> @@ -240,6 +240,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
>> u32 xdr_off;
>> int chunk_off;
>> int chunk_no;
>> + int nchunks;
>> struct rpcrdma_write_array *arg_ary;
>> struct rpcrdma_write_array *res_ary;
>> int ret;
>> @@ -251,14 +252,15 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
>> &rdma_resp->rm_body.rm_chunks[1];
>>
>> /* Write chunks start at the pagelist */
>> + nchunks = be32_to_cpu(arg_ary->wc_nchunks);
>> for (xdr_off = rqstp->rq_res.head[0].iov_len, chunk_no = 0;
>> - xfer_len && chunk_no < arg_ary->wc_nchunks;
>> + xfer_len && chunk_no < nchunks;
>> chunk_no++) {
>> struct rpcrdma_segment *arg_ch;
>> u64 rs_offset;
>>
>> arg_ch = &arg_ary->wc_array[chunk_no].wc_target;
>> - write_len = min(xfer_len, ntohl(arg_ch->rs_length));
>> + write_len = min(xfer_len, be32_to_cpu(arg_ch->rs_length));
>>
>> /* Prepare the response chunk given the length actually
>> * written */
>> @@ -270,7 +272,7 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
>> chunk_off = 0;
>> while (write_len) {
>> ret = send_write(xprt, rqstp,
>> - ntohl(arg_ch->rs_handle),
>> + be32_to_cpu(arg_ch->rs_handle),
>> rs_offset + chunk_off,
>> xdr_off,
>> write_len,
>> @@ -318,13 +320,13 @@ static int send_reply_chunks(struct svcxprt_rdma *xprt,
>> &rdma_resp->rm_body.rm_chunks[2];
>>
>> /* xdr offset starts at RPC message */
>> - nchunks = ntohl(arg_ary->wc_nchunks);
>> + nchunks = be32_to_cpu(arg_ary->wc_nchunks);
>> for (xdr_off = 0, chunk_no = 0;
>> xfer_len && chunk_no < nchunks;
>> chunk_no++) {
>> u64 rs_offset;
>> ch = &arg_ary->wc_array[chunk_no].wc_target;
>> - write_len = min(xfer_len, htonl(ch->rs_length));
>> + write_len = min(xfer_len, be32_to_cpu(ch->rs_length));
>>
>> /* Prepare the reply chunk given the length actually
>> * written */
>> @@ -335,7 +337,7 @@ static int send_reply_chunks(struct svcxprt_rdma *xprt,
>> chunk_off = 0;
>> while (write_len) {
>> ret = send_write(xprt, rqstp,
>> - ntohl(ch->rs_handle),
>> + be32_to_cpu(ch->rs_handle),
>> rs_offset + chunk_off,
>> xdr_off,
>> write_len,

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2015-06-01 19:31:03

by J. Bruce Fields

Subject: Re: [PATCH v2 02/10] svcrdma: Add missing access_ok() call in svc_rdma.c

On Tue, May 26, 2015 at 01:48:47PM -0400, Chuck Lever wrote:
> Ensure a proper memory access check is done by read_reset_stat(),
> then fix the following compiler warning.
>
> In file included from linux-2.6/include/net/checksum.h:25,
> from linux-2.6/include/linux/skbuff.h:31,
> from linux-2.6/include/linux/icmpv6.h:4,
> from linux-2.6/include/linux/ipv6.h:64,
> from linux-2.6/include/net/ipv6.h:16,
> from linux-2.6/include/linux/sunrpc/clnt.h:27,
> from linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:47:
> In function ‘copy_to_user’,
> inlined from ‘read_reset_stat’ at
> linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:113:
> linux-2.6/arch/x86/include/asm/uaccess.h:735: warning:
> call to ‘__copy_to_user_overflow’ declared with attribute warning:
> copy_to_user() buffer size is not provably correct

How do you get that warning? I can't hit it even with
CONFIG_USER_STRICT_USER_COPY_CHECKS set. Based on comments in arch/x86
I would have thought this would only trigger when len was a constant.

--b.

>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> net/sunrpc/xprtrdma/svc_rdma.c | 8 ++++++--
> 1 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
> index c1b6270..8eedb60 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
> @@ -98,7 +98,11 @@ static int read_reset_stat(struct ctl_table *table, int write,
> else {
> char str_buf[32];
> char *data;
> - int len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
> + int len;
> +
> + if (!access_ok(VERIFY_WRITE, buffer, *lenp))
> + return -EFAULT;
> + len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
> if (len >= 32)
> return -EFAULT;
> len = strlen(str_buf);
> @@ -110,7 +114,7 @@ static int read_reset_stat(struct ctl_table *table, int write,
> len -= *ppos;
> if (len > *lenp)
> len = *lenp;
> - if (len && copy_to_user(buffer, str_buf, len))
> + if (len && __copy_to_user(buffer, str_buf, len))
> return -EFAULT;
> *lenp = len;
> *ppos += len;

2015-06-01 20:00:10

by Chuck Lever III

Subject: Re: [PATCH v2 02/10] svcrdma: Add missing access_ok() call in svc_rdma.c


On Jun 1, 2015, at 3:31 PM, J. Bruce Fields <[email protected]> wrote:

> On Tue, May 26, 2015 at 01:48:47PM -0400, Chuck Lever wrote:
>> Ensure a proper memory access check is done by read_reset_stat(),
>> then fix the following compiler warning.
>>
>> In file included from linux-2.6/include/net/checksum.h:25,
>> from linux-2.6/include/linux/skbuff.h:31,
>> from linux-2.6/include/linux/icmpv6.h:4,
>> from linux-2.6/include/linux/ipv6.h:64,
>> from linux-2.6/include/net/ipv6.h:16,
>> from linux-2.6/include/linux/sunrpc/clnt.h:27,
>> from linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:47:
>> In function ‘copy_to_user’,
>> inlined from ‘read_reset_stat’ at
>> linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:113:
>> linux-2.6/arch/x86/include/asm/uaccess.h:735: warning:
>> call to ‘__copy_to_user_overflow’ declared with attribute warning:
>> copy_to_user() buffer size is not provably correct
>
> How do you get that warning? I can't hit it even with
> CONFIG_USER_STRICT_USER_COPY_CHECKS set. Based on comments in arch/x86
> I would have thought this would only trigger when len was a constant.

I only seem to see this warning when building and testing on EL6, gcc
version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC).

include/linux/compiler-gcc4.h has this:

16 #if GCC_VERSION >= 40100 && GCC_VERSION < 40600
17 # define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
18 #endif

and include/linux/compiler.h has this:

384 #ifndef __compiletime_object_size
385 # define __compiletime_object_size(obj) -1
386 #endif

Now, arch/x86/include/asm/uaccess.h spells copy_to_user() this way:

722 static inline unsigned long __must_check
723 copy_to_user(void __user *to, const void *from, unsigned long n)
724 {
725 int sz = __compiletime_object_size(from);
726
727 might_fault();
728
729 /* See the comment in copy_from_user() above. */
730 if (likely(sz < 0 || sz >= n))
731 n = _copy_to_user(to, from, n);
732 else if(__builtin_constant_p(n))
733 copy_to_user_overflow();
734 else
735 __copy_to_user_overflow(sz, n);
736
737 return n;
738 }

If __compiletime_object_size isn't defined for your compiler version, then
int sz is set to -1, and _copy_to_user() is invoked directly. Otherwise
if n is a variable, the warning pops.

This might be a false positive. The ‘from’ parameter is always 32 bytes in
read_reset_stat().
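
A quick user-space illustration of what the compiler can see here
(sketch only, assuming GCC and an optimized build):

#include <stdio.h>

int main(void)
{
	char str_buf[32];

	/* With optimization enabled (-O1 or higher), GCC reports 32:
	 * the size of the source buffer is known at compile time, but
	 * the copy length is not, so "provably correct" can't hold. */
	printf("%zu\n", __builtin_object_size(str_buf, 0));
	return 0;
}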

I think adding an access_ok() call here is reasonable, though?

> --b.
>
>>
>> Signed-off-by: Chuck Lever <[email protected]>
>> ---
>>
>> net/sunrpc/xprtrdma/svc_rdma.c | 8 ++++++--
>> 1 files changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
>> index c1b6270..8eedb60 100644
>> --- a/net/sunrpc/xprtrdma/svc_rdma.c
>> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
>> @@ -98,7 +98,11 @@ static int read_reset_stat(struct ctl_table *table, int write,
>> else {
>> char str_buf[32];
>> char *data;
>> - int len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
>> + int len;
>> +
>> + if (!access_ok(VERIFY_WRITE, buffer, *lenp))
>> + return -EFAULT;
>> + len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
>> if (len >= 32)
>> return -EFAULT;
>> len = strlen(str_buf);
>> @@ -110,7 +114,7 @@ static int read_reset_stat(struct ctl_table *table, int write,
>> len -= *ppos;
>> if (len > *lenp)
>> len = *lenp;
>> - if (len && copy_to_user(buffer, str_buf, len))
>> + if (len && __copy_to_user(buffer, str_buf, len))
>> return -EFAULT;
>> *lenp = len;
>> *ppos += len;

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2015-06-01 20:17:56

by Chuck Lever III

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] SUNRPC: Clean up bc_send()


On May 26, 2015, at 1:49 PM, Chuck Lever <[email protected]> wrote:

> Clean up: Merge bc_send() into bc_svc_process().
>
> Note: even though this touches svc.c, it is a client-side change.

I think Trond is taking this one. You can drop it. Sorry for the noise.


> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> include/linux/sunrpc/bc_xprt.h | 1 -
> net/sunrpc/Makefile | 2 +
> net/sunrpc/bc_svc.c | 63 ----------------------------------------
> net/sunrpc/svc.c | 33 ++++++++++++++++-----
> 4 files changed, 26 insertions(+), 73 deletions(-)
> delete mode 100644 net/sunrpc/bc_svc.c
>
> diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
> index 2ca67b5..8df43c9 100644
> --- a/include/linux/sunrpc/bc_xprt.h
> +++ b/include/linux/sunrpc/bc_xprt.h
> @@ -37,7 +37,6 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied);
> void xprt_free_bc_request(struct rpc_rqst *req);
> int xprt_setup_backchannel(struct rpc_xprt *, unsigned int min_reqs);
> void xprt_destroy_backchannel(struct rpc_xprt *, unsigned int max_reqs);
> -int bc_send(struct rpc_rqst *req);
>
> /*
> * Determine if a shared backchannel is in use
> diff --git a/net/sunrpc/Makefile b/net/sunrpc/Makefile
> index 15e6f6c..1b8e68d 100644
> --- a/net/sunrpc/Makefile
> +++ b/net/sunrpc/Makefile
> @@ -15,6 +15,6 @@ sunrpc-y := clnt.o xprt.o socklib.o xprtsock.o sched.o \
> sunrpc_syms.o cache.o rpc_pipe.o \
> svc_xprt.o
> sunrpc-$(CONFIG_SUNRPC_DEBUG) += debugfs.o
> -sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o bc_svc.o
> +sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o
> sunrpc-$(CONFIG_PROC_FS) += stats.o
> sunrpc-$(CONFIG_SYSCTL) += sysctl.o
> diff --git a/net/sunrpc/bc_svc.c b/net/sunrpc/bc_svc.c
> deleted file mode 100644
> index 15c7a8a..0000000
> --- a/net/sunrpc/bc_svc.c
> +++ /dev/null
> @@ -1,63 +0,0 @@
> -/******************************************************************************
> -
> -(c) 2007 Network Appliance, Inc. All Rights Reserved.
> -(c) 2009 NetApp. All Rights Reserved.
> -
> -NetApp provides this source code under the GPL v2 License.
> -The GPL v2 license is available at
> -http://opensource.org/licenses/gpl-license.php.
> -
> -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
> -CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
> -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
> -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
> -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
> -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
> -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
> -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> -
> -******************************************************************************/
> -
> -/*
> - * The NFSv4.1 callback service helper routines.
> - * They implement the transport level processing required to send the
> - * reply over an existing open connection previously established by the client.
> - */
> -
> -#include <linux/module.h>
> -
> -#include <linux/sunrpc/xprt.h>
> -#include <linux/sunrpc/sched.h>
> -#include <linux/sunrpc/bc_xprt.h>
> -
> -#define RPCDBG_FACILITY RPCDBG_SVCDSP
> -
> -/* Empty callback ops */
> -static const struct rpc_call_ops nfs41_callback_ops = {
> -};
> -
> -
> -/*
> - * Send the callback reply
> - */
> -int bc_send(struct rpc_rqst *req)
> -{
> - struct rpc_task *task;
> - int ret;
> -
> - dprintk("RPC: bc_send req= %p\n", req);
> - task = rpc_run_bc_task(req, &nfs41_callback_ops);
> - if (IS_ERR(task))
> - ret = PTR_ERR(task);
> - else {
> - WARN_ON_ONCE(atomic_read(&task->tk_count) != 1);
> - ret = task->tk_status;
> - rpc_put_task(task);
> - }
> - dprintk("RPC: bc_send ret= %d\n", ret);
> - return ret;
> -}
> -
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index 852ae60..f86b7be 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -1350,6 +1350,11 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> {
> struct kvec *argv = &rqstp->rq_arg.head[0];
> struct kvec *resv = &rqstp->rq_res.head[0];
> + static const struct rpc_call_ops reply_ops = { };
> + struct rpc_task *task;
> + int error;
> +
> + dprintk("svc: %s(%p)\n", __func__, req);
>
> /* Build the svc_rqst used by the common processing routine */
> rqstp->rq_xprt = serv->sv_bc_xprt;
> @@ -1372,21 +1377,33 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
>
> /*
> * Skip the next two words because they've already been
> - * processed in the trasport
> + * processed in the transport
> */
> svc_getu32(argv); /* XID */
> svc_getnl(argv); /* CALLDIR */
>
> - /* Returns 1 for send, 0 for drop */
> - if (svc_process_common(rqstp, argv, resv)) {
> - memcpy(&req->rq_snd_buf, &rqstp->rq_res,
> - sizeof(req->rq_snd_buf));
> - return bc_send(req);
> - } else {
> - /* drop request */
> + /* Parse and execute the bc call */
> + if (!svc_process_common(rqstp, argv, resv)) {
> + /* Processing error: drop the request */
> xprt_free_bc_request(req);
> return 0;
> }
> +
> + /* Finally, send the reply synchronously */
> + memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
> + task = rpc_run_bc_task(req, &reply_ops);
> + if (IS_ERR(task)) {
> + error = PTR_ERR(task);
> + goto out;
> + }
> +
> + WARN_ON_ONCE(atomic_read(&task->tk_count) != 1);
> + error = task->tk_status;
> + rpc_put_task(task);
> +
> +out:
> + dprintk("svc: %s(), error=%d\n", __func__, error);
> + return error;
> }
> EXPORT_SYMBOL_GPL(bc_svc_process);
> #endif /* CONFIG_SUNRPC_BACKCHANNEL */
>

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2015-06-01 20:24:00

by J. Bruce Fields

[permalink] [raw]
Subject: Re: [PATCH v2 10/10] rpcrdma: Merge svcrdma and xprtrdma modules into one

On Tue, May 26, 2015 at 01:50:04PM -0400, Chuck Lever wrote:
> Bi-directional RPC support means code in svcrdma.ko invokes a bit of
> code in xprtrdma.ko, and vice versa. To avoid loader/linker loops,
> merge the server and client side modules together into a single
> module.
>
> When backchannel capabilities are added, the combined module will
> register all needed transport capabilities so that Upper Layer
> consumers automatically have everything needed to create a
> bi-directional transport connection.
>
> Module aliases are added for backwards compatibility with user
> space, which still may expect svcrdma.ko or xprtrdma.ko to be
> present.
>
> This commit reverts commit 2e8c12e1b765 ("xprtrdma: add separate
> Kconfig options for NFSoRDMA client and server support") and
> provides a single CONFIG option for enabling the new module.

The motivation for adding this was basically that in RHEL we wanted to
support client-side but not server-side NFS/RDMA.

I guess bidirectional RPC blurs the client/server distinction, so if we
wanted to preserve the ability to turn off server-side NFS/RDMA at
compiple time then that should go in nfsd.

Maybe it's less important now....

--b.

>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> net/sunrpc/Kconfig | 28 +++++++-----------------
> net/sunrpc/Makefile | 3 +--
> net/sunrpc/xprtrdma/Makefile | 14 +++++-------
> net/sunrpc/xprtrdma/module.c | 46 +++++++++++++++++++++++++++++++++++++++
> net/sunrpc/xprtrdma/svc_rdma.c | 8 +------
> net/sunrpc/xprtrdma/transport.c | 13 ++---------
> net/sunrpc/xprtrdma/xprt_rdma.h | 5 ++++
> 7 files changed, 69 insertions(+), 48 deletions(-)
> create mode 100644 net/sunrpc/xprtrdma/module.c
>
> diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig
> index 9068e72..04ce2c0 100644
> --- a/net/sunrpc/Kconfig
> +++ b/net/sunrpc/Kconfig
> @@ -48,28 +48,16 @@ config SUNRPC_DEBUG
>
> If unsure, say Y.
>
> -config SUNRPC_XPRT_RDMA_CLIENT
> - tristate "RPC over RDMA Client Support"
> +config SUNRPC_XPRT_RDMA
> + tristate "RPC-over-RDMA transport"
> depends on SUNRPC && INFINIBAND && INFINIBAND_ADDR_TRANS
> default SUNRPC && INFINIBAND
> help
> - This option allows the NFS client to support an RDMA-enabled
> - transport.
> + This option allows the NFS client and server to use RDMA
> + transports (InfiniBand, iWARP, or RoCE).
>
> - To compile RPC client RDMA transport support as a module,
> - choose M here: the module will be called xprtrdma.
> + To compile this support as a module, choose M. The module
> + will be called rpcrdma.ko.
>
> - If unsure, say N.
> -
> -config SUNRPC_XPRT_RDMA_SERVER
> - tristate "RPC over RDMA Server Support"
> - depends on SUNRPC && INFINIBAND && INFINIBAND_ADDR_TRANS
> - default SUNRPC && INFINIBAND
> - help
> - This option allows the NFS server to support an RDMA-enabled
> - transport.
> -
> - To compile RPC server RDMA transport support as a module,
> - choose M here: the module will be called svcrdma.
> -
> - If unsure, say N.
> + If unsure, or you know there is no RDMA capability on your
> + hardware platform, say N.
> diff --git a/net/sunrpc/Makefile b/net/sunrpc/Makefile
> index 1b8e68d..b512fbd 100644
> --- a/net/sunrpc/Makefile
> +++ b/net/sunrpc/Makefile
> @@ -5,8 +5,7 @@
>
> obj-$(CONFIG_SUNRPC) += sunrpc.o
> obj-$(CONFIG_SUNRPC_GSS) += auth_gss/
> -
> -obj-y += xprtrdma/
> +obj-$(CONFIG_SUNRPC_XPRT_RDMA) += xprtrdma/
>
> sunrpc-y := clnt.o xprt.o socklib.o xprtsock.o sched.o \
> auth.o auth_null.o auth_unix.o auth_generic.o \
> diff --git a/net/sunrpc/xprtrdma/Makefile b/net/sunrpc/xprtrdma/Makefile
> index 579f72b..48913de 100644
> --- a/net/sunrpc/xprtrdma/Makefile
> +++ b/net/sunrpc/xprtrdma/Makefile
> @@ -1,9 +1,7 @@
> -obj-$(CONFIG_SUNRPC_XPRT_RDMA_CLIENT) += xprtrdma.o
> +obj-$(CONFIG_SUNRPC_XPRT_RDMA) += rpcrdma.o
>
> -xprtrdma-y := transport.o rpc_rdma.o verbs.o \
> - fmr_ops.o frwr_ops.o physical_ops.o
> -
> -obj-$(CONFIG_SUNRPC_XPRT_RDMA_SERVER) += svcrdma.o
> -
> -svcrdma-y := svc_rdma.o svc_rdma_transport.o \
> - svc_rdma_marshal.o svc_rdma_sendto.o svc_rdma_recvfrom.o
> +rpcrdma-y := transport.o rpc_rdma.o verbs.o \
> + fmr_ops.o frwr_ops.o physical_ops.o \
> + svc_rdma.o svc_rdma_transport.o \
> + svc_rdma_marshal.o svc_rdma_sendto.o svc_rdma_recvfrom.o \
> + module.o
> diff --git a/net/sunrpc/xprtrdma/module.c b/net/sunrpc/xprtrdma/module.c
> new file mode 100644
> index 0000000..560712b
> --- /dev/null
> +++ b/net/sunrpc/xprtrdma/module.c
> @@ -0,0 +1,46 @@
> +/*
> + * Copyright (c) 2015 Oracle. All rights reserved.
> + */
> +
> +/* rpcrdma.ko module initialization
> + */
> +
> +#include <linux/module.h>
> +#include <linux/init.h>
> +#include <linux/sunrpc/svc_rdma.h>
> +#include "xprt_rdma.h"
> +
> +#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
> +# define RPCDBG_FACILITY RPCDBG_TRANS
> +#endif
> +
> +MODULE_AUTHOR("Open Grid Computing and Network Appliance, Inc.");
> +MODULE_DESCRIPTION("RPC/RDMA Transport");
> +MODULE_LICENSE("Dual BSD/GPL");
> +MODULE_ALIAS("svcrdma");
> +MODULE_ALIAS("xprtrdma");
> +
> +static void __exit rpc_rdma_cleanup(void)
> +{
> + xprt_rdma_cleanup();
> + svc_rdma_cleanup();
> +}
> +
> +static int __init rpc_rdma_init(void)
> +{
> + int rc;
> +
> + rc = svc_rdma_init();
> + if (rc)
> + goto out;
> +
> + rc = xprt_rdma_init();
> + if (rc)
> + svc_rdma_cleanup();
> +
> +out:
> + return rc;
> +}
> +
> +module_init(rpc_rdma_init);
> +module_exit(rpc_rdma_cleanup);
> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
> index 7a18ae4..dc7d7a5 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
> @@ -38,8 +38,7 @@
> *
> * Author: Tom Tucker <[email protected]>
> */
> -#include <linux/module.h>
> -#include <linux/init.h>
> +
> #include <linux/slab.h>
> #include <linux/fs.h>
> #include <linux/sysctl.h>
> @@ -305,8 +304,3 @@ int svc_rdma_init(void)
> destroy_workqueue(svc_rdma_wq);
> return -ENOMEM;
> }
> -MODULE_AUTHOR("Tom Tucker <[email protected]>");
> -MODULE_DESCRIPTION("SVC RDMA Transport");
> -MODULE_LICENSE("Dual BSD/GPL");
> -module_init(svc_rdma_init);
> -module_exit(svc_rdma_cleanup);
> diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
> index 54f23b1..436da2c 100644
> --- a/net/sunrpc/xprtrdma/transport.c
> +++ b/net/sunrpc/xprtrdma/transport.c
> @@ -48,7 +48,6 @@
> */
>
> #include <linux/module.h>
> -#include <linux/init.h>
> #include <linux/slab.h>
> #include <linux/seq_file.h>
> #include <linux/sunrpc/addr.h>
> @@ -59,11 +58,6 @@
> # define RPCDBG_FACILITY RPCDBG_TRANS
> #endif
>
> -MODULE_LICENSE("Dual BSD/GPL");
> -
> -MODULE_DESCRIPTION("RPC/RDMA Transport for Linux kernel NFS");
> -MODULE_AUTHOR("Network Appliance, Inc.");
> -
> /*
> * tunables
> */
> @@ -711,7 +705,7 @@ static struct xprt_class xprt_rdma = {
> .setup = xprt_setup_rdma,
> };
>
> -static void __exit xprt_rdma_cleanup(void)
> +void xprt_rdma_cleanup(void)
> {
> int rc;
>
> @@ -728,7 +722,7 @@ static void __exit xprt_rdma_cleanup(void)
> __func__, rc);
> }
>
> -static int __init xprt_rdma_init(void)
> +int xprt_rdma_init(void)
> {
> int rc;
>
> @@ -753,6 +747,3 @@ static int __init xprt_rdma_init(void)
> #endif
> return 0;
> }
> -
> -module_init(xprt_rdma_init);
> -module_exit(xprt_rdma_cleanup);
> diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
> index e60907b..58163b8 100644
> --- a/net/sunrpc/xprtrdma/xprt_rdma.h
> +++ b/net/sunrpc/xprtrdma/xprt_rdma.h
> @@ -480,6 +480,11 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *);
> */
> int rpcrdma_marshal_req(struct rpc_rqst *);
>
> +/* RPC/RDMA module init - xprtrdma/transport.c
> + */
> +int xprt_rdma_init(void);
> +void xprt_rdma_cleanup(void);
> +
> /* Temporary NFS request map cache. Created in svc_rdma.c */
> extern struct kmem_cache *svc_rdma_map_cachep;
> /* WR context cache. Created in svc_rdma.c */

2015-06-01 20:24:20

by J. Bruce Fields

[permalink] [raw]
Subject: Re: [PATCH v2 09/10] SUNRPC: Clean up bc_send()

On Mon, Jun 01, 2015 at 04:19:59PM -0400, Chuck Lever wrote:
>
> On May 26, 2015, at 1:49 PM, Chuck Lever <[email protected]> wrote:
>
> > Clean up: Merge bc_send() into bc_svc_process().
> >
> > Note: even though this touches svc.c, it is a client-side change.
>
> I think Trond is taking this one. You can drop it. Sorry for the noise.

OK, no problem.--b.

>
>
> > Signed-off-by: Chuck Lever <[email protected]>
> > ---
> >
> > include/linux/sunrpc/bc_xprt.h | 1 -
> > net/sunrpc/Makefile | 2 +
> > net/sunrpc/bc_svc.c | 63 ----------------------------------------
> > net/sunrpc/svc.c | 33 ++++++++++++++++-----
> > 4 files changed, 26 insertions(+), 73 deletions(-)
> > delete mode 100644 net/sunrpc/bc_svc.c
> >
> > diff --git a/include/linux/sunrpc/bc_xprt.h b/include/linux/sunrpc/bc_xprt.h
> > index 2ca67b5..8df43c9 100644
> > --- a/include/linux/sunrpc/bc_xprt.h
> > +++ b/include/linux/sunrpc/bc_xprt.h
> > @@ -37,7 +37,6 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied);
> > void xprt_free_bc_request(struct rpc_rqst *req);
> > int xprt_setup_backchannel(struct rpc_xprt *, unsigned int min_reqs);
> > void xprt_destroy_backchannel(struct rpc_xprt *, unsigned int max_reqs);
> > -int bc_send(struct rpc_rqst *req);
> >
> > /*
> > * Determine if a shared backchannel is in use
> > diff --git a/net/sunrpc/Makefile b/net/sunrpc/Makefile
> > index 15e6f6c..1b8e68d 100644
> > --- a/net/sunrpc/Makefile
> > +++ b/net/sunrpc/Makefile
> > @@ -15,6 +15,6 @@ sunrpc-y := clnt.o xprt.o socklib.o xprtsock.o sched.o \
> > sunrpc_syms.o cache.o rpc_pipe.o \
> > svc_xprt.o
> > sunrpc-$(CONFIG_SUNRPC_DEBUG) += debugfs.o
> > -sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o bc_svc.o
> > +sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o
> > sunrpc-$(CONFIG_PROC_FS) += stats.o
> > sunrpc-$(CONFIG_SYSCTL) += sysctl.o
> > diff --git a/net/sunrpc/bc_svc.c b/net/sunrpc/bc_svc.c
> > deleted file mode 100644
> > index 15c7a8a..0000000
> > --- a/net/sunrpc/bc_svc.c
> > +++ /dev/null
> > @@ -1,63 +0,0 @@
> > -/******************************************************************************
> > -
> > -(c) 2007 Network Appliance, Inc. All Rights Reserved.
> > -(c) 2009 NetApp. All Rights Reserved.
> > -
> > -NetApp provides this source code under the GPL v2 License.
> > -The GPL v2 license is available at
> > -http://opensource.org/licenses/gpl-license.php.
> > -
> > -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> > -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
> > -CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
> > -EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
> > -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
> > -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
> > -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
> > -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
> > -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > -
> > -******************************************************************************/
> > -
> > -/*
> > - * The NFSv4.1 callback service helper routines.
> > - * They implement the transport level processing required to send the
> > - * reply over an existing open connection previously established by the client.
> > - */
> > -
> > -#include <linux/module.h>
> > -
> > -#include <linux/sunrpc/xprt.h>
> > -#include <linux/sunrpc/sched.h>
> > -#include <linux/sunrpc/bc_xprt.h>
> > -
> > -#define RPCDBG_FACILITY RPCDBG_SVCDSP
> > -
> > -/* Empty callback ops */
> > -static const struct rpc_call_ops nfs41_callback_ops = {
> > -};
> > -
> > -
> > -/*
> > - * Send the callback reply
> > - */
> > -int bc_send(struct rpc_rqst *req)
> > -{
> > - struct rpc_task *task;
> > - int ret;
> > -
> > - dprintk("RPC: bc_send req= %p\n", req);
> > - task = rpc_run_bc_task(req, &nfs41_callback_ops);
> > - if (IS_ERR(task))
> > - ret = PTR_ERR(task);
> > - else {
> > - WARN_ON_ONCE(atomic_read(&task->tk_count) != 1);
> > - ret = task->tk_status;
> > - rpc_put_task(task);
> > - }
> > - dprintk("RPC: bc_send ret= %d\n", ret);
> > - return ret;
> > -}
> > -
> > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > index 852ae60..f86b7be 100644
> > --- a/net/sunrpc/svc.c
> > +++ b/net/sunrpc/svc.c
> > @@ -1350,6 +1350,11 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> > {
> > struct kvec *argv = &rqstp->rq_arg.head[0];
> > struct kvec *resv = &rqstp->rq_res.head[0];
> > + static const struct rpc_call_ops reply_ops = { };
> > + struct rpc_task *task;
> > + int error;
> > +
> > + dprintk("svc: %s(%p)\n", __func__, req);
> >
> > /* Build the svc_rqst used by the common processing routine */
> > rqstp->rq_xprt = serv->sv_bc_xprt;
> > @@ -1372,21 +1377,33 @@ bc_svc_process(struct svc_serv *serv, struct rpc_rqst *req,
> >
> > /*
> > * Skip the next two words because they've already been
> > - * processed in the trasport
> > + * processed in the transport
> > */
> > svc_getu32(argv); /* XID */
> > svc_getnl(argv); /* CALLDIR */
> >
> > - /* Returns 1 for send, 0 for drop */
> > - if (svc_process_common(rqstp, argv, resv)) {
> > - memcpy(&req->rq_snd_buf, &rqstp->rq_res,
> > - sizeof(req->rq_snd_buf));
> > - return bc_send(req);
> > - } else {
> > - /* drop request */
> > + /* Parse and execute the bc call */
> > + if (!svc_process_common(rqstp, argv, resv)) {
> > + /* Processing error: drop the request */
> > xprt_free_bc_request(req);
> > return 0;
> > }
> > +
> > + /* Finally, send the reply synchronously */
> > + memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf));
> > + task = rpc_run_bc_task(req, &reply_ops);
> > + if (IS_ERR(task)) {
> > + error = PTR_ERR(task);
> > + goto out;
> > + }
> > +
> > + WARN_ON_ONCE(atomic_read(&task->tk_count) != 1);
> > + error = task->tk_status;
> > + rpc_put_task(task);
> > +
> > +out:
> > + dprintk("svc: %s(), error=%d\n", __func__, error);
> > + return error;
> > }
> > EXPORT_SYMBOL_GPL(bc_svc_process);
> > #endif /* CONFIG_SUNRPC_BACKCHANNEL */
> >
>
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
>
>

2015-06-01 20:26:42

by J. Bruce Fields

[permalink] [raw]
Subject: Re: [PATCH v2 08/10] svcrdma: Add backward direction service for RPC/RDMA transport

On Tue, May 26, 2015 at 01:49:45PM -0400, Chuck Lever wrote:
> Introduce some pre-requisite infrastructure needed for handling
> RPC/RDMA bi-direction on the client side.
>
> On NFSv4.1 mount points, the client uses this transport endpoint to
> receive backward direction calls and route replies back to the
> NFS server.

Am I missing something, or is this pretty much dead code for now?

In which case, I'd rather wait on it.

--b.

>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>
> include/linux/sunrpc/svc_rdma.h | 6 +++
> include/linux/sunrpc/xprt.h | 1 +
> net/sunrpc/xprtrdma/svc_rdma.c | 6 +++
> net/sunrpc/xprtrdma/svc_rdma_transport.c | 59 ++++++++++++++++++++++++++++++
> 4 files changed, 71 insertions(+), 1 deletions(-)
>
> diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
> index cb94ee4..0e7234d 100644
> --- a/include/linux/sunrpc/svc_rdma.h
> +++ b/include/linux/sunrpc/svc_rdma.h
> @@ -231,9 +231,13 @@ extern void svc_rdma_put_frmr(struct svcxprt_rdma *,
> struct svc_rdma_fastreg_mr *);
> extern void svc_sq_reap(struct svcxprt_rdma *);
> extern void svc_rq_reap(struct svcxprt_rdma *);
> -extern struct svc_xprt_class svc_rdma_class;
> extern void svc_rdma_prep_reply_hdr(struct svc_rqst *);
>
> +extern struct svc_xprt_class svc_rdma_class;
> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
> +extern struct svc_xprt_class svc_rdma_bc_class;
> +#endif
> +
> /* svc_rdma.c */
> extern int svc_rdma_init(void);
> extern void svc_rdma_cleanup(void);
> diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
> index 8b93ef5..693f9f1 100644
> --- a/include/linux/sunrpc/xprt.h
> +++ b/include/linux/sunrpc/xprt.h
> @@ -150,6 +150,7 @@ enum xprt_transports {
> XPRT_TRANSPORT_TCP = IPPROTO_TCP,
> XPRT_TRANSPORT_BC_TCP = IPPROTO_TCP | XPRT_TRANSPORT_BC,
> XPRT_TRANSPORT_RDMA = 256,
> + XPRT_TRANSPORT_BC_RDMA = XPRT_TRANSPORT_RDMA | XPRT_TRANSPORT_BC,
> XPRT_TRANSPORT_LOCAL = 257,
> };
>
> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
> index 8eedb60..7a18ae4 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
> @@ -244,6 +244,9 @@ void svc_rdma_cleanup(void)
> unregister_sysctl_table(svcrdma_table_header);
> svcrdma_table_header = NULL;
> }
> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
> + svc_unreg_xprt_class(&svc_rdma_bc_class);
> +#endif
> svc_unreg_xprt_class(&svc_rdma_class);
> kmem_cache_destroy(svc_rdma_map_cachep);
> kmem_cache_destroy(svc_rdma_ctxt_cachep);
> @@ -291,6 +294,9 @@ int svc_rdma_init(void)
>
> /* Register RDMA with the SVC transport switch */
> svc_reg_xprt_class(&svc_rdma_class);
> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
> + svc_reg_xprt_class(&svc_rdma_bc_class);
> +#endif
> return 0;
> err1:
> kmem_cache_destroy(svc_rdma_map_cachep);
> diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> index 3b4c2ff..9b8bccd 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> @@ -56,6 +56,7 @@
>
> #define RPCDBG_FACILITY RPCDBG_SVCXPRT
>
> +static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *, int);
> static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
> struct net *net,
> struct sockaddr *sa, int salen,
> @@ -95,6 +96,64 @@ struct svc_xprt_class svc_rdma_class = {
> .xcl_ident = XPRT_TRANSPORT_RDMA,
> };
>
> +#if defined(CONFIG_SUNRPC_BACKCHANNEL)
> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *, struct net *,
> + struct sockaddr *, int, int);
> +static void svc_rdma_bc_detach(struct svc_xprt *);
> +static void svc_rdma_bc_free(struct svc_xprt *);
> +
> +static struct svc_xprt_ops svc_rdma_bc_ops = {
> + .xpo_create = svc_rdma_bc_create,
> + .xpo_detach = svc_rdma_bc_detach,
> + .xpo_free = svc_rdma_bc_free,
> + .xpo_prep_reply_hdr = svc_rdma_prep_reply_hdr,
> + .xpo_secure_port = svc_rdma_secure_port,
> +};
> +
> +struct svc_xprt_class svc_rdma_bc_class = {
> + .xcl_name = "rdma-bc",
> + .xcl_owner = THIS_MODULE,
> + .xcl_ops = &svc_rdma_bc_ops,
> + .xcl_max_payload = (1024 - RPCRDMA_HDRLEN_MIN),
> + .xcl_ident = XPRT_TRANSPORT_BC_RDMA,
> +};
> +
> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *serv,
> + struct net *net,
> + struct sockaddr *sa, int salen,
> + int flags)
> +{
> + struct svcxprt_rdma *cma_xprt;
> + struct svc_xprt *xprt;
> +
> + cma_xprt = rdma_create_xprt(serv, 0);
> + if (!cma_xprt)
> + return ERR_PTR(-ENOMEM);
> + xprt = &cma_xprt->sc_xprt;
> +
> + svc_xprt_init(net, &svc_rdma_bc_class, xprt, serv);
> + serv->sv_bc_xprt = xprt;
> +
> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
> + return xprt;
> +}
> +
> +static void svc_rdma_bc_detach(struct svc_xprt *xprt)
> +{
> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
> +}
> +
> +static void svc_rdma_bc_free(struct svc_xprt *xprt)
> +{
> + struct svcxprt_rdma *rdma =
> + container_of(xprt, struct svcxprt_rdma, sc_xprt);
> +
> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
> + if (xprt)
> + kfree(rdma);
> +}
> +#endif /* CONFIG_SUNRPC_BACKCHANNEL */
> +
> struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
> {
> struct svc_rdma_op_ctxt *ctxt;

2015-06-01 20:43:23

by Chuck Lever III

[permalink] [raw]
Subject: Re: [PATCH v2 08/10] svcrdma: Add backward direction service for RPC/RDMA transport


On Jun 1, 2015, at 4:26 PM, J. Bruce Fields <[email protected]> wrote:

> On Tue, May 26, 2015 at 01:49:45PM -0400, Chuck Lever wrote:
>> Introduce some pre-requisite infrastructure needed for handling
>> RPC/RDMA bi-direction on the client side.
>>
>> On NFSv4.1 mount points, the client uses this transport endpoint to
>> receive backward direction calls and route replies back to the
>> NFS server.
>
> Am I missing something, or is this pretty much dead code for now?
>
> In which case, I'd rather wait on it.

When I submit the client-side backchannel patches, should I submit
this patch through Anna and request your Acked-by?


> --b.
>
>>
>> Signed-off-by: Chuck Lever <[email protected]>
>> ---
>>
>> include/linux/sunrpc/svc_rdma.h | 6 +++
>> include/linux/sunrpc/xprt.h | 1 +
>> net/sunrpc/xprtrdma/svc_rdma.c | 6 +++
>> net/sunrpc/xprtrdma/svc_rdma_transport.c | 59 ++++++++++++++++++++++++++++++
>> 4 files changed, 71 insertions(+), 1 deletions(-)
>>
>> diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
>> index cb94ee4..0e7234d 100644
>> --- a/include/linux/sunrpc/svc_rdma.h
>> +++ b/include/linux/sunrpc/svc_rdma.h
>> @@ -231,9 +231,13 @@ extern void svc_rdma_put_frmr(struct svcxprt_rdma *,
>> struct svc_rdma_fastreg_mr *);
>> extern void svc_sq_reap(struct svcxprt_rdma *);
>> extern void svc_rq_reap(struct svcxprt_rdma *);
>> -extern struct svc_xprt_class svc_rdma_class;
>> extern void svc_rdma_prep_reply_hdr(struct svc_rqst *);
>>
>> +extern struct svc_xprt_class svc_rdma_class;
>> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
>> +extern struct svc_xprt_class svc_rdma_bc_class;
>> +#endif
>> +
>> /* svc_rdma.c */
>> extern int svc_rdma_init(void);
>> extern void svc_rdma_cleanup(void);
>> diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
>> index 8b93ef5..693f9f1 100644
>> --- a/include/linux/sunrpc/xprt.h
>> +++ b/include/linux/sunrpc/xprt.h
>> @@ -150,6 +150,7 @@ enum xprt_transports {
>> XPRT_TRANSPORT_TCP = IPPROTO_TCP,
>> XPRT_TRANSPORT_BC_TCP = IPPROTO_TCP | XPRT_TRANSPORT_BC,
>> XPRT_TRANSPORT_RDMA = 256,
>> + XPRT_TRANSPORT_BC_RDMA = XPRT_TRANSPORT_RDMA | XPRT_TRANSPORT_BC,
>> XPRT_TRANSPORT_LOCAL = 257,
>> };
>>
>> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
>> index 8eedb60..7a18ae4 100644
>> --- a/net/sunrpc/xprtrdma/svc_rdma.c
>> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
>> @@ -244,6 +244,9 @@ void svc_rdma_cleanup(void)
>> unregister_sysctl_table(svcrdma_table_header);
>> svcrdma_table_header = NULL;
>> }
>> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
>> + svc_unreg_xprt_class(&svc_rdma_bc_class);
>> +#endif
>> svc_unreg_xprt_class(&svc_rdma_class);
>> kmem_cache_destroy(svc_rdma_map_cachep);
>> kmem_cache_destroy(svc_rdma_ctxt_cachep);
>> @@ -291,6 +294,9 @@ int svc_rdma_init(void)
>>
>> /* Register RDMA with the SVC transport switch */
>> svc_reg_xprt_class(&svc_rdma_class);
>> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
>> + svc_reg_xprt_class(&svc_rdma_bc_class);
>> +#endif
>> return 0;
>> err1:
>> kmem_cache_destroy(svc_rdma_map_cachep);
>> diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
>> index 3b4c2ff..9b8bccd 100644
>> --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
>> +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
>> @@ -56,6 +56,7 @@
>>
>> #define RPCDBG_FACILITY RPCDBG_SVCXPRT
>>
>> +static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *, int);
>> static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
>> struct net *net,
>> struct sockaddr *sa, int salen,
>> @@ -95,6 +96,64 @@ struct svc_xprt_class svc_rdma_class = {
>> .xcl_ident = XPRT_TRANSPORT_RDMA,
>> };
>>
>> +#if defined(CONFIG_SUNRPC_BACKCHANNEL)
>> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *, struct net *,
>> + struct sockaddr *, int, int);
>> +static void svc_rdma_bc_detach(struct svc_xprt *);
>> +static void svc_rdma_bc_free(struct svc_xprt *);
>> +
>> +static struct svc_xprt_ops svc_rdma_bc_ops = {
>> + .xpo_create = svc_rdma_bc_create,
>> + .xpo_detach = svc_rdma_bc_detach,
>> + .xpo_free = svc_rdma_bc_free,
>> + .xpo_prep_reply_hdr = svc_rdma_prep_reply_hdr,
>> + .xpo_secure_port = svc_rdma_secure_port,
>> +};
>> +
>> +struct svc_xprt_class svc_rdma_bc_class = {
>> + .xcl_name = "rdma-bc",
>> + .xcl_owner = THIS_MODULE,
>> + .xcl_ops = &svc_rdma_bc_ops,
>> + .xcl_max_payload = (1024 - RPCRDMA_HDRLEN_MIN),
>> + .xcl_ident = XPRT_TRANSPORT_BC_RDMA,
>> +};
>> +
>> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *serv,
>> + struct net *net,
>> + struct sockaddr *sa, int salen,
>> + int flags)
>> +{
>> + struct svcxprt_rdma *cma_xprt;
>> + struct svc_xprt *xprt;
>> +
>> + cma_xprt = rdma_create_xprt(serv, 0);
>> + if (!cma_xprt)
>> + return ERR_PTR(-ENOMEM);
>> + xprt = &cma_xprt->sc_xprt;
>> +
>> + svc_xprt_init(net, &svc_rdma_bc_class, xprt, serv);
>> + serv->sv_bc_xprt = xprt;
>> +
>> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
>> + return xprt;
>> +}
>> +
>> +static void svc_rdma_bc_detach(struct svc_xprt *xprt)
>> +{
>> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
>> +}
>> +
>> +static void svc_rdma_bc_free(struct svc_xprt *xprt)
>> +{
>> + struct svcxprt_rdma *rdma =
>> + container_of(xprt, struct svcxprt_rdma, sc_xprt);
>> +
>> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
>> + if (xprt)
>> + kfree(rdma);
>> +}
>> +#endif /* CONFIG_SUNRPC_BACKCHANNEL */
>> +
>> struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
>> {
>> struct svc_rdma_op_ctxt *ctxt;

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2015-06-02 15:28:37

by J. Bruce Fields

[permalink] [raw]
Subject: Re: [PATCH v2 02/10] svcrdma: Add missing access_ok() call in svc_rdma.c

On Mon, Jun 01, 2015 at 04:01:15PM -0400, Chuck Lever wrote:
>
> On Jun 1, 2015, at 3:31 PM, J. Bruce Fields <[email protected]> wrote:
>
> > On Tue, May 26, 2015 at 01:48:47PM -0400, Chuck Lever wrote:
> >> Ensure a proper memory access check is done by read_reset_stat(),
> >> then fix the following compiler warning.
> >>
> >> In file included from linux-2.6/include/net/checksum.h:25,
> >> from linux-2.6/include/linux/skbuff.h:31,
> >> from linux-2.6/include/linux/icmpv6.h:4,
> >> from linux-2.6/include/linux/ipv6.h:64,
> >> from linux-2.6/include/net/ipv6.h:16,
> >> from linux-2.6/include/linux/sunrpc/clnt.h:27,
> >> from linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:47:
> >> In function ‘copy_to_user’,
> >> inlined from ‘read_reset_stat’ at
> >> linux-2.6/net/sunrpc/xprtrdma/svc_rdma.c:113:
> >> linux-2.6/arch/x86/include/asm/uaccess.h:735: warning:
> >> call to ‘__copy_to_user_overflow’ declared with attribute warning:
> >> copy_to_user() buffer size is not provably correct
> >
> > How do you get that warning? I can't hit it even with
> > CONFIG_USER_STRICT_USER_COPY_CHECKS set. Based on comments in arch/x86
> > I would have thought this would only trigger when len was a constant.
>
> I only seem to see this warning when building and testing on EL6, gcc
> version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC).
>
> include/linux/compiler-gcc4.h has this:
>
> 16 #if GCC_VERSION >= 40100 && GCC_VERSION < 40600
> 17 # define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
> 18 #endif
>
> and include/linux/compiler.h has this:
>
> 384 #ifndef __compiletime_object_size
> 385 # define __compiletime_object_size(obj) -1
> 386 #endif
>
> Now, arch/x86/include/asm/uaccess.h spells copy_to_user() this way:
>
> 722 static inline unsigned long __must_check
> 723 copy_to_user(void __user *to, const void *from, unsigned long n)
> 724 {
> 725 int sz = __compiletime_object_size(from);
> 726
> 727 might_fault();
> 728
> 729 /* See the comment in copy_from_user() above. */
> 730 if (likely(sz < 0 || sz >= n))
> 731 n = _copy_to_user(to, from, n);
> 732 else if(__builtin_constant_p(n))
> 733 copy_to_user_overflow();
> 734 else
> 735 __copy_to_user_overflow(sz, n);
> 736
> 737 return n;
> 738 }
>
> If __compiletime_object_size isn’t defined for your compiler version, then
> int sz is set to -1, and _copy_to_user() is invoked directly. Otherwise
> if n is a variable, the warning pops.
>
> This might be a false positive. The “from” parameter is always 32 bytes in
> read_reset_stat().

Huh. OK, I don't really understand this logic.

> I think adding an access_ok() call here is reasonable, though?

Well, we're just moving the access_ok() out of copy_to_user (by
switching to __copy_to_user) and doing it by hand ourselves instead.
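
To spell out the equivalence with a short sketch (these lines are just the
relevant hunks of the patch placed side by side):

    /* before: copy_to_user() performs the access_ok() check internally */
    if (len && copy_to_user(buffer, str_buf, len))
            return -EFAULT;

    /* after: the check is open-coded up front, then the unchecked
     * __copy_to_user() helper is used for the actual copy */
    if (!access_ok(VERIFY_WRITE, buffer, *lenp))
            return -EFAULT;
    /* ... */
    if (len && __copy_to_user(buffer, str_buf, len))
            return -EFAULT;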

It looks to me like the false positive is the bug. The comments in
uaccess.h at least suggest that they did intend to avoid such false
positives.

Dropping this for now.

--b.

>
> > --b.
> >
> >>
> >> Signed-off-by: Chuck Lever <[email protected]>
> >> ---
> >>
> >> net/sunrpc/xprtrdma/svc_rdma.c | 8 ++++++--
> >> 1 files changed, 6 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
> >> index c1b6270..8eedb60 100644
> >> --- a/net/sunrpc/xprtrdma/svc_rdma.c
> >> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
> >> @@ -98,7 +98,11 @@ static int read_reset_stat(struct ctl_table *table, int write,
> >> else {
> >> char str_buf[32];
> >> char *data;
> >> - int len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
> >> + int len;
> >> +
> >> + if (!access_ok(VERIFY_WRITE, buffer, *lenp))
> >> + return -EFAULT;
> >> + len = snprintf(str_buf, 32, "%d\n", atomic_read(stat));
> >> if (len >= 32)
> >> return -EFAULT;
> >> len = strlen(str_buf);
> >> @@ -110,7 +114,7 @@ static int read_reset_stat(struct ctl_table *table, int write,
> >> len -= *ppos;
> >> if (len > *lenp)
> >> len = *lenp;
> >> - if (len && copy_to_user(buffer, str_buf, len))
> >> + if (len && __copy_to_user(buffer, str_buf, len))
> >> return -EFAULT;
> >> *lenp = len;
> >> *ppos += len;
>
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
>
>

2015-06-02 15:30:55

by J. Bruce Fields

[permalink] [raw]
Subject: Re: [PATCH v2 08/10] svcrdma: Add backward direction service for RPC/RDMA transport

On Mon, Jun 01, 2015 at 04:45:26PM -0400, Chuck Lever wrote:
>
> On Jun 1, 2015, at 4:26 PM, J. Bruce Fields <[email protected]> wrote:
>
> > On Tue, May 26, 2015 at 01:49:45PM -0400, Chuck Lever wrote:
> >> Introduce some pre-requisite infrastructure needed for handling
> >> RPC/RDMA bi-direction on the client side.
> >>
> >> On NFSv4.1 mount points, the client uses this transport endpoint to
> >> receive backward direction calls and route replies back to the
> >> NFS server.
> >
> > Am I missing something, or is this pretty much dead code for now?
> >
> > In which case, I'd rather wait on it.
>
> When I submit the client-side backchannel patches, should I submit
> this patch through Anna and request your Acked-by?

Sure, feel free to add my Acked-by.

Or I guess I won't worry too much about some dead code if the
client-side patches are coming soon.

Whatever's simplest for you.

--b.

>
>
> > --b.
> >
> >>
> >> Signed-off-by: Chuck Lever <[email protected]>
> >> ---
> >>
> >> include/linux/sunrpc/svc_rdma.h | 6 +++
> >> include/linux/sunrpc/xprt.h | 1 +
> >> net/sunrpc/xprtrdma/svc_rdma.c | 6 +++
> >> net/sunrpc/xprtrdma/svc_rdma_transport.c | 59 ++++++++++++++++++++++++++++++
> >> 4 files changed, 71 insertions(+), 1 deletions(-)
> >>
> >> diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
> >> index cb94ee4..0e7234d 100644
> >> --- a/include/linux/sunrpc/svc_rdma.h
> >> +++ b/include/linux/sunrpc/svc_rdma.h
> >> @@ -231,9 +231,13 @@ extern void svc_rdma_put_frmr(struct svcxprt_rdma *,
> >> struct svc_rdma_fastreg_mr *);
> >> extern void svc_sq_reap(struct svcxprt_rdma *);
> >> extern void svc_rq_reap(struct svcxprt_rdma *);
> >> -extern struct svc_xprt_class svc_rdma_class;
> >> extern void svc_rdma_prep_reply_hdr(struct svc_rqst *);
> >>
> >> +extern struct svc_xprt_class svc_rdma_class;
> >> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
> >> +extern struct svc_xprt_class svc_rdma_bc_class;
> >> +#endif
> >> +
> >> /* svc_rdma.c */
> >> extern int svc_rdma_init(void);
> >> extern void svc_rdma_cleanup(void);
> >> diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
> >> index 8b93ef5..693f9f1 100644
> >> --- a/include/linux/sunrpc/xprt.h
> >> +++ b/include/linux/sunrpc/xprt.h
> >> @@ -150,6 +150,7 @@ enum xprt_transports {
> >> XPRT_TRANSPORT_TCP = IPPROTO_TCP,
> >> XPRT_TRANSPORT_BC_TCP = IPPROTO_TCP | XPRT_TRANSPORT_BC,
> >> XPRT_TRANSPORT_RDMA = 256,
> >> + XPRT_TRANSPORT_BC_RDMA = XPRT_TRANSPORT_RDMA | XPRT_TRANSPORT_BC,
> >> XPRT_TRANSPORT_LOCAL = 257,
> >> };
> >>
> >> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
> >> index 8eedb60..7a18ae4 100644
> >> --- a/net/sunrpc/xprtrdma/svc_rdma.c
> >> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
> >> @@ -244,6 +244,9 @@ void svc_rdma_cleanup(void)
> >> unregister_sysctl_table(svcrdma_table_header);
> >> svcrdma_table_header = NULL;
> >> }
> >> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
> >> + svc_unreg_xprt_class(&svc_rdma_bc_class);
> >> +#endif
> >> svc_unreg_xprt_class(&svc_rdma_class);
> >> kmem_cache_destroy(svc_rdma_map_cachep);
> >> kmem_cache_destroy(svc_rdma_ctxt_cachep);
> >> @@ -291,6 +294,9 @@ int svc_rdma_init(void)
> >>
> >> /* Register RDMA with the SVC transport switch */
> >> svc_reg_xprt_class(&svc_rdma_class);
> >> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
> >> + svc_reg_xprt_class(&svc_rdma_bc_class);
> >> +#endif
> >> return 0;
> >> err1:
> >> kmem_cache_destroy(svc_rdma_map_cachep);
> >> diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> >> index 3b4c2ff..9b8bccd 100644
> >> --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
> >> +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> >> @@ -56,6 +56,7 @@
> >>
> >> #define RPCDBG_FACILITY RPCDBG_SVCXPRT
> >>
> >> +static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *, int);
> >> static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
> >> struct net *net,
> >> struct sockaddr *sa, int salen,
> >> @@ -95,6 +96,64 @@ struct svc_xprt_class svc_rdma_class = {
> >> .xcl_ident = XPRT_TRANSPORT_RDMA,
> >> };
> >>
> >> +#if defined(CONFIG_SUNRPC_BACKCHANNEL)
> >> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *, struct net *,
> >> + struct sockaddr *, int, int);
> >> +static void svc_rdma_bc_detach(struct svc_xprt *);
> >> +static void svc_rdma_bc_free(struct svc_xprt *);
> >> +
> >> +static struct svc_xprt_ops svc_rdma_bc_ops = {
> >> + .xpo_create = svc_rdma_bc_create,
> >> + .xpo_detach = svc_rdma_bc_detach,
> >> + .xpo_free = svc_rdma_bc_free,
> >> + .xpo_prep_reply_hdr = svc_rdma_prep_reply_hdr,
> >> + .xpo_secure_port = svc_rdma_secure_port,
> >> +};
> >> +
> >> +struct svc_xprt_class svc_rdma_bc_class = {
> >> + .xcl_name = "rdma-bc",
> >> + .xcl_owner = THIS_MODULE,
> >> + .xcl_ops = &svc_rdma_bc_ops,
> >> + .xcl_max_payload = (1024 - RPCRDMA_HDRLEN_MIN),
> >> + .xcl_ident = XPRT_TRANSPORT_BC_RDMA,
> >> +};
> >> +
> >> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *serv,
> >> + struct net *net,
> >> + struct sockaddr *sa, int salen,
> >> + int flags)
> >> +{
> >> + struct svcxprt_rdma *cma_xprt;
> >> + struct svc_xprt *xprt;
> >> +
> >> + cma_xprt = rdma_create_xprt(serv, 0);
> >> + if (!cma_xprt)
> >> + return ERR_PTR(-ENOMEM);
> >> + xprt = &cma_xprt->sc_xprt;
> >> +
> >> + svc_xprt_init(net, &svc_rdma_bc_class, xprt, serv);
> >> + serv->sv_bc_xprt = xprt;
> >> +
> >> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
> >> + return xprt;
> >> +}
> >> +
> >> +static void svc_rdma_bc_detach(struct svc_xprt *xprt)
> >> +{
> >> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
> >> +}
> >> +
> >> +static void svc_rdma_bc_free(struct svc_xprt *xprt)
> >> +{
> >> + struct svcxprt_rdma *rdma =
> >> + container_of(xprt, struct svcxprt_rdma, sc_xprt);
> >> +
> >> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
> >> + if (xprt)
> >> + kfree(rdma);
> >> +}
> >> +#endif /* CONFIG_SUNRPC_BACKCHANNEL */
> >> +
> >> struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
> >> {
> >> struct svc_rdma_op_ctxt *ctxt;
>
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
>
>

2015-06-03 19:46:52

by Chuck Lever III

[permalink] [raw]
Subject: Re: [PATCH v2 08/10] svcrdma: Add backward direction service for RPC/RDMA transport


On Jun 2, 2015, at 11:30 AM, J. Bruce Fields <[email protected]> wrote:

> On Mon, Jun 01, 2015 at 04:45:26PM -0400, Chuck Lever wrote:
>>
>> On Jun 1, 2015, at 4:26 PM, J. Bruce Fields <[email protected]> wrote:
>>
>>> On Tue, May 26, 2015 at 01:49:45PM -0400, Chuck Lever wrote:
>>>> Introduce some pre-requisite infrastructure needed for handling
>>>> RPC/RDMA bi-direction on the client side.
>>>>
>>>> On NFSv4.1 mount points, the client uses this transport endpoint to
>>>> receive backward direction calls and route replies back to the
>>>> NFS server.
>>>
>>> Am I missing something, or is this pretty much dead code for now?
>>>
>>> In which case, I'd rather wait on it.
>>
>> When I submit the client-side backchannel patches, should I submit
>> this patch through Anna and request your Acked-by?
>
> Sure, feel free to add my Acked-by.
>
> Or I guess I won't worry too much about some dead code if the
> client-side patches are coming soon.

There are more pre-requisites on the client than there are on the server.
I anticipate it will be at least two more merge windows before the client
will be ready to merge backchannel support.

I’ll add this patch back into the client backchannel series.


> Whatever's simplest for you.
>
> --b.
>
>>
>>
>>> --b.
>>>
>>>>
>>>> Signed-off-by: Chuck Lever <[email protected]>
>>>> ---
>>>>
>>>> include/linux/sunrpc/svc_rdma.h | 6 +++
>>>> include/linux/sunrpc/xprt.h | 1 +
>>>> net/sunrpc/xprtrdma/svc_rdma.c | 6 +++
>>>> net/sunrpc/xprtrdma/svc_rdma_transport.c | 59 ++++++++++++++++++++++++++++++
>>>> 4 files changed, 71 insertions(+), 1 deletions(-)
>>>>
>>>> diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
>>>> index cb94ee4..0e7234d 100644
>>>> --- a/include/linux/sunrpc/svc_rdma.h
>>>> +++ b/include/linux/sunrpc/svc_rdma.h
>>>> @@ -231,9 +231,13 @@ extern void svc_rdma_put_frmr(struct svcxprt_rdma *,
>>>> struct svc_rdma_fastreg_mr *);
>>>> extern void svc_sq_reap(struct svcxprt_rdma *);
>>>> extern void svc_rq_reap(struct svcxprt_rdma *);
>>>> -extern struct svc_xprt_class svc_rdma_class;
>>>> extern void svc_rdma_prep_reply_hdr(struct svc_rqst *);
>>>>
>>>> +extern struct svc_xprt_class svc_rdma_class;
>>>> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
>>>> +extern struct svc_xprt_class svc_rdma_bc_class;
>>>> +#endif
>>>> +
>>>> /* svc_rdma.c */
>>>> extern int svc_rdma_init(void);
>>>> extern void svc_rdma_cleanup(void);
>>>> diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
>>>> index 8b93ef5..693f9f1 100644
>>>> --- a/include/linux/sunrpc/xprt.h
>>>> +++ b/include/linux/sunrpc/xprt.h
>>>> @@ -150,6 +150,7 @@ enum xprt_transports {
>>>> XPRT_TRANSPORT_TCP = IPPROTO_TCP,
>>>> XPRT_TRANSPORT_BC_TCP = IPPROTO_TCP | XPRT_TRANSPORT_BC,
>>>> XPRT_TRANSPORT_RDMA = 256,
>>>> + XPRT_TRANSPORT_BC_RDMA = XPRT_TRANSPORT_RDMA | XPRT_TRANSPORT_BC,
>>>> XPRT_TRANSPORT_LOCAL = 257,
>>>> };
>>>>
>>>> diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
>>>> index 8eedb60..7a18ae4 100644
>>>> --- a/net/sunrpc/xprtrdma/svc_rdma.c
>>>> +++ b/net/sunrpc/xprtrdma/svc_rdma.c
>>>> @@ -244,6 +244,9 @@ void svc_rdma_cleanup(void)
>>>> unregister_sysctl_table(svcrdma_table_header);
>>>> svcrdma_table_header = NULL;
>>>> }
>>>> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
>>>> + svc_unreg_xprt_class(&svc_rdma_bc_class);
>>>> +#endif
>>>> svc_unreg_xprt_class(&svc_rdma_class);
>>>> kmem_cache_destroy(svc_rdma_map_cachep);
>>>> kmem_cache_destroy(svc_rdma_ctxt_cachep);
>>>> @@ -291,6 +294,9 @@ int svc_rdma_init(void)
>>>>
>>>> /* Register RDMA with the SVC transport switch */
>>>> svc_reg_xprt_class(&svc_rdma_class);
>>>> +#ifdef CONFIG_SUNRPC_BACKCHANNEL
>>>> + svc_reg_xprt_class(&svc_rdma_bc_class);
>>>> +#endif
>>>> return 0;
>>>> err1:
>>>> kmem_cache_destroy(svc_rdma_map_cachep);
>>>> diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
>>>> index 3b4c2ff..9b8bccd 100644
>>>> --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
>>>> +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
>>>> @@ -56,6 +56,7 @@
>>>>
>>>> #define RPCDBG_FACILITY RPCDBG_SVCXPRT
>>>>
>>>> +static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *, int);
>>>> static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
>>>> struct net *net,
>>>> struct sockaddr *sa, int salen,
>>>> @@ -95,6 +96,64 @@ struct svc_xprt_class svc_rdma_class = {
>>>> .xcl_ident = XPRT_TRANSPORT_RDMA,
>>>> };
>>>>
>>>> +#if defined(CONFIG_SUNRPC_BACKCHANNEL)
>>>> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *, struct net *,
>>>> + struct sockaddr *, int, int);
>>>> +static void svc_rdma_bc_detach(struct svc_xprt *);
>>>> +static void svc_rdma_bc_free(struct svc_xprt *);
>>>> +
>>>> +static struct svc_xprt_ops svc_rdma_bc_ops = {
>>>> + .xpo_create = svc_rdma_bc_create,
>>>> + .xpo_detach = svc_rdma_bc_detach,
>>>> + .xpo_free = svc_rdma_bc_free,
>>>> + .xpo_prep_reply_hdr = svc_rdma_prep_reply_hdr,
>>>> + .xpo_secure_port = svc_rdma_secure_port,
>>>> +};
>>>> +
>>>> +struct svc_xprt_class svc_rdma_bc_class = {
>>>> + .xcl_name = "rdma-bc",
>>>> + .xcl_owner = THIS_MODULE,
>>>> + .xcl_ops = &svc_rdma_bc_ops,
>>>> + .xcl_max_payload = (1024 - RPCRDMA_HDRLEN_MIN),
>>>> + .xcl_ident = XPRT_TRANSPORT_BC_RDMA,
>>>> +};
>>>> +
>>>> +static struct svc_xprt *svc_rdma_bc_create(struct svc_serv *serv,
>>>> + struct net *net,
>>>> + struct sockaddr *sa, int salen,
>>>> + int flags)
>>>> +{
>>>> + struct svcxprt_rdma *cma_xprt;
>>>> + struct svc_xprt *xprt;
>>>> +
>>>> + cma_xprt = rdma_create_xprt(serv, 0);
>>>> + if (!cma_xprt)
>>>> + return ERR_PTR(-ENOMEM);
>>>> + xprt = &cma_xprt->sc_xprt;
>>>> +
>>>> + svc_xprt_init(net, &svc_rdma_bc_class, xprt, serv);
>>>> + serv->sv_bc_xprt = xprt;
>>>> +
>>>> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
>>>> + return xprt;
>>>> +}
>>>> +
>>>> +static void svc_rdma_bc_detach(struct svc_xprt *xprt)
>>>> +{
>>>> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
>>>> +}
>>>> +
>>>> +static void svc_rdma_bc_free(struct svc_xprt *xprt)
>>>> +{
>>>> + struct svcxprt_rdma *rdma =
>>>> + container_of(xprt, struct svcxprt_rdma, sc_xprt);
>>>> +
>>>> + dprintk("svcrdma: %s(%p)\n", __func__, xprt);
>>>> + if (xprt)
>>>> + kfree(rdma);
>>>> +}
>>>> +#endif /* CONFIG_SUNRPC_BACKCHANNEL */
>>>> +
>>>> struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
>>>> {
>>>> struct svc_rdma_op_ctxt *ctxt;
>>
>> --
>> Chuck Lever
>> chuck[dot]lever[at]oracle[dot]com
>>
>>

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2015-06-03 19:47:42

by J. Bruce Fields

[permalink] [raw]
Subject: Re: [PATCH v2 08/10] svcrdma: Add backward direction service for RPC/RDMA transport

On Wed, Jun 03, 2015 at 03:49:13PM -0400, Chuck Lever wrote:
>
> On Jun 2, 2015, at 11:30 AM, J. Bruce Fields <[email protected]> wrote:
>
> > On Mon, Jun 01, 2015 at 04:45:26PM -0400, Chuck Lever wrote:
> >>
> >> On Jun 1, 2015, at 4:26 PM, J. Bruce Fields <[email protected]> wrote:
> >>
> >>> On Tue, May 26, 2015 at 01:49:45PM -0400, Chuck Lever wrote:
> >>>> Introduce some pre-requisite infrastructure needed for handling
> >>>> RPC/RDMA bi-direction on the client side.
> >>>>
> >>>> On NFSv4.1 mount points, the client uses this transport endpoint to
> >>>> receive backward direction calls and route replies back to the
> >>>> NFS server.
> >>>
> >>> Am I missing something, or is this pretty much dead code for now?
> >>>
> >>> In which case, I'd rather wait on it.
> >>
> >> When I submit the client-side backchannel patches, should I submit
> >> this patch through Anna and request your Acked-by?
> >
> > Sure, feel free to add my Acked-by.
> >
> > Or I guess I won't worry too much about some dead code if the
> > client-side patches are coming soon.
>
> There are more pre-requisites on the client than there are on the server.
> I anticipate it will be at least two more merge windows before the client
> will be ready to merge backchannel support.
>
> I’ll add this patch back into the client backchannel series.

OK.--b.