From: Long Li
To: Steve French, linux-cifs@vger.kernel.org, samba-technical@lists.samba.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	Christoph Hellwig, Tom Talpey, Matthew Wilcox
Cc: Long Li
Subject: [Patch v2 17/19] CIFS: SMBD: Implement SMB READ via RDMA write through memory registration
Date: Sun, 20 Aug 2017 12:04:41 -0700
Message-Id: <1503255883-3041-18-git-send-email-longli@exchange.microsoft.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1503255883-3041-1-git-send-email-longli@exchange.microsoft.com>
References: <1503255883-3041-1-git-send-email-longli@exchange.microsoft.com>

From: Long Li

If the I/O size is larger than rdma_readwrite_threshold, use RDMA write for
SMB READ by specifying the channel SMB2_CHANNEL_RDMA_V1 or
SMB2_CHANNEL_RDMA_V1_INVALIDATE, depending on the SMB dialect in use.

When RDMA write is used, there is no need to read from the transport for the
incoming payload: by the time the SMB READ response comes back, the data has
already been transferred and placed in the pages by RDMA.
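For reference, the channel information appended to the read request is the
SMB Direct Buffer Descriptor V1 ([MS-SMBD] 2.2.3.1). The structure below is
a minimal sketch of the layout assumed here; the actual definition lives in
smbdirect.h, added earlier in this series, and the field names match the
v1->offset/token/length assignments in the smb2pdu.c hunk:

#include <linux/types.h>

/*
 * Sketch of the SMB Direct Buffer Descriptor V1 appended after the
 * SMB2 READ request.  It describes the client's registered memory
 * region so the server can RDMA-write the read payload straight
 * into the client's pages.
 */
struct smbd_buffer_descriptor_v1 {
	__le64 offset;	/* base address of the registered region (MR iova) */
	__le32 token;	/* remote key (rkey) the server uses for the write */
	__le32 length;	/* length of the registered region in bytes */
} __packed;

The dialect check selects the channel: SMB 3.0 does not support remote
invalidation, so SMB2_CHANNEL_RDMA_V1 is used and the client invalidates the
memory registration itself; later dialects use
SMB2_CHANNEL_RDMA_V1_INVALIDATE and let the server invalidate it.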
Signed-off-by: Long Li
---
 fs/cifs/file.c    |  5 +++++
 fs/cifs/smb2pdu.c | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index dec70b3..41460a5 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -42,6 +42,7 @@
 #include "cifs_debug.h"
 #include "cifs_fs_sb.h"
 #include "fscache.h"
+#include "smbdirect.h"
 
 
 static inline int cifs_convert_flags(unsigned int flags)
@@ -3037,6 +3038,8 @@ uncached_fill_pages(struct TCP_Server_Info *server,
 		}
 		if (iter)
 			result = copy_page_from_iter(page, 0, n, iter);
+		else if (rdata->mr)
+			result = n;
 		else
 			result = cifs_read_page_from_socket(server, page, n);
 		if (result < 0)
@@ -3606,6 +3609,8 @@ readpages_fill_pages(struct TCP_Server_Info *server,
 
 		if (iter)
 			result = copy_page_from_iter(page, 0, n, iter);
+		else if (rdata->mr)
+			result = n;
 		else
 			result = cifs_read_page_from_socket(server, page, n);
 		if (result < 0)
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index fbad987..1f08c75 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -2392,6 +2392,39 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
 	req->Length = cpu_to_le32(io_parms->length);
 	req->Offset = cpu_to_le64(io_parms->offset);
 
+	/*
+	 * If we want to do a RDMA write, fill in and append
+	 * smbd_buffer_descriptor_v1 to the end of read request
+	 */
+	if (server->rdma && rdata &&
+		rdata->bytes > server->smbd_conn->rdma_readwrite_threshold) {
+
+		struct smbd_buffer_descriptor_v1 *v1;
+		bool need_invalidate =
+			io_parms->tcon->ses->server->dialect == SMB30_PROT_ID;
+
+		rdata->mr = smbd_register_mr(
+				server->smbd_conn, rdata->pages,
+				rdata->nr_pages, rdata->tailsz,
+				true, need_invalidate);
+		if (!rdata->mr)
+			return -ENOBUFS;
+
+		req->Channel = SMB2_CHANNEL_RDMA_V1_INVALIDATE;
+		if (need_invalidate)
+			req->Channel = SMB2_CHANNEL_RDMA_V1;
+		req->ReadChannelInfoOffset =
+			offsetof(struct smb2_read_plain_req, Buffer);
+		req->ReadChannelInfoLength =
+			sizeof(struct smbd_buffer_descriptor_v1);
+		v1 = (struct smbd_buffer_descriptor_v1 *) &req->Buffer[0];
+		v1->offset = rdata->mr->mr->iova;
+		v1->token = rdata->mr->mr->rkey;
+		v1->length = rdata->mr->mr->length;
+
+		*total_len += sizeof(*v1) - 1;
+	}
+
 	if (request_type & CHAINED_REQUEST) {
 		if (!(request_type & END_OF_CHAIN)) {
 			/* next 8-byte aligned request */
-- 
2.7.4