From: Tom Talpey <ttalpey@microsoft.com>
To: Long Li, Steve French, "linux-cifs@vger.kernel.org",
 "samba-technical@lists.samba.org", "linux-kernel@vger.kernel.org",
 "linux-rdma@vger.kernel.org"
Subject: RE: [[PATCH v1] 07/37] [CIFS] SMBD: Implement receive buffer for handling SMBD response
Date: Mon, 14 Aug 2017 20:09:01 +0000
References: <1501704648-20159-1-git-send-email-longli@exchange.microsoft.com>
 <1501704648-20159-8-git-send-email-longli@exchange.microsoft.com>
In-Reply-To: <1501704648-20159-8-git-send-email-longli@exchange.microsoft.com>
> -----Original Message-----
> From: linux-cifs-owner@vger.kernel.org [mailto:linux-cifs-
> owner@vger.kernel.org] On Behalf Of Long Li
> Sent: Wednesday, August 2, 2017 4:10 PM
> To: Steve French ; linux-cifs@vger.kernel.org; samba-
> technical@lists.samba.org; linux-kernel@vger.kernel.org
> Cc: Long Li
> Subject: [[PATCH v1] 07/37] [CIFS] SMBD: Implement receive buffer for
> handling SMBD response
>
> +/*
> + * Receive buffer operations.
> + * For each remote send, we need to post a receive. The receive buffers are
> + * pre-allocated in advance.
> + */

This approach appears to have been derived from the NFS/RDMA one. The SMB
protocol operates very differently! It is not a strict request/response
protocol. Many operations can become asynchronous when the server chooses
to send a STATUS_PENDING reply, with a second reply arriving later. The
SMB2_CANCEL operation normally has no reply at all. And callbacks for
oplocks can occur at any time.

Even within a single request, many replies can be received. For example,
an SMB2_READ response which exceeds your negotiated receive size of 8192
will be fragmented by SMB Direct into a "train" of multiple messages,
which are logically reassembled by the receiver. Each of them consumes a
credit. Thanks to SMB Direct crediting, the connection does not fail, but
you are undoubtedly spending a lot of time ping-ponging to re-post
receives and allow the message trains to flow. And, because it's never
one-to-one, unneeded receives are also posted before and after such
exchanges.
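To put a rough number on the credit cost of such a train, here is a
back-of-envelope sketch. It assumes an SMB Direct data transfer header of
24 bytes (the MS-SMBD data transfer packet with 8-byte data alignment);
the constant and the helper name are mine, not from the patch:

```c
#include <assert.h>

/* Assumed overhead of the SMB Direct data transfer header (per MS-SMBD,
 * with the data payload 8-byte aligned). Illustrative only. */
#define SMBD_DATA_HDR_SIZE 24u

/* Number of SMB Direct messages -- and therefore receive credits -- a
 * single response payload of 'len' bytes consumes, when each message
 * (header included) fits in a 'max_recv'-byte receive buffer. */
static unsigned int smbd_messages_for(unsigned int len, unsigned int max_recv)
{
	unsigned int per_msg = max_recv - SMBD_DATA_HDR_SIZE;

	return (len + per_msg - 1) / per_msg;	/* ceiling division */
}
```

So a 64KB read response against an 8192-byte receive size consumes nine
credits, not one, and each of those nine messages needs a receive already
posted when it arrives.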
You need to use SMB Direct crediting to post a more traffic-sensitive pool
of receives, and simply manage its depth when posting client requests. As
a start, I'd suggest choosing a constant number, approximately the credit
value you actually negotiate with the peer. Then, just replenish (re-post)
receive buffers as the adapter completes them. You can get more
sophisticated about this strategy later.

Tom.

> +static struct cifs_rdma_response* get_receive_buffer(struct cifs_rdma_info
> *info)
> +{
> +	struct cifs_rdma_response *ret = NULL;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&info->receive_queue_lock, flags);
> +	if (!list_empty(&info->receive_queue)) {
> +		ret = list_first_entry(
> +			&info->receive_queue,
> +			struct cifs_rdma_response, list);
> +		list_del(&ret->list);
> +		info->count_receive_buffer--;
> +		info->count_get_receive_buffer++;
> +	}
> +	spin_unlock_irqrestore(&info->receive_queue_lock, flags);
> +
> +	return ret;
> +}
> +
> +static void put_receive_buffer(
> +	struct cifs_rdma_info *info, struct cifs_rdma_response *response)
> +{
> +	unsigned long flags;
> +
> +	ib_dma_unmap_single(info->id->device, response->sge.addr,
> +		response->sge.length, DMA_FROM_DEVICE);
> +
> +	spin_lock_irqsave(&info->receive_queue_lock, flags);
> +	list_add_tail(&response->list, &info->receive_queue);
> +	info->count_receive_buffer++;
> +	info->count_put_receive_buffer++;
> +	spin_unlock_irqrestore(&info->receive_queue_lock, flags);
> +}
> +
> +static int allocate_receive_buffers(struct cifs_rdma_info *info, int num_buf)
> +{
> +	int i;
> +	struct cifs_rdma_response *response;
> +
> +	INIT_LIST_HEAD(&info->receive_queue);
> +	spin_lock_init(&info->receive_queue_lock);
> +
> +	for (i=0; i<num_buf; i++) {
> +		response = mempool_alloc(info->response_mempool, GFP_KERNEL);
> +		if (!response)
> +			goto allocate_failed;
> +
> +		response->info = info;
> +		list_add_tail(&response->list, &info->receive_queue);
> +		info->count_receive_buffer++;
> +	}
> +
> +	return 0;
> +
> +allocate_failed:
> +	while (!list_empty(&info->receive_queue)) {
> +		response = list_first_entry(
> +			&info->receive_queue,
> +			struct cifs_rdma_response, list);
> +		list_del(&response->list);
> +		info->count_receive_buffer--;
> +
> +		mempool_free(response, info->response_mempool);
> +	}
> +	return -ENOMEM;
> +}
> +
> +static void destroy_receive_buffers(struct cifs_rdma_info *info)
> +{
> +	struct cifs_rdma_response *response;
> +	while ((response = get_receive_buffer(info)))
> +		mempool_free(response, info->response_mempool);
> +}
> +
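For what it's worth, the replenish-on-completion strategy I'm suggesting
can be modeled very roughly as below. This is a toy sketch, not kernel
code: the struct and function names are mine, and the posted++/posted--
pair stands in for handing the completed buffer to the SMB layer and
immediately re-posting it with ib_post_recv():

```c
#include <assert.h>

/* Toy model of a credit-sized receive pool (names are illustrative). */
struct recv_pool {
	unsigned int target;	/* ~ credits negotiated with the peer */
	unsigned int posted;	/* receives currently posted to the QP */
};

/* Fill the pool up to the negotiated credit count at connection setup. */
static void recv_pool_init(struct recv_pool *p, unsigned int credits)
{
	p->target = credits;
	p->posted = 0;
	while (p->posted < p->target)
		p->posted++;	/* stands in for posting one receive */
}

/* Receive completion: consume one buffer, then replenish right away, so
 * the posted depth tracks the credit target instead of a fixed
 * one-receive-per-send assumption. */
static void recv_completed(struct recv_pool *p)
{
	p->posted--;		/* buffer handed up for reassembly */
	p->posted++;		/* same buffer re-posted to the QP */
}
```

The point is simply that the posted depth is driven by the credit
negotiation and by completions, never by counting outstanding requests.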