From: Long Li
To: Tom Talpey, Steve French, linux-cifs@vger.kernel.org, samba-technical@lists.samba.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: RE: [[PATCH v1] 07/37] [CIFS] SMBD: Implement receive buffer for handling SMBD response
Date: Sat, 19 Aug 2017 23:41:08 +0000
References: <1501704648-20159-1-git-send-email-longli@exchange.microsoft.com> <1501704648-20159-8-git-send-email-longli@exchange.microsoft.com>
> -----Original Message-----
> From: Tom Talpey
> Sent: Monday, August 14, 2017 1:09 PM
> To: Long Li; Steve French; linux-cifs@vger.kernel.org;
> samba-technical@lists.samba.org; linux-kernel@vger.kernel.org;
> linux-rdma@vger.kernel.org
> Subject: RE: [[PATCH v1] 07/37] [CIFS] SMBD: Implement receive buffer for
> handling SMBD response
>
> > -----Original Message-----
> > From: linux-cifs-owner@vger.kernel.org
> > [mailto:linux-cifs-owner@vger.kernel.org] On Behalf Of Long Li
> > Sent: Wednesday, August 2, 2017 4:10 PM
> > To: Steve French; linux-cifs@vger.kernel.org;
> > samba-technical@lists.samba.org; linux-kernel@vger.kernel.org
> > Cc: Long Li
> > Subject: [[PATCH v1] 07/37] [CIFS] SMBD: Implement receive buffer for
> > handling SMBD response
> >
> > +/*
> > + * Receive buffer operations.
> > + * For each remote send, we need to post a receive. The receive buffers
> > + * are pre-allocated in advance.
> > + */
>
> This approach appears to have been derived from the NFS/RDMA one.
> The SMB protocol operates very differently! It is not a strict request/
> response protocol. Many operations can become asynchronous by the
> server choosing to make a STATUS_PENDING reply. A second reply then
> comes later. The SMB2_CANCEL operation normally has no reply at all.
> And callbacks for oplocks can occur at any time.

I think you misunderstood the receive buffers. They are posted so the
remote peer can post a send. The remote peer's receive credits are
calculated based on how many receive buffers have been posted. The code
doesn't assume that one post_send needs one corresponding post_recv. In
practice, receive buffers are posted as soon as possible to extend
receive credits to the remote peer.

>
> Even within a single request, many replies can be received.
> For example, an
> SMB2_READ response which exceeds your negotiated receive size of 8192.
> These will be fragmented by SMB Direct into a "train" of multiple messages,
> which will be logically reassembled by the receiver. Each of them will
> consume a credit.
>
> Thanks to SMB Direct crediting, the connection is not failing, but you are
> undoubtedly spending a lot of time and ping-ponging to re-post receives and
> allow the message trains to flow. And, because it's never one-to-one, there
> are also unneeded receives posted before and after such exchanges.
>
> You need to use SMB Direct crediting to post a more traffic-sensitive pool of
> receives, and simply manage its depth when posting client requests.
> As a start, I'd suggest simply choosing a constant number, approximately
> whatever credit value you actually negotiate with the peer. Then, just
> replenish (re-post) receive buffers as they are completed by the adapter.
> You can get more sophisticated about this strategy later.

The code behaves exactly the same way as you described. It uses a constant
to decide how many receive buffers to post. It's not very smart and can be
improved.

>
> Tom.
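The crediting behavior described above can be sketched in a small userspace model. This is illustrative only: the struct, field, and function names below are invented for the example and are not the kernel code. The idea is that the peer is granted one receive credit per posted receive buffer, and completed buffers are re-posted to a constant target depth so credits keep flowing.

```c
#include <assert.h>

/*
 * Hypothetical userspace model of SMB Direct receive crediting.
 * Names and fields here are invented for illustration; they do not
 * appear in the patch under discussion.
 */
struct credit_state {
	int target_depth;     /* constant pool depth, e.g. the negotiated credit value */
	int posted_receives;  /* receive buffers currently posted */
	int credits_granted;  /* receive credits extended to the remote peer */
};

static void credit_init(struct credit_state *s, int depth)
{
	s->target_depth = depth;
	s->posted_receives = depth;   /* pre-post the whole pool up front */
	s->credits_granted = depth;
}

/* A receive completed: the peer's send consumed one buffer and one credit. */
static void on_receive_complete(struct credit_state *s)
{
	s->posted_receives--;
	s->credits_granted--;
}

/* Re-post completed buffers and grant fresh credits up to the target depth. */
static int replenish_receives(struct credit_state *s)
{
	int reposted = s->target_depth - s->posted_receives;

	s->posted_receives = s->target_depth;
	s->credits_granted += reposted;
	return reposted;
}
```

In this model, "not one-to-one" falls out naturally: replenishment is driven only by how far the pool has drained, not by any pairing of sends to receives.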
>
> > +static struct cifs_rdma_response* get_receive_buffer(struct cifs_rdma_info *info)
> > +{
> > +	struct cifs_rdma_response *ret = NULL;
> > +	unsigned long flags;
> > +
> > +	spin_lock_irqsave(&info->receive_queue_lock, flags);
> > +	if (!list_empty(&info->receive_queue)) {
> > +		ret = list_first_entry(
> > +			&info->receive_queue,
> > +			struct cifs_rdma_response, list);
> > +		list_del(&ret->list);
> > +		info->count_receive_buffer--;
> > +		info->count_get_receive_buffer++;
> > +	}
> > +	spin_unlock_irqrestore(&info->receive_queue_lock, flags);
> > +
> > +	return ret;
> > +}
> > +
> > +static void put_receive_buffer(
> > +	struct cifs_rdma_info *info, struct cifs_rdma_response *response)
> > +{
> > +	unsigned long flags;
> > +
> > +	ib_dma_unmap_single(info->id->device, response->sge.addr,
> > +		response->sge.length, DMA_FROM_DEVICE);
> > +
> > +	spin_lock_irqsave(&info->receive_queue_lock, flags);
> > +	list_add_tail(&response->list, &info->receive_queue);
> > +	info->count_receive_buffer++;
> > +	info->count_put_receive_buffer++;
> > +	spin_unlock_irqrestore(&info->receive_queue_lock, flags);
> > +}
> > +
> > +static int allocate_receive_buffers(struct cifs_rdma_info *info, int num_buf)
> > +{
> > +	int i;
> > +	struct cifs_rdma_response *response;
> > +
> > +	INIT_LIST_HEAD(&info->receive_queue);
> > +	spin_lock_init(&info->receive_queue_lock);
> > +
> > +	for (i = 0; i < num_buf; i++) {
> > +		response = mempool_alloc(info->response_mempool, GFP_KERNEL);
> > +		if (!response)
> > +			goto allocate_failed;
> > +
> > +		response->info = info;
> > +		list_add_tail(&response->list, &info->receive_queue);
> > +		info->count_receive_buffer++;
> > +	}
> > +
> > +	return 0;
> > +
> > +allocate_failed:
> > +	while (!list_empty(&info->receive_queue)) {
> > +		response = list_first_entry(
> > +			&info->receive_queue,
> > +			struct cifs_rdma_response, list);
> > +		list_del(&response->list);
> > +		info->count_receive_buffer--;
> > +
> > +		mempool_free(response, info->response_mempool);
> > +	}
> > +
> > +	return -ENOMEM;
> > +}
> > +
> > +static void destroy_receive_buffers(struct cifs_rdma_info *info)
> > +{
> > +	struct cifs_rdma_response *response;
> > +
> > +	while ((response = get_receive_buffer(info)))
> > +		mempool_free(response, info->response_mempool);
> > +}
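The free-list pattern in the quoted patch can be modeled in userspace as follows. This is a simplified sketch only: the spinlock, DMA unmapping, mempool, and counters in the real code are replaced here with plain malloc/free and an invented singly linked list type, so the ordering and types differ from the kernel structures.

```c
#include <stdlib.h>

/*
 * Simplified userspace model of the patch's receive-buffer free list.
 * Locking, DMA mapping, and mempool details are omitted; the types are
 * illustrative stand-ins, not the kernel structures.
 */
struct response {
	struct response *next;
};

struct buffer_pool {
	struct response *head;
	int count_receive_buffer;
};

/* Take one buffer off the free list, or NULL if the pool is empty. */
static struct response *get_receive_buffer(struct buffer_pool *p)
{
	struct response *ret = p->head;

	if (ret) {
		p->head = ret->next;
		p->count_receive_buffer--;
	}
	return ret;
}

/* Return a buffer to the free list. */
static void put_receive_buffer(struct buffer_pool *p, struct response *r)
{
	r->next = p->head;
	p->head = r;
	p->count_receive_buffer++;
}

/* Pre-allocate the pool; unwind on failure, like the patch's error path. */
static int allocate_receive_buffers(struct buffer_pool *p, int num_buf)
{
	p->head = NULL;
	p->count_receive_buffer = 0;
	for (int i = 0; i < num_buf; i++) {
		struct response *r = malloc(sizeof(*r));

		if (!r) {
			while ((r = get_receive_buffer(p)))
				free(r);
			return -1;
		}
		put_receive_buffer(p, r);
	}
	return 0;
}

/* Drain and free the whole pool, mirroring destroy_receive_buffers(). */
static void destroy_receive_buffers(struct buffer_pool *p)
{
	struct response *r;

	while ((r = get_receive_buffer(p)))
		free(r);
}
```

Note how the destroy path reuses the get helper to drain the list, exactly the shape of the quoted `destroy_receive_buffers()`.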