Date: Wed, 5 Feb 2020 08:35:04 +0100
From: Dominique Martinet
To: Sergey Alirzaev
Cc: v9fs-developer@lists.sourceforge.net, Eric Van Hensbergen,
    Latchesar Ionkov, "David S. Miller", Jakub Kicinski,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] 9pnet: allow making incomplete read requests
Message-ID: <20200205073504.GA16626@nautica>
References: <20200205003457.24340-1-l29ah@cock.li>
In-Reply-To: <20200205003457.24340-1-l29ah@cock.li>

Sergey Alirzaev wrote on Wed, Feb 05, 2020:
> A user doesn't necessarily want to wait for all the requested data to
> be available, since the waiting time is unbounded.

I'm not sure I agree with the argument there: the waiting time is
unbounded for a single request as well.

What's your use case? I think it would be better to describe what you
really do with O_NONBLOCK that requires this, and not just false
theoretical arguments.

> Signed-off-by: Sergey Alirzaev

Code-wise this looks mostly good; just a nitpick, inline below, about
keeping the total variable in p9_client_read_once.

> ---
>  include/net/9p/client.h |   2 +
>  net/9p/client.c         | 133 ++++++++++++++++++++++------------------
>  2 files changed, 76 insertions(+), 59 deletions(-)
>
> --- a/net/9p/client.c
> +++ b/net/9p/client.c
> @@ -1548,83 +1548,98 @@ EXPORT_SYMBOL(p9_client_unlinkat);
>
>  int
>  p9_client_read(struct p9_fid *fid, u64 offset, struct iov_iter *to, int *err)
> +{
> +        int total = 0;
> +        *err = 0;
> +
> +        while (iov_iter_count(to)) {
> +                int count;
> +
> +                count = p9_client_read_once(fid, offset, to, err);
> +                if (!count || *err)
> +                        break;
> +                offset += count;
> +                total += count;
> +        }
> +        return total;
> +}
> +EXPORT_SYMBOL(p9_client_read);
> +
> +int
> +p9_client_read_once(struct p9_fid *fid, u64 offset, struct iov_iter *to,
> +                int *err)
>  {
>          struct p9_client *clnt = fid->clnt;
>          struct p9_req_t *req;
>          int total = 0;

total only makes sense in an iterating p9_client_read; I think it makes
the code harder to read here (you can basically use count or 0 directly
now). A rough sketch of what I mean is at the end of this mail.

> -        *err = 0;
> +        int count = iov_iter_count(to);
> +        int rsize, non_zc = 0;
> +        char *dataptr;
>
> +        *err = 0;
>          p9_debug(P9_DEBUG_9P, ">>> TREAD fid %d offset %llu %d\n",
>                   fid->fid, (unsigned long long) offset, (int)iov_iter_count(to));
>
> -        while (iov_iter_count(to)) {
> -                int count = iov_iter_count(to);
> -                int rsize, non_zc = 0;
> -                char *dataptr;
> +        rsize = fid->iounit;
> +        if (!rsize || rsize > clnt->msize - P9_IOHDRSZ)
> +                rsize = clnt->msize - P9_IOHDRSZ;
>
> -                rsize = fid->iounit;
> -                if (!rsize || rsize > clnt->msize-P9_IOHDRSZ)
> -                        rsize = clnt->msize - P9_IOHDRSZ;
> +        if (count < rsize)
> +                rsize = count;
>
> -                if (count < rsize)
> -                        rsize = count;
> +        /* Don't bother zerocopy for small IO (< 1024) */
> +        if (clnt->trans_mod->zc_request && rsize > 1024) {
> +                /* response header len is 11
> +                 * PDU Header(7) + IO Size (4)
> +                 */
> +                req = p9_client_zc_rpc(clnt, P9_TREAD, to, NULL, rsize,
> +                                       0, 11, "dqd", fid->fid,
> +                                       offset, rsize);
> +        } else {
> +                non_zc = 1;
> +                req = p9_client_rpc(clnt, P9_TREAD, "dqd", fid->fid, offset,
> +                                    rsize);
> +        }
> +        if (IS_ERR(req)) {
> +                *err = PTR_ERR(req);
> +                return total;
> +        }
>
> -                /* Don't bother zerocopy for small IO (< 1024) */
> -                if (clnt->trans_mod->zc_request && rsize > 1024) {
> -                        /*
> -                         * response header len is 11
> -                         * PDU Header(7) + IO Size (4)
> -                         */
> -                        req = p9_client_zc_rpc(clnt, P9_TREAD, to, NULL, rsize,
> -                                               0, 11, "dqd", fid->fid,
> -                                               offset, rsize);
> -                } else {
> -                        non_zc = 1;
> -                        req = p9_client_rpc(clnt, P9_TREAD, "dqd", fid->fid, offset,
> -                                            rsize);
> -                }
> -                if (IS_ERR(req)) {
> -                        *err = PTR_ERR(req);
> -                        break;
> -                }
> +        *err = p9pdu_readf(&req->rc, clnt->proto_version,
> +                           "D", &count, &dataptr);
> +        if (*err) {
> +                trace_9p_protocol_dump(clnt, &req->rc);
> +                p9_tag_remove(clnt, req);
> +                return total;
> +        }
> +        if (rsize < count) {
> +                pr_err("bogus RREAD count (%d > %d)\n", count, rsize);
> +                count = rsize;
> +        }
>
> -                *err = p9pdu_readf(&req->rc, clnt->proto_version,
> -                                   "D", &count, &dataptr);
> -                if (*err) {
> -                        trace_9p_protocol_dump(clnt, &req->rc);
> -                        p9_tag_remove(clnt, req);
> -                        break;
> -                }
> -                if (rsize < count) {
> -                        pr_err("bogus RREAD count (%d > %d)\n", count, rsize);
> -                        count = rsize;
> -                }
> +        p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", count);
> +        if (!count) {
> +                p9_tag_remove(clnt, req);
> +                return total;
> +        }
>
> -                p9_debug(P9_DEBUG_9P, "<<< RREAD count %d\n", count);
> -                if (!count) {
> -                        p9_tag_remove(clnt, req);
> -                        break;
> -                }
> +        if (non_zc) {
> +                int n = copy_to_iter(dataptr, count, to);
>
> -                if (non_zc) {
> -                        int n = copy_to_iter(dataptr, count, to);
> -                        total += n;
> -                        offset += n;
> -                        if (n != count) {
> -                                *err = -EFAULT;
> -                                p9_tag_remove(clnt, req);
> -                                break;
> -                        }
> -                } else {
> -                        iov_iter_advance(to, count);
> -                        total += count;
> -                        offset += count;
> +                total += n;
> +                if (n != count) {
> +                        *err = -EFAULT;
> +                        p9_tag_remove(clnt, req);
> +                        return total;
>                  }
> -                p9_tag_remove(clnt, req);
> +        } else {
> +                iov_iter_advance(to, count);
> +                total += count;
>          }
> +        p9_tag_remove(clnt, req);
>          return total;
>  }
> -EXPORT_SYMBOL(p9_client_read);
> +EXPORT_SYMBOL(p9_client_read_once);
>
>  int
>  p9_client_write(struct p9_fid *fid, u64 offset, struct iov_iter *from, int *err)

-- 
Dominique