Subject: [PATCH v1] xprtrdma: Clean up rpcrdma_prepare_readch()
From: Chuck Lever
To: anna.schumaker@netapp.com
Cc: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Date: Fri, 05 Feb 2021 11:31:34 -0500
Message-ID: <161254265485.1728.15776929905868209914.stgit@manet.1015granger.net>

Since commit 9ed5af268e88 ("SUNRPC: Clean up the handling of page
padding in rpc_prepare_reply_pages()") [Dec 2020], the NFS client
passes payload data to the transport with the padding in xdr->pages
instead of in the send buffer's tail kvec. There's no need for the
extra logic to advance the base of the tail kvec because the upper
layer no longer places XDR padding there.
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/rpc_rdma.c |   27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

Hi Anna-

Found one more clean-up related to the series I sent yesterday.

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 1c3e377272e0..292f066d006e 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -628,9 +628,8 @@ static bool rpcrdma_prepare_pagelist(struct rpcrdma_req *req,
 	return false;
 }
 
-/* The tail iovec may include an XDR pad for the page list,
- * as well as additional content, and may not reside in the
- * same page as the head iovec.
+/* The tail iovec might not reside in the same page as the
+ * head iovec.
  */
 static bool rpcrdma_prepare_tail_iov(struct rpcrdma_req *req,
 				     struct xdr_buf *xdr,
@@ -748,27 +747,19 @@ static bool rpcrdma_prepare_readch(struct rpcrdma_xprt *r_xprt,
 				   struct rpcrdma_req *req,
 				   struct xdr_buf *xdr)
 {
+	struct kvec *tail = &xdr->tail[0];
+
 	if (!rpcrdma_prepare_head_iov(r_xprt, req, xdr->head[0].iov_len))
 		return false;
 
-	/* If there is a Read chunk, the page list is being handled
+	/* If there is a Read chunk, the page list is handled
 	 * via explicit RDMA, and thus is skipped here.
 	 */
 
-	/* Do not include the tail if it is only an XDR pad */
-	if (xdr->tail[0].iov_len > 3) {
-		unsigned int page_base, len;
-
-		/* If the content in the page list is an odd length,
-		 * xdr_write_pages() adds a pad at the beginning of
-		 * the tail iovec. Force the tail's non-pad content to
-		 * land at the next XDR position in the Send message.
-		 */
-		page_base = offset_in_page(xdr->tail[0].iov_base);
-		len = xdr->tail[0].iov_len;
-		page_base += len & 3;
-		len -= len & 3;
-		if (!rpcrdma_prepare_tail_iov(req, xdr, page_base, len))
+	if (tail->iov_len) {
+		if (!rpcrdma_prepare_tail_iov(req, xdr,
+					      offset_in_page(tail->iov_base),
+					      tail->iov_len))
 			return false;
 		kref_get(&req->rl_kref);
 	}
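
P.S. For readers less familiar with the XDR padding rules, here is a
minimal standalone sketch (my illustration only, not kernel code; the
lengths and the main() harness are invented for the example) of the
arithmetic the removed lines performed. XDR rounds variable-length
data up to a 4-byte boundary, and before commit 9ed5af268e88 that pad
landed at the front of the tail kvec, so the old code recovered the
pad size with "len & 3" and stepped over it:

/* Standalone illustration, NOT kernel code: mimics the pad-skip
 * arithmetic that this patch removes from rpcrdma_prepare_readch().
 * All values below are made up for the example.
 */
#include <stdio.h>

int main(void)
{
	unsigned int page_len = 5;	/* odd-length payload in xdr->pages */
	/* XDR rounds variable-length data up to a 4-byte boundary: */
	unsigned int pad = (4 - (page_len & 3)) & 3;	/* 3 pad bytes here */

	/* Old layout: tail kvec = [pad][XDR-aligned tail content] */
	unsigned int content = 8;		/* always a multiple of 4 */
	unsigned int len = pad + content;	/* tail[0].iov_len */
	unsigned int page_base = 0;		/* tail's offset in its page */

	/* The removed logic: since the content length is 4-aligned,
	 * "len & 3" equals the pad size, so bump the base past the
	 * pad and shrink the length to cover only real content.
	 */
	page_base += len & 3;
	len -= len & 3;

	printf("Send tail content at offset %u, length %u\n", page_base, len);
	/* After 9ed5af268e88 the pad stays in xdr->pages, so the tail
	 * is either empty or pure content and can be sent unmodified,
	 * which is why the "> 3" test and the masking can go away.
	 */
	return 0;
}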