Subject: [PATCH v1 22/42] NFSD: Reduce svc_rqst::rq_pages churn during READDIR operations
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Date: Mon, 01 Mar 2021 10:17:31 -0500
Message-ID: <161461185115.8508.17657260621552734607.stgit@klimt.1015granger.net>
In-Reply-To: <161461145466.8508.13379815439337754427.stgit@klimt.1015granger.net>
References: <161461145466.8508.13379815439337754427.stgit@klimt.1015granger.net>
User-Agent: StGit/1.0-5-g755c

During NFSv2 and NFSv3 READDIR/PLUS operations, NFSD advances rq_next_page to the full size of the client-requested buffer, then releases all those pages at the end of the request. The next request to use that nfsd thread has to refill the pages.
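
(For illustration: the end-of-request cleanup in the RPC server looks roughly like the loop below. This is a simplified sketch rather than a quote of the sunrpc code, and the helper name is invented here, but rq_respages, rq_next_page, put_page(), and alloc_page() are the fields and calls being discussed.)

#include <linux/sunrpc/svc.h>	/* struct svc_rqst (sketch only) */
#include <linux/mm.h>		/* put_page() */

/*
 * Illustrative sketch only: at the end of a request, every reply page
 * between rq_respages and rq_next_page is dropped. Each page released
 * here must be replaced by a fresh alloc_page() before the next
 * request can reuse this svc_rqst.
 */
static void example_release_reply_pages(struct svc_rqst *rqstp)
{
	while (rqstp->rq_next_page != rqstp->rq_respages) {
		struct page **pp = --rqstp->rq_next_page;

		if (*pp) {
			put_page(*pp);	/* one put_page() per reply page */
			*pp = NULL;	/* forces a later alloc_page() */
		}
	}
}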
NFSD does this even when the dirlist in the reply is small. With NFSv3 clients that send READDIR operations with large buffer sizes, that can be 256 put_page/alloc_page pairs per READDIR request, even though those pages often remain unused.

We can save some work by not releasing dirlist buffer pages that were not used to form the READDIR Reply. I've left the NFSv2 code alone since there are never more than three pages involved in an NFSv2 READDIR Reply.

Eventually we should nail down why these pages need to be released at all in order to avoid allocating and releasing pages unnecessarily.

Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs3proc.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
index bc64e95a168d..17715a6c7a40 100644
--- a/fs/nfsd/nfs3proc.c
+++ b/fs/nfsd/nfs3proc.c
@@ -493,6 +493,9 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
 	memcpy(resp->verf, argp->verf, 8);
 	nfs3svc_encode_cookie3(resp, offset);
 
+	/* Recycle only pages that were part of the reply */
+	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
+
 	return rpc_success;
 }
 
@@ -533,6 +536,9 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
 	memcpy(resp->verf, argp->verf, 8);
 	nfs3svc_encode_cookie3(resp, offset);
 
+	/* Recycle only pages that were part of the reply */
+	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
+
 out:
 	return rpc_success;
 }
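
A note on the boundary used above: the new assignments rely on the xdr_stream's page_ptr pointing at the last reply page the encoder actually wrote into, so everything past it in rq_pages was never consumed by the Reply. The picture below is an illustration under that assumption and is not part of the patch.

/*
 * Illustration only: rq_pages after encoding a small dirlist.
 *
 *   rq_respages     resp->xdr.page_ptr      old rq_next_page
 *        |                  |                      |
 *        v                  v                      v
 *   [resp head][dirlist pg][dirlist pg][unused] ... [unused]
 *
 * With rq_next_page = resp->xdr.page_ptr + 1, the end-of-request
 * release stops before the unused pages, so they stay attached to
 * rq_pages and can be reused by the next request on this thread.
 */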