From: Mel Gorman
To: Andrew Morton
Cc: Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig, Alexander Duyck, Vlastimil Babka, Matthew Wilcox, Ilias Apalodimas, LKML, Linux-Net, Linux-MM, Linux-NFS, Mel Gorman
Subject: [PATCH 6/9] SUNRPC: Set rq_page_end differently
Date: Thu, 25 Mar 2021 11:42:25 +0000
Message-Id: <20210325114228.27719-7-mgorman@techsingularity.net>
In-Reply-To: <20210325114228.27719-1-mgorman@techsingularity.net>
References: <20210325114228.27719-1-mgorman@techsingularity.net>
X-Mailing-List: linux-nfs@vger.kernel.org

From: Chuck Lever

Patch series "SUNRPC consumer for the bulk page allocator"

This patch
set and the measurements below are based on yesterday's bulk allocator
series:

  git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git mm-bulk-rebase-v5r9

The patches change SUNRPC to invoke the array-based bulk allocator
instead of alloc_page().

The micro-benchmark results are promising. I ran a mixture of 256KB
reads and writes over NFSv3. The server's kernel is built with KASAN
enabled, so the comparison is exaggerated but I believe it is still
valid.

I instrumented svc_recv() to measure the latency of each call to
svc_alloc_arg() and report it via a trace point. The following results
are averages across the trace events:

  Single page: 25.007 us per call over 532,571 calls
  Bulk list:    6.258 us per call over 517,034 calls
  Bulk array:   4.590 us per call over 517,442 calls

This patch (of 2):

Refactor: I'm about to use the loop variable @i for something else. As
far as the "i++" is concerned, that is a post-increment. The value of
@i is not used subsequently, so the increment operator is unnecessary
and can be removed.

Also note that nfsd_read_actor() was renamed nfsd_splice_actor() by
commit cf8208d0eabd ("sendfile: convert nfsd to
splice_direct_to_actor()").
Signed-off-by: Chuck Lever
Signed-off-by: Mel Gorman
---
 net/sunrpc/svc_xprt.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 3cdd71a8df1e..609bda97d4ae 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -642,7 +642,7 @@ static void svc_check_conn_limits(struct svc_serv *serv)
 static int svc_alloc_arg(struct svc_rqst *rqstp)
 {
 	struct svc_serv *serv = rqstp->rq_server;
-	struct xdr_buf *arg;
+	struct xdr_buf *arg = &rqstp->rq_arg;
 	int pages;
 	int i;
 
@@ -667,11 +667,10 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
 		}
 		rqstp->rq_pages[i] = p;
 	}
-	rqstp->rq_page_end = &rqstp->rq_pages[i];
-	rqstp->rq_pages[i++] = NULL; /* this might be seen in nfs_read_actor */
+	rqstp->rq_page_end = &rqstp->rq_pages[pages];
+	rqstp->rq_pages[pages] = NULL; /* this might be seen in nfsd_splice_actor() */
 
 	/* Make arg->head point to first page and arg->pages point to rest */
-	arg = &rqstp->rq_arg;
 	arg->head[0].iov_base = page_address(rqstp->rq_pages[0]);
 	arg->head[0].iov_len = PAGE_SIZE;
 	arg->pages = rqstp->rq_pages + 1;
-- 
2.26.2